Delayed versus early umbilical cord clamping for near-term infants born to preeclamptic mothers; a randomized controlled trial
Ahmed Rashwan1,
Ashraf Eldaly1,
Ahmed El-Harty1,
Moutaz Elsherbini1,
Mazen Abdel-Rasheed2 (ORCID: 0000-0001-5575-1782) &
Marwa M. Eid1
This study aims to assess delayed versus early umbilical cord clamping in preeclamptic mothers undergoing scheduled caesarean delivery, with respect to maternal intra-operative blood loss and neonatal outcomes.
A clinical trial was conducted on 62 near-term preeclamptic mothers (36-38+6 weeks) who were planned for caesarean delivery. They were randomly assigned into two groups. The first was the early cord clamping (ECC) group (n = 31), in which the umbilical cord was clamped within 15 seconds, while the second was the delayed cord clamping (DCC) group (n = 31), in which the umbilical cord was clamped at 60 seconds. All patients were assessed for intra-operative blood loss and the incidence of primary postpartum haemorrhage (PPH). Additionally, all neonates were assessed for APGAR scores, the need for neonatal intensive care unit (NICU) admission due to jaundice, and blood tests (haemoglobin, haematocrit, and serum bilirubin).
There was no significant difference between the two groups regarding maternal estimated blood loss (P=0.673), rates of PPH (P=0.1), post-delivery haemoglobin (P=0.154), or haematocrit values (P=0.092). Neonatal outcomes likewise showed no significant difference regarding APGAR scores at the first minute (P=1) and after 5 minutes (P=0.114), day 1 serum bilirubin (P=0.561), day 3 serum bilirubin (P=0.676), or the rate of NICU admission (P=0.671). However, haemoglobin and haematocrit values were significantly higher in the DCC group than in the ECC group (P<0.001).
There is no significant difference between DCC and ECC regarding maternal blood loss. However, DCC has the advantage of significantly higher neonatal haemoglobin.
It was first registered at ClinicalTrials.gov on 10/12/2019 with registration number NCT04193345.
Delayed umbilical cord clamping has shown substantial health advantages for both preterm and term infants, as demonstrated in several randomized controlled trials and meta-analyses [1, 2]. In term infants, delayed cord clamping (DCC) was found to increase neonatal haemoglobin levels and ferritin stores out to 4 months, with higher myelin content out to 12 months [3]. DCC in preterm infants also showed many significant benefits, including establishing better red blood cell volume, decreasing the need for blood transfusion, improving the transitional circulation, and lowering the incidence of intraventricular haemorrhage and necrotizing enterocolitis [4]. As a result, the American College of Obstetricians and Gynaecologists (ACOG) recommends DCC for at least 30–60 seconds after birth in both preterm and term newborns [5].
In patients undergoing caesarean section, the average blood loss is at least double that of vaginal deliveries [6, 7]. This blood loss may increase with the delay in uterine incision closure entailed by DCC [8,9,10]. However, a systematic review of DCC at term found no significant differences in maternal blood loss, although the included studies were done on low-risk patients expected to deliver vaginally [1].
In DCC, there is an increase of about 20-30% in infants' blood volume with a 50% increase in red blood cell volume [11]. In the latest Cochrane database review on term infants, it has been found that there were no differences in the infants' outcomes between early cord clamping (ECC) and DCC regarding neonatal morbidities such as neonatal intensive care unit (NICU) admission, APGAR scores <7 at 5 minutes, or clinical jaundice [1].
Preeclampsia complicates around 10-15% of all pregnancies and is usually associated with several foetal and neonatal complications related to prematurity and uteroplacental insufficiency, which compromise foetal blood flow [12]. The risks of polycythaemia and thrombocytopenia are higher in neonates born to mothers with hypertensive pregnancy disorders than in the general population [13]. Additionally, changes that occur in a normal, uncomplicated pregnancy, including hyperlipidaemia, neutrophilic leucocytosis, and hypofibrinolytic changes, were found to be enhanced in preeclampsia and, together with the presence of placental abnormalities, result in both foetal and neonatal complications [14, 15].
Newborns of preeclamptic mothers are at risk of many complications; however, no study has thoroughly investigated the effect of DCC on maternal blood loss during caesarean delivery and on neonatal outcomes in these pregnancies. Therefore, this was the focus of our study.
Following the CONSORT guidelines, a randomized clinical trial was conducted in Kasr El-Ainy Hospital (Obstetrics and Gynaecology Department, Faculty of Medicine, Cairo University) from January 2020 to May 2021 after approval of the Medical Ethical Committee. It was first registered at ClinicalTrials.gov on 10/12/2019 with registration number NCT04193345.
The study included 62 pregnant women diagnosed with preeclampsia at near-term (36-38+6 weeks, i.e., late-preterm and early-term). All study cases had been assigned for lower segment caesarean section (LSCS) under spinal anaesthesia. According to ACOG, preeclampsia was diagnosed as new-onset hypertension after 20 weeks of gestation accompanied by either new-onset proteinuria or new onset of any of the following: thrombocytopenia, renal insufficiency, impaired liver function, pulmonary oedema, or headache unresponsive to medication [16].
Inclusion criteria were maternal age 20-40 years, gestational age ≥36 weeks, and a singleton living healthy foetus. Women who had intrapartum surgical complications such as uterine artery injury or lower segment extension, intrauterine foetal death (IUFD), or medical disorders such as severe anaemia or diabetes mellitus were excluded. Women with abnormal placentation, placental abruption, liquor abnormalities, or anomalous foetuses were also excluded.
Informed consent was obtained from all patients after explaining the aim of the study. For all participants, full history was taken, followed by a complete physical examination and routine obstetric ultrasound to confirm the eligibility of the current pregnancy to participate in the study, as well as to confirm the gestational age. The routine preoperative laboratory tests were performed, including complete blood count, liver and kidney function tests, prothrombin time and prothrombin concentration.
On the day of the scheduled caesarean delivery, participants were randomly assigned using computer-generated random numbers into two equal groups; the ECC group (n= 31) in whom the umbilical cord was clamped within 15 seconds and the DCC group (n= 31) in whom the umbilical cord was clamped at 60 seconds. Caesarean sections were done under spinal anaesthesia by an experienced obstetrician, while recording the time between the delivery and cord clamping was the responsibility of a research staff member who attended all deliveries. For the DCC group, the research staff member recorded 60 seconds before asking the obstetrician to clamp the umbilical cord. During this period, their neonates were placed on the sterile drapes on the mother's legs at the level of the placenta until cord clamping was performed.
The attending neonatologist assessed all neonates in both groups for APGAR scores, jaundice, and the need for neonatal ICU admission. Neonatal haemoglobin and haematocrit were measured 4 hours after delivery and then repeated after 24 hours. Serum bilirubin was measured 12 hours after delivery and repeated on day 3 for follow-up.
The number of operative towels used and the blood volume in the suction unit were recorded. The maternal complete blood count (CBC) test was repeated after 12 hours. All mothers were observed for primary postpartum haemorrhage (PPH) and the need for blood transfusion for the first 24 hours. The estimated blood loss (EBL) was calculated by the following formula:
$$\mathrm{EBL} = \mathrm{EBV} \times \frac{\text{Preoperative haematocrit} - \text{Postoperative haematocrit}}{\text{Preoperative haematocrit}},$$
where EBV is the estimated blood volume of the patient in mL = weight in kg × 85 [17].
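To make the formula concrete, here is a minimal sketch in Python; the weight and haematocrit values are illustrative only, not trial data.

```python
# Illustrative EBL computation (hypothetical numbers, not trial data)
def estimated_blood_loss(weight_kg, pre_hct, post_hct):
    ebv = 85 * weight_kg                        # estimated blood volume in mL
    return ebv * (pre_hct - post_hct) / pre_hct

print(round(estimated_blood_loss(80, 0.36, 0.32), 1))   # ~755.6 mL
```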
The primary outcome was comparing the effect of DCC versus the ECC on maternal intraoperative blood loss, while the secondary outcomes were comparing both groups regarding the neonatal outcomes and the incidence of postpartum haemorrhage.
A sample size of 28 per group achieves 70% power to detect a difference of 0.04 between the null hypothesis that both group means are 0.61 and the alternative hypothesis that the mean of group 2 is 0.57, with estimated group standard deviations of 0.05 and 0.07 and a significance level (alpha) of 0.05, using a two-sided two-sample t-test [18]. The sample size was increased by 10% to 31 per group to allow for dropouts.
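For reference, the stated power can be approximated with statsmodels; this is a sketch that assumes a pooled standard deviation, since the basic two-sample power routine does not accept the two unequal SDs directly.

```python
# Approximate check of the reported power calculation (pooled-SD assumption)
from statsmodels.stats.power import TTestIndPower

pooled_sd = ((0.05**2 + 0.07**2) / 2) ** 0.5    # ~0.061
effect_size = (0.61 - 0.57) / pooled_sd         # Cohen's d ~ 0.66
power = TTestIndPower().power(effect_size=effect_size, nobs1=28,
                              alpha=0.05, alternative='two-sided')
print(round(power, 2))                          # ~0.68, close to the stated 70%
```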
Data were coded and entered using the statistical package for the Social Sciences (SPSS) version 26 (IBM Corp., Armonk, NY, USA). Data were summarized using mean and standard deviation for continuous quantitative variables; and frequencies (number of cases) and relative frequencies (percentages) for categorical variables. The independent samples t-test was used to compare groups regarding the numerical data. For comparing categorical data, a Chi-square test was performed, but Fisher's exact test was used instead when the expected frequency was less than 5. P values less than 0.05 were considered statistically significant.
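The test-selection rule can be expressed directly in code; the sketch below uses the NICU admission counts implied by the rates reported later (2/31 versus 4/31) purely as an example.

```python
# Chi-square test with fallback to Fisher's exact when any expected count < 5
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[2, 29],    # ECC: NICU admissions vs not admitted
                  [4, 27]])   # DCC
chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    _, p = fisher_exact(table)    # small expected counts -> Fisher's exact
print(round(p, 3))                # ~0.67 for these counts
```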
Sixty-two pregnant women diagnosed with preeclampsia and candidates for LSCS under spinal anaesthesia were finally included and followed up in this study. The flow of patients is summarised in Fig. 1. As previously defined, the patients were randomly and equally assigned into two groups: ECC (n = 31) and DCC (n = 31). Patients' baseline clinical characteristics and demographic data did not show any significant difference between the two groups (Table 1).
Flow diagram of patients in the study
Table 1 Maternal demographic data and baseline clinical characteristics
As shown in Table 2, there was no significant difference between the two groups in maternal estimated blood loss at the time of caesarean section (P=0.673) or in the rates of PPH during the first 24 hours (P=0.1). Moreover, biochemical examination revealed no significant difference between the groups regarding post-delivery haemoglobin (P=0.154) and post-delivery haematocrit (P=0.092).
Table 2 Maternal outcomes in Caesarean section
When neonatal outcomes were analyzed, haemoglobin and haematocrit values were significantly higher in the DCC group than in the ECC group during the first day (18.89 ± 1.55 versus 17.01 ± 1.70, P<0.001 for haemoglobin; 56.05 ± 3.91 versus 50.44 ± 4.34, P<0.001 for haematocrit) as well as during the second day (17.90 ± 1.08 versus 16.29 ± 1.41, P<0.001 for haemoglobin; 53.09 ± 3.64 versus 47.85 ± 3.77, P<0.001 for haematocrit). In contrast, there was no significant difference between the groups regarding APGAR scores at the first minute (P=1) and after 5 minutes (P=0.114), day 1 serum bilirubin (P=0.561), or day 3 serum bilirubin (P=0.676). Additionally, there was no significant difference between the two groups regarding the rate of NICU admission (6.45% in the ECC group versus 12.9% in the DCC group, P=0.671), as shown in Table 3.
Table 3 Neonatal clinical characteristics
In our trial, the maternal and neonatal outcomes of DCC for 60 seconds during caesarean delivery for near-term pregnancies (36-38+6 weeks) complicated with preeclampsia were studied. To our knowledge, our study is the first randomized trial evaluating DCC versus ECC in caesarean delivery in pregnancies complicated by preeclampsia.
Traditionally, there were concerns that DCC could increase maternal blood loss, since the delayed wound closure prolongs operative time and may predispose to uterine atony [8,9,10]. Consequently, the benefit of DCC during caesarean delivery was questioned, and there were some barriers to its application [19]. However, recent studies revealed no significant increase in morbidity, in terms of EBL or the post-caesarean drop in maternal haemoglobin and haematocrit, between the ECC and DCC techniques [20]. In our study, we found that DCC in near-term pregnancies (36-38+6 weeks) complicated with preeclampsia did not result in increased maternal blood loss compared to ECC.
Similar to our study, a study of 39 women scheduled for caesarean delivery who had DCC for 90 to 120 seconds, compared with 112 historical controls who had immediate cord clamping, found no difference in maternal postoperative/preoperative haemoglobin levels [21]. Ruangkit et al. paradoxically reported a higher mean EBL in the ECC group, which is most likely not related to the umbilical cord management technique but rather to the significantly higher rate of caesarean delivery in the ECC group compared to the DCC group in their study [22].
A particular strength of this study was the objective assessment of blood loss by means of the preoperative versus postoperative change in haemoglobin and haematocrit levels. This agrees with Purisch et al., who reported no significant difference in maternal blood loss using objective assessment methods, such as postoperative haemoglobin levels; they also found no substantial increase in uterotonic therapy or blood transfusion rates with caesarean delivery [23]. On the other hand, Rhoades et al. compared outcomes between ECC and DCC at term in a group of 196 women delivered by caesarean section and found increased postpartum haemorrhage (>1000 cc) in caesarean deliveries [24]. However, this is most probably related to the subjective methods used to assess blood loss.
Our study found a significant increase in haemoglobin levels in the neonates of the DCC group at 4 and then at 24 hours of life, with no significant increase in jaundice. Previous studies have assumed that the disadvantage of ECC is losing the benefits of DCC. Purisch et al. reported that a significant increase in neonatal haemoglobin levels at 24 to 72 hours of life could be achieved by the DCC technique in scheduled caesarean deliveries [23]. In addition, Mercer et al. reported an increase in neonatal haematocrit and haemoglobin at 24 to 48 hours with no increase in symptomatic polycythaemia, jaundice, or other adverse effects [25]. Our study also agrees with McDonald et al. (mean difference, 1.5 g/dL [95% CI, 1.21-1.78]), who reported a significant increase in neonatal haemoglobin levels in the DCC group. In addition to the improved haematological status of neonates with DCC, recent studies reported no difference between ECC and DCC regarding neonatal jaundice and phototherapy requirements [26, 27].
To the best of our knowledge, our study is the first to report maternal and neonatal outcomes with different umbilical cord management techniques in preeclamptic patients. Another strength of our study was the use of objective, accurate methods of assessing blood loss, unlike several studies that used subjective methods to assess blood loss, such as postpartum haemorrhage rates [28]. Limitations of this study included the small sample size, which was due to patient selection restricted to near-term preeclampsia patients with singleton pregnancies scheduled for elective caesarean section. Therefore, our results may not generalize to other situations, such as emergency preterm delivery or vaginal delivery; these cases warrant broader clinical trials to assess the effect of DCC as well. Another limitation is that longer durations of DCC, as recommended in previous studies, were not assessed in our trial [29]. Neonatal benefits have been demonstrated with DCC in term neonates after 3 minutes and in preterm neonates after 30-180 seconds [30].
The American College of Obstetricians and Gynaecologists recommends the universal application of delayed umbilical cord clamping for at least 30–60 seconds after birth [5]. This has made DCC more widely used by obstetricians, but high-risk patients still need more randomized controlled trials to ensure safe maternal and neonatal outcomes. This study provides a basis for the safe use of DCC in preeclamptic mothers without unfavourable maternal outcomes.
Among near-term (36-38+6 weeks) singleton pregnant preeclamptic mothers scheduled for caesarean delivery, there was no significant difference in maternal blood loss between ECC and DCC groups. However, neonatal haemoglobin and haematocrit were significantly higher with delayed umbilical cord clamping during the first and second days after delivery.
The data that support the findings of this study are available from Kasr El-Ainy Hospital, but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of Kasr El-Ainy Hospital.
McDonald SJ, Middleton P, Dowswell T, Morris PS. Effect of timing of umbilical cord clamping of term infants on maternal and neonatal outcomes. Evidence-Based Child Health Cochrane Rev J. 2014;9(2):303–97.
Rabe H, Gyte GM, Díaz-Rossello JL, Duley L. Effect of timing of umbilical cord clamping and other strategies to influence placental transfusion at preterm birth on maternal and infant outcomes. Cochrane Database Syst Rev. 2019;9(9):CD003248.
Mercer JS, Erickson-Owens DA, Deoni SCL, Dean DC, Collins J, Parker AB, et al. Effects of delayed cord clamping on 4-month ferritin levels, brain myelin content, and neurodevelopment: a randomized controlled trial. J Pediatr. 2018;203:266–272.e2.
Nagano N, Saito M, Sugiura T, Miyahara F, Namba F, Ota E. Benefits of umbilical cord milking versus delayed cord clamping on neonatal outcomes in preterm infants: A systematic review and meta-analysis. PLoS One. 2018;13(8):e0201528.
American College of Obstetricians and Gynaecologists (ACOG). Delayed Umbilical Cord Clamping After Birth: ACOG Committee Opinion Summary, Number 814. Obstet Gynecol. 2020 Dec;136(6):1238–9.
Cunningham FG, Leveno KJ, Bloom SL, Dashe JS, Hoffman BL, Casey BM, et al. Cesarean delivery and peripartum hysterectomy. In: Williams Obstetrics [Internet]. 25th ed. New York: McGraw-Hill Education; 2018. Available from: accessmedicine.mhmedical.com/content.aspx?aid=1160776434.
Cunningham FG, Leveno KJ, Bloom SL, Dashe JS, Hoffman BL, Casey BM, et al. Obstetrical hemorrhage. In: Williams Obstetrics [Internet]. 25th ed. New York: McGraw-Hill Education; 2018. Available from: accessmedicine.mhmedical.com/content.aspx?aid=1160784056.
Doherty DA, Magann EF, Chauhan SP, O'Boyle AL, Busch JM, Morrison JC. Factors affecting caesarean operative time and the effect of operative time on pregnancy outcomes. Aust N Z J Obstet Gynaecol. 2008;48(3):286–91.
Rottenstreich M, Sela HY, Shen O, Michaelson-Cohen R, Samueloff A, Reichman O. Prolonged operative time of repeat cesarean is a risk marker for post-operative maternal complications. BMC Pregnancy Childbirth. 2018;18(1):1–6.
Lalonde A, Daviss BA, Acosta A, Herschderfer K. Postpartum hemorrhage today: ICM/FIGO initiative 2004–2006. Int J Gynecol Obstet. 2006;94(3):243–53.
Farrar D, Airey R, Law G, Tuffnell D, Cattle B, Duley L. Measuring placental transfusion for term births: weighing babies with cord intact. BJOG Int J Obstet Gynaecol. 2011;118(1):70–5.
Yücesoy G, Özkan S, Bodur H, Tan T, Çalışkan E, Vural B, et al. Maternal and perinatal outcome in pregnancies complicated with hypertensive disorder of pregnancy: a seven year experience of a tertiary care center. Arch Gynecol Obstet. 2005;273(1):43–9.
Christensen R, Henry E, Wiedmeier S, Stoddard R, Sola-Visner M, Lambert D, et al. Thrombocytopenia among extremely low birth weight neonates: data from a multihospital healthcare system. J Perinatol. 2006;26(6):348–53.
Friedman SA, Schiff E, Kao L, Sibai BM. Neonatal outcome after preterm delivery for preeclampsia. Am J Obstet Gynecol. 1995;172(6):1785–92.
Catarino C, Rebelo I, Belo L, Quintanilha A, Santos-Silva A. Umbilical cord blood changes in neonates from a preeclamptic pregnancy. Preconception Postpartum. 2012;2012:269–87.
American College of Obstetricians and Gynaecologists (ACOG). Gestational Hypertension and Preeclampsia: ACOG Practice Bulletin, Number 222. Obstet Gynecol. 2020 Jun;135(6):e237–60.
Maged AM, Helal OM, Elsherbini MM, Eid MM, Elkomy RO, Dahab S, et al. A randomized placebo-controlled trial of preoperative tranexamic acid among women undergoing elective cesarean delivery. Int J Gynecol Obstet. 2015;131(3):265–8.
Van Rheenen P, De Moor L, Eschbach S, De Grooth H, Brabin B. Delayed cord clamping and haemoglobin levels in infancy: a randomised controlled trial in term babies. Trop Med Int Health. 2007;12(5):603–16.
Anton O, Jordan H, Rabe H. Strategies for implementing placental transfusion at birth: a systematic review. Birth. 2019;46(3):411–27.
Qian Y, Ying X, Wang P, Lu Z, Hua Y. Early versus delayed umbilical cord clamping on maternal and neonatal outcomes. Arch Gynecol Obstet. 2019;300(3):531–43.
Chantry CJ, Blanton A, Taché V, Finta L, Tancredi D. Delayed cord clamping during elective cesarean deliveries: results of a pilot safety trial. Matern Health Neonatol Perinatol. 2018;4(1):1–7.
Ruangkit C, Leon M, Hassen K, Baker K, Poeltler D, Katheria A. Maternal bleeding complications following early versus delayed umbilical cord clamping in multiple pregnancies. BMC Pregnancy Childbirth. 2018;18(1):1–6.
Purisch SE, Ananth CV, Arditi B, Mauney L, Ajemian B, Heiderich A, et al. Effect of delayed vs immediate umbilical cord clamping on maternal blood loss in term cesarean delivery: a randomized clinical trial. JAMA. 2019;322(19):1869–76.
Rhoades JS, Wesevich VG, Tuuli MG, Macones GA, Cahill AG. Implementation and outcomes of universal delayed umbilical cord clamping at term. Am J Perinatol. 2019;36(03):233–42.
Mercer JS, Erickson-Owens DA, Collins J, Barcelos MO, Parker AB, Padbury JF. Effects of delayed cord clamping on residual placental blood volume, hemoglobin and bilirubin levels in term infants: a randomized controlled trial. J Perinatol. 2017;37(3):260–4.
Li J, Yang S, Yang F, Wu J, Xiong F. Immediate vs delayed cord clamping in preterm infants: A systematic review and meta-analysis. Int J Clin Pract. 2021;75(11):e14709.
Shao H, Gao S, Lu Q, Zhao X, Hua Y, Wang X. Effects of delayed cord clamping on neonatal jaundice, phototherapy and early hematological status in term cesarean section. Ital J Pediatr. 2021;47(1):115.
Hancock A, Weeks AD, Lavender DT. Is accurate and reliable blood loss estimation the 'crucial step' in early detection of postpartum haemorrhage: an integrative review of the literature. BMC Pregnancy Childbirth. 2015;15(1):1–9.
World Health Organization. Guideline: delayed umbilical cord clamping for improved maternal and infant health and nutrition outcomes. World Health Organization; 2014. p. 28. https://apps.who.int/iris/handle/10665/148793. ISBN 9789241508209.
American College of Nurse-Midwives (ACNM). Position statements: optimal management of the umbilical cord at the time of birth [Internet]. 2021. Available from: https://www.midwife.org/ACNM-Library.
All participants gave their consent after being informed of the study's objective and design, and they were given the option to leave the study at any time.
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). This research received no specific grant from any funding agency.
Obstetrics and Gynaecology Department, Faculty of Medicine, Cairo University, Cairo, Egypt
Ahmed Rashwan, Ashraf Eldaly, Ahmed El-Harty, Moutaz Elsherbini & Marwa M. Eid
Reproductive Health Research Department, National Research Centre, 33 El-Buhouth St, Dokki, Cairo, 12622, Egypt
Mazen Abdel-Rasheed
A.R., A.E., M.E., and M.M.E. designed, conducted, and supervised the study. A.E.H. conducted the study and analyzed the data. M.A.R. analyzed the data. All authors wrote and approved the manuscript.
Correspondence to Mazen Abdel-Rasheed.
The study protocol was approved by Kasr El-Ainy Ethical Committee under registration number (MD-74-2019). All methods were carried out in accordance with relevant guidelines and regulations. Informed consent was obtained from all participants.
None of the authors has financial or other conflicts of interest.
Importance: Delayed umbilical cord clamping is recommended by the American College of Obstetricians and Gynaecologists in term neonates for at least 30 to 60 seconds after birth. There are no published data regarding the safety of this procedure in caesarean delivery in preeclamptic mothers and its impact on both maternal and neonatal outcomes.
Rashwan, A., Eldaly, A., El-Harty, A. et al. Delayed versus early umbilical cord clamping for near-term infants born to preeclamptic mothers; a randomized controlled trial. BMC Pregnancy Childbirth 22, 515 (2022). https://doi.org/10.1186/s12884-022-04831-8
Umbilical cord clamping
Neonatal jaundice
Neonatal haemoglobin
Anatomy of a STARK
Part 1: STARK Overview
October 28, 2021 by Alan Szepieniec
STARK Overview
STARKs are a class of interactive proof systems, but for the purpose of this tutorial it's good to think of them as a special case of SNARKs in which
hash functions are the only cryptographic ingredient;
arithmetization is based on AIR (algebraic intermediate representation)1, and reduces the claim about computational integrity to one about the low degree of certain polynomials;
the low degree of polynomials is proven by using FRI as a subprotocol, and FRI itself is instantiated with Merkle trees2;
zero-knowledge is optional.
This part of the tutorial is about explaining the key terms in this definition of STARKs.
Interactive Proof Systems
In computational complexity theory, an interactive proof system is a protocol between at least two parties in which one party, the verifier, is convinced of the correctness of a certain mathematical claim if and only if that claim is true. In theory, the claim could be anything expressible by mathematical symbols, such as the Birch and Swinnerton-Dyer conjecture, $\mathbf{P} \neq \mathbf{NP}$, or "the fifteenth Fibonacci number is 643617." (In a sound proof system, the verifier will reject that last claim.)
A cryptographic proof system turns this abstract notion of interactive proof systems into a concrete object intended for deployment in the real world. This restriction to real world applications induces a couple of simplifications:
The claim is not about a mathematical conjecture but concerns the integrity of a particular computation, like "circuit $C$ gives output $y$ when evaluated on input $x$", or "Turing machine $M$ outputs $y$ after $T$ steps". The proof system is said to establish computational integrity.
There are two parties to the protocol, the prover and the verifier. Without loss of generality the messages sent by the verifier to the prover consist of unadulterated randomness and in this case (so: almost always) the proof system can be made non-interactive with the Fiat-Shamir transform. Non-interactive proof systems consist of a single message from the prover to the verifier.
Instead of perfect security, it is acceptable for the verifier to have a nonzero but negligibly small false positive or false negative rate. Alternatively, it is acceptable for the proof system to offer security only against provers whose computational power is bounded. After all, all computers are computationally bounded in practice. Sometimes authors use the term argument system to distinguish the protocol from a proof system that offers security against computationally unbounded provers, and argument for the transcript resulting from the non-interactivity transform.
There has to be a compelling reason why the verifier cannot naïvely re-run the computation whose integrity is asserted by the computational integrity claim. This is because the prover has access to resources that the verifier does not have access to.
When the restricted resource is time, the verifier should run an order of magnitude faster than a naïve re-execution of the program. Proof systems that achieve this property are said to be succinct or have succinct verification.
Succinct verification requires short proofs, but some proof systems like Bulletproofs or Aurora feature compact proofs but still have slow verifiers.
When the verifier has no access to secret information that is available to the prover, and when the proof system protects the confidentiality of this secret, the proof system satisfies zero-knowledge. The verifier is convinced of the truth of a computational claim while learning no information about some or all of the inputs to that computation.
Especially in the context of zero-knowledge proof systems, the computational integrity claim may need a subtle amendment. In some contexts it is not enough to prove the correctness of a claim, but the prover must additionally prove that he knows the secret additional input, and could as well have outputted the secret directly instead of producing the proof.3 Proof systems that achieve this stronger notion of soundness called knowledge-soundness are called proofs (or arguments) of knowledge.
A SNARK is a Succinct Non-interactive ARgument of Knowledge. The paper that coined the term SNARK used succinct to denote proof systems with efficient verifiers. However, in recent years the meaning of the term has been diluted to include any system whose proofs are compact. This tutorial takes the side of the original definition.
The acronym STARK stands for Scalable Transparent ARgument of Knowledge. Scalable refers to the fact that two things occur simultaneously: (1) the prover has a running time that is at most quasilinear in the size of the computation, in contrast to SNARKs, where the prover is allowed to have a prohibitively expensive complexity, and (2) verification time is poly-logarithmic in the size of the computation. Transparent refers to the fact that all verifier messages are just publicly sampled random coins. In particular, no trusted setup procedure is needed to instantiate the proof system, and hence there is no cryptographic toxic waste. The acronym's denotation suggests that non-interactive STARKs are a subclass of SNARKs, and indeed they are, but the term is generally used to refer to a specific construction for scalable transparent SNARKs.
The particular qualities of this construction are best illustrated in the context of the compilation pipeline. Depending on the level of granularity, one might opt to subdivide this process into more or fewer steps. For the purpose of introducing STARKs, the compilation pipeline is divided into four stages and three transformations. Later on in this tutorial there will be a much more fine-grained pipeline and diagram.
The input to the entire pipeline is a computation, which you can think of as a program, an input, and an output. All three are provided in a machine-friendly format, such as a list of bytes. In general, the program consists of instructions that determine how a machine manipulates its resources. If the right list of instructions can simulate an arbitrary Turing machine, then the machine architecture is Turing-complete.
In this tutorial the program is hardcoded into the machine architecture. As a result, the space of allowable computations is rather limited. Nevertheless, the inputs and outputs remain variable.
The resources that a computation requires could be time, memory, randomness, secret information, or parallelism. The goal is to transform the computation into a format that enables a resource-constrained verifier to verify its integrity. It is possible to study more types of resources still, such as entangled qubits, non-determinism, or oracles that compute a given black box function, but the resulting questions are typically the subject of computational complexity theory rather than cryptographic practice.
Arithmetization and Arithmetic Constraint System
The first transformation in the pipeline is known as arithmetization. In this procedure, the sequence of elementary logical and arithmetical operations on strings of bits is transformed into a sequence of native finite field operations on finite field elements, such that the two represent the same computation. The output is an arithmetic constraint system, essentially a bunch of equations with coefficients and variables taking values from the finite field. The computation is integral if and only if the constraint system has a satisfying solution -- meaning, a single assignment to the variables such that all the equations hold.
The STARK proof system arithmetizes a computation as follows. At any point in time, the state of the computation is contained in a tuple of $\mathsf{w}$ registers that take values from the finite field $\mathbb{F}$. The machine defines a state transition function $f : \mathbb{F}^\mathsf{w} \rightarrow \mathbb{F}^\mathsf{w}$ that updates the state every cycle. The algebraic execution trace (AET) is the list of all state tuples in chronological order.
The arithmetic constraint system defines at least two types of constraints on the algebraic execution trace:
Boundary constraints: at the start or at the end of the computation an indicated register has a given value.
Transition constraints: any two consecutive state tuples evolved in accordance with the state transition function.
Collectively, these constraints are known as the algebraic intermediate representation, or AIR. Advanced STARKs may define more constraint types in order to deal with memory or with consistency of registers within one cycle.
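As a toy illustration (a sketch, not this tutorial's eventual code), the following builds an algebraic execution trace for a Fibonacci-like transition function over a small prime field and checks the boundary and transition constraints directly.

```python
# Toy AET with boundary and transition checks over a small prime field
P = 97                                  # illustrative prime; real fields are larger

def transition(state):                  # f : F^2 -> F^2
    a, b = state
    return (b, (a + b) % P)

trace = [(1, 1)]                        # state tuples in chronological order
for _ in range(7):
    trace.append(transition(trace[-1]))

assert trace[0] == (1, 1)               # boundary constraint
assert all(t == transition(s)           # transition constraints
           for s, t in zip(trace, trace[1:]))
```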
Interpolation and Polynomial IOPs
Interpolation in the usual sense means finding a polynomial that passes through a set of data points. In the context of the STARK compilation pipeline, interpolation means finding a representation of the arithmetic constraint system in terms of polynomials. The resulting object is not an arithmetic constraint system but an abstract protocol called a Polynomial IOP.
The prover in a regular proof system sends messages to the verifier. But what happens when the verifier is not allowed to read them? Specifically, if the messages from the prover are replaced by oracles, abstract black-box functionalities that the verifier can query in points of his choosing, the protocol is an interactive oracle proof (IOP). When the oracles correspond to polynomials of low degree, it is a Polynomial IOP. The intuition is that the honest prover obtains a polynomial constraint system whose equations hold, and that the cheating prover must use a constraint system where at least one equation is false. When polynomials are equal, they are equal everywhere, and in particular in random points of the verifier's choosing. But when polynomials are unequal, they are unequal almost everywhere, and this inequality is exposed with high probability when the verifier probes the left and right hand sides in a random point.
The STARK proof system interpolates the algebraic execution trace literally -- that is to say, it finds $\mathsf{w}$ polynomials $t_i(X)$ such that the values $t_i(X)$ takes on a domain $D$ correspond to the algebraic execution trace of the $i$th register. These polynomials are sent as oracles to the verifier. At this point the AIR constraints give rise to operations on polynomials that send low-degree polynomials to low-degree polynomials only if the constraints are satisfied. The verifier simulates these operations and can thus derive new polynomials whose low degree certifies the satisfiability of the constraint system, and thus the integrity of the computation. In other words, the interpolation step reduces the satisfiability of an arithmetic constraint system to a claim about the low degree of certain polynomials.
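A minimal sketch of the literal interpolation step: given one register's column of trace values on a domain $D \subset \mathbb{F}_p$, Lagrange interpolation determines the unique polynomial of degree less than $|D|$ through those points. A toy field is used here; a real implementation would use faster algorithms.

```python
# Lagrange evaluation of the interpolant of one trace column (toy field F_97)
P = 97

def interpolant_at(domain, values, x):
    total = 0
    for i, (xi, yi) in enumerate(zip(domain, values)):
        num, den = 1, 1
        for j, xj in enumerate(domain):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 via Fermat
    return total

domain = [1, 2, 3, 4]                   # D
column = [1, 1, 2, 3]                   # values of t_i on D
assert all(interpolant_at(domain, column, x) == y
           for x, y in zip(domain, column))
```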
Cryptographic Compilation with FRI
In the real world, polynomial oracles do not exist. The protocol designer who wants to use a Polynomial IOP as an intermediate stage must find a way to commit to a polynomial and then open that polynomial in a point of the verifier's choosing. FRI is a key component of a STARK proof that achieves this task by using Merkle trees of Reed-Solomon codewords to prove the boundedness of a polynomial's degree.
The Reed-Solomon codeword associated with a polynomial $f(X) \in \mathbb{F}[X]$ is the list of values it takes on a given domain $D \subset \mathbb{F}$. Consider without loss of generality domains $D$ whose cardinality is larger than the maximum allowable degree for polynomials. These values can be put into a Merkle tree, in which case the root represents a commitment to the polynomial. The Fast Reed-Solomon IOP of Proximity (FRI) is a protocol whose prover sends a sequence of Merkle roots corresponding to codewords whose lengths halve in every iteration. The verifier inspects the Merkle trees (specifically: asks the prover to provide the indicated leafs with their authentication paths) of consecutive rounds to test a simple linear relation. For honest provers, the degree of the represented polynomials likewise halves in each round, and is thus much smaller than the length of the codeword. However for malicious provers this degree is one less than the length of the codeword. In the last step, the prover sends a non-trivial codeword corresponding to a constant polynomial.
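The halving step can be sketched at the codeword level. Writing $f(X) = f_E(X^2) + X \cdot f_O(X^2)$, one folding round with verifier randomness $\beta$ maps the codeword of $f$ on a domain generated by $\omega$ to the codeword of $f_E + \beta f_O$ on the squared domain. The parameters below are toys: 64 generates a subgroup of order 8 in $\mathbb{F}_{97}$.

```python
# One FRI folding round on a length-8 codeword over F_97 (toy parameters)
P, W = 97, 64                           # W generates a subgroup of order 8

def fri_fold(codeword, beta):
    half = len(codeword) // 2
    inv2 = pow(2, P - 2, P)
    out = []
    for i in range(half):               # codeword[i + half] equals f(-W^i)
        even = (codeword[i] + codeword[i + half]) * inv2 % P
        odd = (codeword[i] - codeword[i + half]) * inv2 % P
        odd = odd * pow(pow(W, i, P), P - 2, P) % P
        out.append((even + beta * odd) % P)
    return out

codeword = [(3 + 5 * pow(W, i, P)) % P for i in range(8)]   # f(X) = 3 + 5X
assert fri_fold(codeword, beta=7) == [(3 + 7 * 5) % P] * 4  # f_E + beta*f_O is constant
```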
There is a minor issue the above description does not capture: how does the verifier query a committed polynomial $f(X)$ in a point $z$ that does not belong to the domain? In principle, there is an obvious and straightforward solution: the verifier sends $z$ to the prover, and the prover responds by sending $y=f(z)$. The polynomial $f(X) - y$ has a zero in $X=z$ and so must be divisible by $X-z$. So both prover and verifier have access to a new low degree polynomial, $\frac{f(X) - y}{X-z}$. If the prover was lying about $f(z)=y$, then he is incapable of proving the low degree of $\frac{f(X) - y}{X-z}$, and so his fraud will be exposed in the course of the FRI protocol. This is in fact the exact mechanism that enforces the boundary constraints; a slightly more involved but similar construction enforces the transition constraints. The new polynomials are the result of dividing out known factors, so they will be called quotients and denoted $q_i(X)$.
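A sketch of the quotient construction over a toy field: if the claimed $y$ really equals $f(z)$, then synthetic division of $f(X) - y$ by $X - z$ is exact, and the quotient is the new low-degree polynomial.

```python
# Divide f(X) - y by (X - z) over F_97; exact division certifies f(z) = y
P = 97

def divide_out(coeffs, z, y):
    """coeffs are low-to-high; returns q(X) with f(X) - y = q(X)(X - z)."""
    c = coeffs[:]
    c[0] = (c[0] - y) % P
    q, acc = [0] * (len(c) - 1), 0
    for i in range(len(c) - 1, 0, -1):   # Horner-style synthetic division
        acc = (c[i] + z * acc) % P
        q[i - 1] = acc
    assert (c[0] + z * acc) % P == 0, "f(z) != y: fraud exposed by FRI"
    return q

assert divide_out([1, 0, 1], z=3, y=10) == [3, 1]   # X^2 - 9 = (X + 3)(X - 3)
```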
At this point the Polynomial IOP has been compiled into an interactive concrete proof system. In principle, the protocol could be executed. However, it pays to do one more step of cryptographic compilation: replace the verifier's random coins (AKA. randomness) by something pseudorandom -- but deterministic. This is exactly the Fiat-Shamir transform, and the result is the non-interactive proof known as the STARK.
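A sketch of the Fiat-Shamir idea: each verifier challenge is derived by hashing the transcript so far, so the prover can compute the challenges without interaction but cannot choose them. (Modulo bias is negligible for cryptographically large fields, though not for the toy modulus used here.)

```python
# Deriving a "random" verifier challenge from the transcript hash
import hashlib

def sample_challenge(transcript: bytes, modulus: int) -> int:
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, 'big') % modulus

transcript = b'merkle-root-0' + b'merkle-root-1'   # prover commitments so far
beta = sample_challenge(transcript, 97)
```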
This description glosses over many details. The remainder of this tutorial will explain the construction in more concrete and tangible terms, and will insert more fine-grained components into the diagram.
1: Also, algebraic internal representation.
2: Note that FRI is defined in terms of abstract oracles which can be queried in arbitrary locations; a FRI protocol can thus be compiled into a concrete protocol by simulating the oracles with any cryptographic vector commitment scheme. Merkle trees provide this functionality but are not the only cryptographic primitive to do it.
3: Formally, knowledge is defined as follows: an extractor algorithm must exist which has oracle access to a possibly-malicious prover, pretends to be the matching verifier (and in particular reads the messages coming from the prover and sends its own via the same interface), has the power to rewind the possibly-malicious prover to any earlier point in time, runs in polynomial time, and outputs the witness. STARKs have been shown to satisfy this property; see section 5 of the EthSTARK documentation.
Discounted Cash Flow Calculator
Calculate the discounted present value (DPV) of an investment using the discounted cash flow (DCF) model.
Joseph Rich, MS in Finance
Joseph Rich holds a Bachelor's degree in economics and Master's degree in finance, and specializes in economics and investing analysis.
Monica Greer, PhD in Economics
Monica Greer holds a PhD in economics from University of Kentucky, an MA in economics from Indiana University, and a BBA in finance from the University of Kentucky. She is currently a senior quantitative analyst and has published two books on cost modeling.
Discounted cash flow (DCF) analysis is a method that can be used to value a particular investment based on its expected future cash flows. This can be used for valuing a stock or a particular project a company is considering.
In the case of a stock, future cash flows would be dividends. If a company is considering whether or not to invest in a project, the cash flows can be cash coming into the business, such as with a machine that improves the product and increases revenue.
Cash flows can also be in the form of savings, such as with video conferencing technology that may reduce the expenses related to office space.
A DCF analysis takes all future cash flows and discounts them using the discount rate or weighted average cost of capital (WACC). An investor/company can choose whether a discount rate or WACC is most appropriate for their analysis.
The discount rate and weighted average cost of capital are the rates that investors and businesses use to discount future cash flows.
The discount rate is the rate that is used for investments of similar risk. If an investor typically earns 8% on his investments, that is the discount rate he will use in evaluating potential new investments.
The level of risk is different for different types of investments, so using a discount rate that does not align with the particular investment could provide misleading results.
The WACC is the weighted average of the cost of debt and the cost of equity. This is the return that business owners would require their business to make with an investment. If the investment doesn't return at least as much as its WACC, the investment should be rejected.
The discounted cash flow formula below shows how the discount rate and WACC are applied.
The discounted cash flow formula is shown below:[1]
DCF = \frac{CF_{1}}{(1 + r)^{1}} + \frac{CF_{2}}{(1 + r)^{2}} + \cdots + \frac{CF_{n}}{(1 + r)^{n}}
where:
DCF = discounted cash flow
CF = cash flow for a given period
r = discount rate
n = the number of periods
As the formula shows, each future cash flow is discounted by 1 plus the discount rate or WACC raised to the power of the time period. After each future cash flow is discounted, all the cash flows are summed up to arrive at the discounted cash flow.
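Before working the examples, here is a sketch of the model in code. One assumption to flag: reproducing this calculator's results requires the cash flow to start growing in the very first year (i.e., the year-1 cash flow is already CF × (1 + growth rate)), which is the convention used below.

```python
# DCF with an initial growth phase followed by a terminal growth phase
def dcf(cash_flow, rate, growth, growth_years, terminal, terminal_years):
    total, cf = 0.0, cash_flow
    for t in range(1, growth_years + terminal_years + 1):
        cf *= (1 + growth) if t <= growth_years else (1 + terminal)
        total += cf / (1 + rate) ** t       # discount each year's cash flow
    return total
```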
For example, to help explain how the discounted cash flow calculation works, let's say an investor is deciding whether or not to buy a stock. The company will pay a $1.00 dividend next year. Based on historical data, he anticipates the dividend will then grow by 2% for 20 years and then by 1% for 20 years after that.
The investor will then plan on selling the stock. He typically makes 8% on his investments, so this will be the discount rate.
Since this involves a period of 40 years, it is much simpler to plug these numbers into the calculator than work the formula by hand. You will need the cash flow and discount rate plus the following variables for the calculation:
Growth Rate: The rate at which the cash flow is initially projected to grow each year.
Growth Period: The number of years for this initial growth period.
Terminal Rate: The rate at which the cash flow is projected to grow each year after the initial growth period.
Terminal Period: The number of years the cash flow will grow at the terminal rate.
In this example, the cash flow will be $1. The discount rate is 8%. The growth rate is 2% and the growth period is 20 years. The terminal rate is 1% and the terminal period is 20 years.
After plugging these numbers into the discounted cash flow calculator above, we get a total DCF of $14.98. If the stock is trading for anything less than $14.98, the investor would want to purchase it because the present value of all future cash flows is greater than the price of the stock.
But, if it is trading for more than $14.98, the investor should not buy the stock.
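Using the dcf sketch above, the stock example reproduces the calculator's result:

```python
print(round(dcf(1.00, 0.08, 0.02, 20, 0.01, 20), 2))   # 14.98
```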
Now let's look at an example of a company deciding whether or not it wants to move forward with a project. It anticipates making a $100,000 investment in a technology that will increase its revenues immediately by $25,000.
For the next 2 years after that, the increase in revenue will be 5% higher than the prior year. Then for the next 3 years, revenue will only be 2% higher than the previous year. The company has a WACC of 10%. Is this a smart investment?
Since this spans only 5 years, we can use either the discounted cash flow formula or the calculator above. Let's use the formula first, and then you can confirm in the calculator. The table below gives the results.
Table: discounted cash flows over the 5-year horizon
Year   Cash Flow ($)   Discounted Cash Flow ($)
1      26,250.00       23,863.64
2      27,562.50       22,778.93
3      28,113.75       21,122.28
4      28,676.02       19,586.11
5      29,249.55       18,161.67
Total  139,851.82      105,512.62
Here again, we see that the investment is worth it. The DCF of $105,512.62 is greater than the initial investment of $100,000. If the DCF had been less than the initial investment, the company would not have gone through with the technology investment.
To find the breakeven point, use our IRR calculator. It shows what the discount rate would need to be to make the cash flows equal to the initial investment.
While the DCF analysis is a helpful tool for planning purposes, there are a handful of limitations.
First, estimating forecasts, such as future cash flows and growth rates, far in advance is very difficult. So while it does provide an investor or business a framework to make a decision, whether or not the correct decision was made won't be known until the end of the time period.
Also, not all companies pay a dividend. Newer companies typically won't pay a dividend and instead will reinvest income back into the business. In this case, the DCF model computes a value of $0 for these companies. We know that newer companies aren't worth $0, so a DCF analysis can't be used for them.
Negative amounts can't be used in the DCF analysis either. If a business expects to have to repair a machine after a certain number of years, this can't be factored into the analysis. This limitation can also apply to an investment in which a business may reduce its dividend in a recession.
The discounted cash flow is the sum of all the discounted cash flows. The net present value (NPV) is what is left over after the initial investment is made. So the NPV = DCF – initial investment.
DCF by itself doesn't give any direction on the decision to be made. The NPV does. If the NPV of a project or investment is greater than $0, the investment should generate a return.[2]
Under these assumptions, a return of at least the discount rate or WACC will be achieved. There are other calculators to calculate the rate of return, such as our ROI calculator and rate of return calculator.
If the NPV is less than $0, the investment should not be made. In an earlier example, we had a business trying to decide if it should invest $100,000 in new technology. The DCF was $105,512.62. Therefore, the NPV was positive $5,512.62 and the business should have made the investment.
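With the same dcf sketch, the NPV of the technology example is a single subtraction:

```python
npv = dcf(25_000, 0.10, 0.05, 2, 0.02, 3) - 100_000
print(round(npv, 2))   # ~5512.62 -> make the investment
```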
Our NPV calculator is similar to the discounted cash flow calculator, but each of the cash flows needs to be entered. If there is only one future cash flow, then you can use a present value calculator, which will calculate what the value of that future cash flow is today, discounted at a certain rate for a period of time.
The larger the NPV, the better. This provides an additional margin for error in the forecasts.
Paiano, F., Introduction to Investments - 4.3: The Discounted Cash Flow Model, LibreTexts Business, July 27, 2022, https://biz.libretexts.org/Bookshelves/Finance/Introduction_to_Investments_(Paiano)/02%3A_Chapter_2/04%3A_Fundamental_Analysis-_Valuation_Models/4.03%3A_New_Page
Vo, E., Is Discounted Cash Flow the Same as Net Present Value?, The Hartford, September 14, 2022, https://sba.thehartford.com/finance/cash-flow/discounted-cash-flow-versus-net-present-value/
High-precision full quaternion based finite-time cascade attitude control strategy considering a class of overactuated space systems
A. H. Mazinan1
Human-centric Computing and Information Sciences volume 5, Article number: 27 (2015) Cite this article
A high-precision full-quaternion-based finite-time three-axis cascade attitude control strategy is considered in the present research, with respect to state-of-the-art, to deal with a class of overactuated space systems. The main idea is to design a new quaternion-based proportional-derivative approach, realized along with the linear quadratic regulator method. In a word, the control technique proposed here is organized as an inner closed loop that handles the angular rates in the three axes and a corresponding outer closed loop that handles the rotational angles in the same three axes. This enables us to cope with the present complex and complicated systems, in a productive and constructive manner, in a number of programmed space missions such as orbital, communicational, and thermal maneuvers. The proposed cascade control strategy is organized in association with a set of pulse-width pulse-frequency modulators to drive a number of on-off reaction thrusters, whose number can be increased significantly with respect to the investigated control efforts in order to provide overall accurate performance of the present space systems. A control allocation realization completes the process of the approach presentation and organization. Finally, the investigated results are presented in comparison with some potential benchmarks to guarantee and verify the approach performance.
With the development of space technologies and the rapidly growing information available in the related literature, proposing new insights in the area of system modeling and control with respect to state-of-the-art is a challenging issue for potential researchers. Correspondingly, the present research considers new solutions regarding a class of overactuated space systems, with the purpose of making a new contribution in this area with a focus on system modeling and control. With this purpose, a cascade control strategy including two closed loops is designed based upon the full-quaternion-based three-axis finite-time attitude control approach. The first, outer closed control loop is realized along with a new quaternion-based PD approach, organized based upon the LQR technique as the QPDLQR approach, to handle the rotational angles in the three axes, while the corresponding inner closed control loop is realized to handle the angular rates in the same three axes for the purpose of driving the present complicated space system with better performance. The proposed strategy is investigated in association with a set of PWPF modulators to handle a number of on-off thrusters, whose number can be increased significantly with respect to the resulting control efforts to provide overall accurate system performance. The proposed control technique is completed once the control allocation is realized to finalize the process of the approach organization.
Regarding the background of the research, in brief, Zheng et al. suggest an autonomous attitude coordinated control for a space system [1]. Yang et al. propose nonlinear attitude tracking control for a space system [2]. Du et al. propose an attitude synchronization control for a class of flexible space systems [3]. Lu et al. deal with adaptive attitude tracking control for rigid space systems with finite-time convergence [4]. Yang et al. review space system attitude determination and control using a quaternion-based method [5]. Zou et al. present an adaptive fuzzy fault-tolerant attitude control of a space system [6]. Cai et al. deal with the leader-following attitude control of multiple rigid space systems [7]. Kuo et al. work in the area of attitude dynamics and control of a miniature space system via pseudo-wheels, while Zhang et al. address attitude control of rigid space systems with disturbances generated by time-varying exo-systems [8, 9]. Katzakis et al. illustrate extending plane-casting for the purpose of dealing with a six-DOF system [10]. Erdong et al. propose robust decentralized attitude coordination control of a space system formation [11]. Lu et al. have proposed a design of a control approach for rigid space system attitude tracking with actuator saturation, while Pukdeboon et al. have suggested an optimal sliding mode controller for attitude tracking of space systems via a Lyapunov function [12, 13]. Afterwards, time-varying sliding mode control for rigid space system attitude tracking is presented by Yongqiang et al., while adaptive sliding mode control with its application to the six-DOF relative motion of space systems under input constraints is given by Wu et al. [14, 15]. Furthermore, the realization of attitude control of space systems is presented by Butyrin et al. [16].
Regarding the control allocation research, Johansen et al. present a survey addressing this issue [17]. Zaccarian has proposed dynamic allocation for input-redundant control systems [18]. Servidia's research deals with control allocation for gimbaled/fixed thrusters [19]. Yeh presents an approach to sliding-mode adaptive attitude controller design with application to space systems with thrusters [20].
As is obvious, all of the above-referenced investigations, along with other related potential ones, attempt to address efficient methods to deal with this complicated space system. In the same way, the proposed control approach makes another new effort, while its main differences with respect to the considered methods lie in the approach's structure and integration as well as the corresponding results.
The rest of the manuscript is organized as follows: the proposed cascade attitude control strategy is first given in "The proposed cascade attitude control strategy" section. The simulation results are then given in "The simulation results" section. Finally, the research concludes in the "Conclusion" section.
The proposed cascade attitude control strategy
The schematic diagram of the proposed high-precision control strategy is first illustrated in Fig. 1. This cascade attitude control approach is organized based upon two closed loops, including the inner and the outer loops. As is obvious, the inner loop consists of (1) the LPC approach, (2) the PWPF modulator, (3) the CA and finally (4) the dynamics of the space system. Correspondingly, the outer loop consists of (1) the QPDLQR approach and (2) the kinematics of the space system, including the QMG, the QMG2QV and finally the QV2RA, respectively. These are designed to present the quaternion vector regarding the system under control in the form of three-axis rotational angles, to be used in the process of tracking the referenced commands. The rest of the modules employed in the strategy consists of the DCM, the 3DRG, the iDCM and finally the UADG.
The schematic diagram of the proposed control strategy
Here, the DCM module converts the referenced-command information from degrees to radians, while the iDCM module correspondingly converts the present information from radians back to degrees. The 3DRG module is designed to apply the desired referenced-command inputs to the approach, and finally the UADG module is employed to assess the approach's performance, in real situations, in the presence of uncertainties and disturbances. Some of the subsystems are now presented in the following sub-sections.
The QPDLQR and LPC approaches
Regarding the QPDLQR approach, the space system under control, represented in a following subsection, is dealt with in the outer loop to track the rotational angles, i.e. \( \varphi_{s} ,\theta_{s} \) and \( \psi_{s} \), based upon the referenced rotational angles, i.e. \( \varphi_{r} ,\theta_{r} \) and \( \psi_{r} \), respectively. All of the control coefficients are acquired via the well-known LQR technique by optimizing its performance index. Here, the QPDLQR approach is realized along with the linear state-space model of the present system, given by the following
$$ \left\{ {\begin{array}{*{20}c} {\dot{\varvec{X}} = \varvec{AX} + \varvec{B}u,\quad u = \frac{{\tau_{i} }}{{I_{i} }}} \\ {\varvec{A} = \left[ {\begin{array}{*{20}c} 0 & 1 \\ 0 & 0 \\ \end{array} } \right] ,\quad \varvec{B} = \left[ {\begin{array}{*{20}c} 0 \\ 1 \\ \end{array} } \right]} \\ \end{array} } \right. $$
where \( \varvec{X} \) is taken as the state vector and \( u \) is organized based upon \( \tau_{i} \) and \( I_{i} \), i.e. the \( i \)th (\( i = x, y, z \)) axis torque and the corresponding moment of inertia of the same space system. In this way, the performance index is realized as
$$ \left\{ {\begin{array}{*{20}l} {V = \mathop \smallint \limits_{0}^{\infty } \left( {x_{1}^{2} + x_{2}^{2} + \frac{1}{{c^{2} }}u^{2} } \right)} \\ {\varvec{Q} = \left[ {\begin{array}{*{20}c} 1 & 0 \\ 0 & 1 \\ \end{array} } \right]\varvec{ }, \quad R = \frac{1}{{c^{2} }}} \\ \end{array} } \right. $$
The three-axis control efforts of the QPDLQR approach, \( \varvec{u} = - \varvec{KX} = - k_{{p_{i} }} (x_{1} - x_{0} ) - k_{{d_{i} }} x_{2} \), are designed to optimize the present performance index. By supposing \( \varvec{P} \) to be a positive definite matrix, the Riccati equation \( \varvec{A}^{\varvec{*}} \varvec{P} + \varvec{PA} - \varvec{PBR}^{ - 1} \varvec{B}^{\varvec{*}} \varvec{P} + \varvec{Q} = 0 \) can be solved to give \( \varvec{P} = \left[ {\begin{array}{*{20}c} {\sqrt {1 + \frac{2}{c}} } & {\frac{1}{c}} \\ {\frac{1}{c}} & {\frac{1}{c}\sqrt {1 + \frac{2}{c}} } \\ \end{array} } \right] \). Now, the gain vector results from \( \varvec{K} = \varvec{R}^{ - 1} \varvec{B}^{\varvec{*}} \varvec{P} = \left[ {\begin{array}{*{20}c} c & {\sqrt {c^{2} + 2c} } \\ \end{array} } \right] \). Subsequently, the QPDLQR approach coefficients follow as
$$ k_{{p_{i} }} = c\frac{{I_{i} }}{{\tau_{i} }} ,k_{{d_{i} }} = \sqrt {c^{2} + 2c} \frac{{I_{i} }}{{\tau_{i} }};\quad i = x, y,z $$
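As a quick numerical sanity check on this closed-form solution, the gains can be compared against SciPy's algebraic Riccati solver. This is only an illustrative sketch; the value \( c = 2 \) is an arbitrary assumption, not a parameter taken from this paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

c = 2.0  # arbitrary design constant, assumed for illustration

# Double-integrator model from the text: x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                 # state weight
R = np.array([[1.0 / c**2]])  # control weight 1/c^2

# Numerical solution of A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # K = R^-1 B' P

# Closed-form result derived above: K = [c, sqrt(c^2 + 2c)]
K_closed = np.array([c, np.sqrt(c**2 + 2.0 * c)])
print(K.ravel())   # -> [2. 2.82842712...]
print(K_closed)    # -> [2. 2.82842712...]
```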
Finally, the control efforts of the QPDLQR approach are rewritten based upon the quaternion errors \( q_{\mu e} ; \;\mu = 1, \;2, \;3 \) in association with the angular rates in the three axes \( \omega_{si} ;\;i = x, y, z \) as follows
$$ \left( {\begin{array}{*{20}c} {\tau_{x} } \\ {\tau_{y} } \\ {\tau_{z} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} { - T(k_{px} q_{1e} + k_{dx} \omega_{sx} )} \\ { - T(k_{py} q_{2e} + k_{dy} \omega_{sy} )} \\ { - T(k_{pz} q_{3e} + k_{dz} \omega_{sz} )} \\ \end{array} } \right) $$
where, by using \( \varvec{q}_{\varvec{e}} = \varvec{q}_{\varvec{r}} \varvec{q}_{\varvec{s}} \), the expanded form can be written as
$$ \left[ {\begin{array}{*{20}c} {q_{1e} } \\ {q_{2e} } \\ {q_{3e} } \\ {q_{4e} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {q_{4r} } & {q_{3r} } & { - q_{2r} } & { - q_{1r} } \\ { - q_{3r} } & {q_{4r} } & {q_{1r} } & { - q_{2r} } \\ {q_{2r} } & { - q_{1r} } & {q_{4r} } & { - q_{3r} } \\ {q_{1r} } & {q_{2r} } & {q_{3r} } & {q_{4r} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {q_{1s} } \\ {q_{2s} } \\ {q_{3s} } \\ {q_{4s} } \\ \end{array} } \right] $$
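For concreteness, here is a minimal sketch of this error computation, encoding the 4×4 matrix of reference components exactly as written above (quaternions ordered with the scalar part last, matching the paper's convention):

```python
import numpy as np

def quaternion_error(q_r, q_s):
    """Error quaternion q_e from the matrix form above.

    q_r, q_s: length-4 sequences ordered (q1, q2, q3, q4), q4 scalar part.
    """
    q1r, q2r, q3r, q4r = q_r
    M = np.array([[ q4r,  q3r, -q2r, -q1r],
                  [-q3r,  q4r,  q1r, -q2r],
                  [ q2r, -q1r,  q4r, -q3r],
                  [ q1r,  q2r,  q3r,  q4r]])
    return M @ np.asarray(q_s, dtype=float)

# Identical attitudes give zero vector error and a unit scalar part:
q = np.array([0.0, 0.0, 0.0, 1.0])
print(quaternion_error(q, q))  # -> [0. 0. 0. 1.]
```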
It should be noted that the parameter \( T \), the thruster's level, is discussed in the following sub-section entitled "The CA scheme realization". Regarding the LPC approach, the outcome is the same as for the QPDLQR approach, except that the three-axis derivative control terms are simply dropped from the calculation.
The PWPF realization
The PWPF modulator is employed in many environments, such as space systems, and is realized due to its advantages over other types of modulators. It consists of a first-order lag filter along with a Schmitt trigger inside a negative feedback loop. Since reaction control systems do not possess a linear relationship between the input to the control approach and the output torque, various modulation methods are used to relate the level of required torque to the width and the frequency of the pulses. To shape the non-linear output of on–off thrusters into the linearly requested output, a set of thruster control methods can be exploited; the most frequently used is the PWPF modulator. Others, like Schmitt trigger control, the pseudo-rate modulator, the integrated pulse frequency modulator and the pulse width modulator, are also realized to shape the output of constant thrusters. A deeper analysis can be performed to find the relationships between the static characteristics of the PWPF modulator and the selection of its parameters.
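To make this structure concrete, below is a minimal discrete-time sketch of a PWPF modulator: a first-order lag filter feeding a Schmitt trigger inside a negative feedback loop. All numeric values (filter gain and time constant, trigger thresholds) are assumptions for illustration, not parameters from this paper:

```python
def pwpf(command, dt=0.001, km=4.5, tm=0.15, u_on=0.45, u_off=0.15, level=1.0):
    """Pulse-width pulse-frequency modulation of a torque command sequence.

    Lag filter f' = (km * e - f) / tm with e = command - output (negative
    feedback), followed by a Schmitt trigger with assumed on/off thresholds.
    """
    f, out, pulses = 0.0, 0.0, []
    for u in command:
        e = u - out                      # feedback error
        f += dt * (km * e - f) / tm      # Euler step of the first-order filter
        if abs(f) > u_on:                # trigger fires above U_on...
            out = level if f > 0 else -level
        elif abs(f) < u_off:             # ...and drops out below U_off
            out = 0.0
        pulses.append(out)
    return pulses

# A constant 30 % torque request becomes a train of full-level pulses whose
# duty cycle scales with the request:
train = pwpf([0.3] * 5000)
print(sum(abs(p) for p in train) / len(train))  # fraction of time thrusting
```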
The CA scheme realization
The relation between the torques in the three axes, \( \tau_{x} \), \( \tau_{y} \) and \( \tau_{z} \), and the corresponding thruster levels \( T_{i} ;\;i = 1, 2, \ldots , n \), can first be presented as follows
$$ \left[ {\begin{array}{*{20}c} {T_{1} } \\ {T_{2} } \\ {\begin{array}{*{20}c} \vdots \\ {T_{n} } \\ \end{array} } \\ \end{array} } \right] = \varvec{E}^{ + } \left[ {\begin{array}{*{20}c} {\tau_{x} } \\ {\tau_{y} } \\ {\tau_{z} } \\ \end{array} } \right] $$
Here, the relation between \( \varvec{E} \) and \( \varvec{E}^{ + } \) is given by the pseudo-inverse \( \varvec{E}^{ + } = \varvec{E}^{\varvec{T}} (\varvec{EE}^{\varvec{T}} )^{ - 1} \). Now, supposing the number of thrusters to be eight, the above-mentioned matrix results as
$$ E = \left[ {\begin{array}{*{20}c} 0 & 0 & 0 & 0 & { - R} & { - R} & R & R \\ { - R} & 0 & R & 0 & 0 & L & 0 & { - L} \\ 0 & R & 0 & { - R} & L & 0 & { - L} & 0 \\ \end{array} } \right] $$
Here, \( R \) and \( L \) are taken as the thruster's arm and the thruster's length, respectively. Because the \( T_{i} ;\;i = 1, 2, \ldots , n \) in Eq. (6) need to be a sequence of binary information, a relay \( f_{on/off} \) is realized. In this case, the production of binary information for all of the on–off thrusters is guaranteed, although the parameters \( \tau_{x} \), \( \tau_{y} \) and \( \tau_{z} \) may be changed to \( \tau_{{x_{e} }} \), \( \tau_{{y_{e} }} \) and \( \tau_{{z_{e} }} \), namely the efficient torques. The relation between the present torques in the three axes and the efficient ones is presented by
$$ \left[ {\begin{array}{*{20}c} {\tau_{{x_{e} }} } \\ {\tau_{{y_{e} }} } \\ {\tau_{{z_{e} }} } \\ \end{array} } \right] = \varvec{E }f_{on/off} \left( {\varvec{E}^{ + } \left[ {\begin{array}{*{20}c} {\tau_{x} } \\ {\tau_{y} } \\ {\tau_{z} } \\ \end{array} } \right]} \right) $$
It should be noted that the hysteresis \( \varepsilon \) of this \( f_{on/off} \) relay can be optimized in order to keep the efficient thrusts close to the corresponding commanded ones.
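As a rough sketch of this allocation chain, the following uses the eight-thruster \( \varvec{E} \) matrix from above, with arbitrary assumed values for \( R \) and \( L \) and a simple threshold relay without hysteresis (the threshold value is likewise an assumption):

```python
import numpy as np

R, L = 1.0, 2.0   # assumed thruster arm and length, for illustration

# Eight-thruster configuration matrix E from the text
E = np.array([
    [ 0,  0, 0,  0, -R, -R,  R,  R],
    [-R,  0, R,  0,  0,  L,  0, -L],
    [ 0,  R, 0, -R,  L,  0, -L,  0],
], dtype=float)
E_plus = E.T @ np.linalg.inv(E @ E.T)   # pseudo-inverse E+ = E^T (E E^T)^-1

def allocate(tau, threshold=0.1):
    """Map a torque demand to on/off thruster firings and the torque produced."""
    T = E_plus @ tau                          # continuous thruster demands
    T_onoff = (T > threshold).astype(float)   # f_on/off: thrusters only push
    return T_onoff, E @ T_onoff               # firing pattern, efficient torque

tau = np.array([0.5, -0.2, 0.3])
firing, tau_eff = allocate(tau)
print(firing)    # which of the eight thrusters fire
print(tau_eff)   # efficient torque actually produced (note quantization error)
```

The printout makes the point of the efficient-torque notation above visible: because the relay quantizes the demands, \( \tau_{{x_{e} }} \), \( \tau_{{y_{e} }} \) and \( \tau_{{z_{e} }} \) generally differ from the commanded torques, which is exactly what tuning the relay hysteresis tries to minimize.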
The dynamics and kinematics of the space systems
Regarding the dynamics of the space systems: according to Newton's second law, the summation of the external moments acting on the body equals the time rate of change of the angular momentum in the inertial frame (\( D^{I} (h_{B}^{BI} ) = m_{B} \)). Transferring the rotational time derivative to the body frame B, we can write
$$ D^{I} (I_{B}^{B} \omega^{BI} ) + \varOmega^{BI} I_{B}^{B} \omega^{BI} = \sum {m_{B} } $$
where \( I_{B}^{B} \) is the space system's moment of inertia, \( \omega^{BI} \) is the space system's angular rate relative to the inertial coordinate system and \( \varOmega^{BI} \) is the corresponding skew-symmetric matrix. Picking the body coordinate system \( \left. \, \right]^{B} \), the closed-form result can be presented as follows
$$ [I_{B}^{B} ]^{B} \left[ {\frac{{d\omega^{BI} }}{dt}} \right]^{B} + [\varOmega^{BI} ]^{B} [I_{B}^{B} ]^{B} [\omega^{BI} ]^{B} = \left[ {\sum {m_{B} } } \right]^{B} $$
Now, the quaternion feedback method can be realized in the attitude dynamics, once the quaternion time derivative is presented as
$$ \{ \dot{\varvec{q}}\} = \frac{1}{2}\left\{ {\begin{array}{*{20}c} 0 & { - [\overline{{\omega^{BE} }} ]} \\ {[\overline{{\omega^{BE} }} ]} & { - [\overline{{\varOmega^{BE} }} ]} \\ \end{array} } \right\}\{ \varvec{q}\} $$
where \( \{ \varvec{q}\} = \{ \begin{array}{*{20}c} {q_{0} } & {\left[ q \right]^{T} } \\ \end{array} \}^{T} \) is taken as the attitude quaternion representing the attitude of the space system relative to the local-level coordinate system.
Regarding the kinematics of the space systems, the angular rates in the three axes are taken as \( p = \omega_{sx} , \;q = \omega_{sy} ,\; r = \omega_{sz} \), while \( \varphi_{s} , \theta_{s} , \psi_{s} \) are correspondingly taken as the rotational angles (Euler angles). Also, \( \tau_{i} , I_{ii} ;\;i = x, y, z \) are taken as the system torque inputs and the moments of inertia, respectively, in the same axes. Subsequently, the following nonlinear state-space model of the system results
$$ \left\{ {\begin{array}{*{20}l} {\dot{p} = \frac{{\tau_{x} }}{{I_{xx} }} - \frac{{(I_{zz} - I_{yy} )}}{{I_{xx} }}qr} \hfill \\ {\dot{q} = \frac{{\tau_{y} }}{{I_{yy} }} - \frac{{(I_{xx} - I_{zz} )}}{{I_{yy} }}pr} \hfill \\ {\dot{r} = \frac{{\tau_{z} }}{{I_{zz} }} - \frac{{(I_{yy} - I_{xx} )}}{{I_{zz} }}pq} \hfill \\ {\dot{\varphi }_{s} = p + (\tan \theta_{s} \sin \varphi_{s} )q + (\tan \theta_{s} \cos \varphi_{s} )r} \hfill \\ {\dot{\theta }_{s} = (\cos \varphi_{s} )q - (\sin \varphi_{s} )r} \hfill \\ {\dot{\psi }_{s} = \left( {\frac{{\sin \varphi_{s} }}{{\cos \theta_{s} }}} \right)q + \left( {\frac{{\cos \varphi_{s} }}{{\cos \theta_{s} }}} \right)r} \hfill \\ \end{array} } \right. $$
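As an illustrative check of this model, the sketch below integrates the six equations above with a simple Euler scheme under a per-axis PD torque standing in for the cascade controller. The inertia values, gains, step size and referenced commands are all assumptions for illustration, not the Table 1 parameters:

```python
import numpy as np

# Illustrative parameters only -- assumed, not the paper's Table 1 values.
I = np.array([1200.0, 900.0, 600.0])      # Ixx, Iyy, Izz [kg m^2]
kp, kd = 2.0, 2.8                         # per-axis PD gains
dt, steps = 0.01, 60000

w = np.zeros(3)                           # angular rates p, q, r [rad/s]
ang = np.zeros(3)                         # phi, theta, psi [rad]
ref = np.radians([10.0, -5.0, 20.0])      # step referenced commands

for _ in range(steps):
    tau = I * (-kp * (ang - ref) - kd * w)   # stand-in PD control torque
    p, q, r = w
    phi, theta, _ = ang
    # Rotational dynamics (first three rows of the model above)
    w_dot = np.array([
        tau[0] / I[0] - (I[2] - I[1]) / I[0] * q * r,
        tau[1] / I[1] - (I[0] - I[2]) / I[1] * p * r,
        tau[2] / I[2] - (I[1] - I[0]) / I[2] * p * q,
    ])
    # Euler-angle kinematics (last three rows of the model above)
    ang_dot = np.array([
        p + np.tan(theta) * (np.sin(phi) * q + np.cos(phi) * r),
        np.cos(phi) * q - np.sin(phi) * r,
        (np.sin(phi) * q + np.cos(phi) * r) / np.cos(theta),
    ])
    w += dt * w_dot
    ang += dt * ang_dot

print(np.degrees(ang))  # settles near the referenced [10, -5, 20] degrees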
The simulation results
The outcomes acquired through a number of simulation programs are presented in this section to assess the applicability of the strategy investigated here. The information regarding the space system and both control loops is tabulated in Table 1.
Table 1 The parameters regarding the proposed control strategy
The outer loop results
The tracking of the three-axis rotational angles is illustrated in Figs. 2, 3 and 4, while the corresponding tracking errors are illustrated in Fig. 5, respectively. The initial three-axis attitude of the system is given as 0, 0 and 0 deg., respectively, while the referenced commands vary abruptly with time. These results indicate that the strategy proposed here is able to control the three-axis rotational angles at each instant of time, with each axis behaving in its own way.
The x-axis rotational angle tracking information in the outer loop
The y-axis rotational angle tracking information in the outer loop
The z-axis rotational angle tracking information in the outer loop
The three-axis rotational angle tracking errors information in the outer loop
The quaternion vector tracking information is illustrated in Figs. 6, 7, 8 and 9, respectively. The significance of these outcomes is the same as that of the tracking information illustrated for the three-axis rotational angles.
The 1st element of the quaternion vector tracking information in the outer loop
The 2nd element of the quaternion vector tracking information in the outer loop
The 3rd element of the quaternion vector tracking information in the outer loop
The inner loop results
The angular rate information is presented in Figs. 10, 11 and 12, respectively. These results are meaningful when set against the three-axis tracking information illustrated for the rotational angles. The outcomes indicate that the strategy proposed here is well behaved in driving all of the angular rates in the three axes to zero in a small amount of time, in correspondence with the three-axis rotational angles.
The x-axis angular rate information in the inner loop
The y-axis angular rate information in the inner loop
The z-axis angular rate information in the inner loop
The verification of the results
The verification of the investigated outcomes is finally analyzed by considering two potential benchmarks published in recent years. The following criteria are considered in Table 2: (1) the maximum three-axis rotational angle errors in steady state, (2) the maximum three-axis angular rate errors in steady state and finally (3) the trajectory convergence time. The results indicate that the proposed approach is well behaved in line with both benchmarks concerning items (1) and (3), while the Butyrin approach is well behaved regarding item (2).
Table 2 The verification of the proposed control strategy performance w. r. t. the corresponding benchmarks
The present research addresses new insights concerning a class of overactuated space systems to make a new contribution in this area, with a focus on system modeling and control. It introduces a new high-precision cascade control strategy with inner and outer loops, handled via the LPC and the QPDLQR approaches, respectively. The inner closed loop of the proposed control strategy is designed based upon a pulse-width pulse-frequency modulator to deal with a number of on–off thrusters as system actuators, for the purpose of handling the angular rates of the system under control in the three axes. The outer closed control loop of the proposed control strategy is designed to drive the rotational angles in the same three axes, for the purpose of dealing with the present complicated space system with better performance. The acquired results and the structure of the proposed control strategy are taken into consideration as state-of-the-art outcomes. Moreover, the investigated results are verified against a number of potential benchmarks employed in this research. In the sequel, the research is useful for organizing programmed space missions, including orbital, communication, thermal and other related maneuvers, in real situations.
QMG:
quaternion matrix generation
QMG2QV:
quaternion matrix generation conversion to the corresponding quaternion vector
QV2RA:
quaternion vector conversion to rotational angles
QPDLQR:
quaternion based proportional derivative linear quadratic regulator
PD:
proportional derivative
LPC:
linear proportional control
PWPF:
pulse-width pulse-frequency
CA:
control allocation
3DRG:
three-axis desired referenced commands generator
DCM:
data conversion module
iDCM:
inverse data conversion module
UADG:
uncertainties and disturbances generator
DOF:
degrees of freedom
Zheng Z, Song S (2014) Autonomous attitude coordinated control for space system formation with input constraint, model uncertainties, and external disturbances. Chin J Aeronaut 27(3):602–612
Yang H, You X, Xia Y, Liu Z (2014) Nonlinear attitude tracking control for space system formation with multiple delays. Adv Space Res 54(4):759–769
Haibo D, Li S (2014) Attitude synchronization control for a group of flexible space system. Automatica 50(2):646–651
Kunfeng L, Xia Y (2013) Adaptive attitude tracking control for rigid space system with finite-time convergence. Automatica 49(12):3591–3599
Yang Y (2012) Space system attitude determination and control: quaternion based method. Ann Rev Control 36(2):198–219
Zou A-M, Kumar KD (2011) Adaptive fuzzy fault-tolerant attitude control of space system. Control Eng Prac 19(1):10–21
Cai H, Huang J (2014) The leader-following attitude control of multiple rigid space system systems. Automatica 50(4):1109–1115
Kuo Y-L, Tsung-Liang W (2012) Open-loop and closed-loop attitude dynamics and controls of miniature space system using pseudowheels. Comput Math Appl 64(5):1282–1290
Zhang X, Liu X, Zhu Q (2014) Attitude control of rigid space system with disturbance generated by time varying exosystems. Commun Nonlinear Sci Numer Simul 19(7):2423–2434
Katzakis N, Teather RJ, Kiyokawa K, Takemura H (2015) INSPECT: extending plane-casting for 6-DOF control. Human-centric Comput Inf Sci 5:22
Erdong J, Xiaolei J, Zhaowei S (2008) Robust decentralized attitude coordination control of space system formation. Syst Control Lett 57(7):567–577
Kunfeng L, Xia Y, Mengyin F (2013) Controller design for rigid space system attitude tracking with actuator saturation. Inf Sci 220(20):343–366
Pukdeboon C, Zinober ASI (2012) Control Lyapunov function optimal sliding mode controllers for attitude tracking of space system. J Franklin Inst 349(2):456–475
Yongqiang J, Xiangdong L, Wei Q, Chaozhen H (2008) Time-varying sliding mode controls in rigid space system attitude tracking. Chin J Aeronaut 21(4):352–360
Wu J, Liu K, Han D (2013) Adaptive sliding mode control for six-DOF relative motion of space system with input constraint. Acta Astronautica 87:64–76
Butyrin SA, Makarov VP, Mukumov RR, Somov Y, Vassilyev SN (1997) An expert system for design of space system attitude control systems. Artif Intell Eng 11(1):49–59
Johansen TA, Fossen TI (2013) Control allocation—a survey. Automatica 49(5):1087–1103
Zaccarian L (2009) Dynamic allocation for input redundant control systems. Automatica 45(6):1431–1438
Servidia PA (2010) Control allocation for gimballed/fixed thrusters. Acta Astronautica 66(3–4):587–594
Yeh FK (2010) Sliding-mode adaptive attitude controller design for space systems with thrusters. IET Control Theory Appl 4(7):1254–1264
The corresponding author would like to express his warmest regards to the respected Editors of "Human-centric Computing and Information Sciences", Springer, as well as all of the respected anonymous reviewers, for their impressive, constructive and technical comments on the present investigation. Moreover, Dr. Mazinan sincerely appreciates the Islamic Azad University (IAU), South Tehran Branch, Tehran, Iran for its support in the process of research investigation and organization, carried out under contract with the Research Department. Finally, special thanks to Mrs. Maryam Aghaei Sarchali, Mohadesh Mazinan and Mohammad Mazinan for their assistance and patience in the course of realizing the present research.
Compliance with ethical guidelines
Competing interests The authors declare that they have no competing interests.
Department of Control Engineering, Faculty of Electrical Engineering, South Tehran Branch, Islamic Azad University (IAU), No. 209, North Iranshahr St, P.O. Box 11365/4435, Tehran, Iran
A. H. Mazinan
Correspondence to A. H. Mazinan.
Mazinan, A.H. High-precision full quaternion based finite-time cascade attitude control strategy considering a class of overactuated space systems. Hum. Cent. Comput. Inf. Sci. 5, 27 (2015). https://doi.org/10.1186/s13673-015-0047-9
Received: 10 June 2015
High-precision full quaternion based control strategy
Proportional derivative linear quadratic regulator approach
Overactuated space systems
Pulse-width pulse-frequency modulator
What's the difference of two squares? What's the difference of the difference of two squares? In this episode we're looking for patterns in sequences, and patterns across different sequences.
$$(1+x)^3=1+3x+3x^2+x^3$$
Some people in the live chat seemed surprised that we could multiply out $(1+x)^3$ so easily. This is partly experience, and partly a bit of theory called the Binomial Theorem. In fact, there's a link between multiplying out brackets and Pascal's triangle (this is the Pascal's triangle season, it seems!)
Find out more at Wikipedia | Binomial Theorem.
Telescoping Sums
These got a mention in chat, and they're really cool. The idea is that if we're adding together a sum like
$$(a-b)+(b-c)+...+(x-y)+(y-z)$$
then we can simplify that to $(a-z)$. Pretty obvious in that example, but here's a better illustration of the technique; suppose we want to add together all the values of $(n^2-n)^{-1}$ from $n=2$ to $n=6$. Writing it out in full, that's
$$\frac{1}{2}+\frac{1}{6}+\frac{1}{12}+\frac{1}{20}+\frac{1}{30}.$$
It's not obvious how this could telescope down like the previous example. Here's the trick; we can use the identity
$$\frac{1}{n^2-n}=\frac{1}{n-1}-\frac{1}{n}$$
on each term. Now our sum becomes
$$\left(\frac{1}{1}-\frac{1}{2}\right)+\left(\frac{1}{2}-\frac{1}{3}\right)+\left(\frac{1}{3}-\frac{1}{4}\right) +\left(\frac{1}{4}-\frac{1}{5}\right) +\left(\frac{1}{5}-\frac{1}{6}\right),$$
which collapses down to $1-\frac{1}{6}$.
Sometimes this is handy when we want to calculate a sum "to infinity" of all the terms of a sequence; here we can deduce that
$$\sum_{n=2}^{N}\frac{1}{n^2-n}=1-\frac{1}{N}$$
and, letting $N\to\infty$,
$$\sum_{n=2}^{\infty}\frac{1}{n^2-n}=1.$$
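(A quick way to convince yourself, if you have Python handy: exact rational arithmetic shows the partial sums hitting $1-\frac{1}{N}$ on the nose.)

```python
from fractions import Fraction

def partial_sum(N):
    """Sum of 1/(n^2 - n) for n = 2, ..., N, computed exactly."""
    return sum(Fraction(1, n * n - n) for n in range(2, N + 1))

for N in [6, 10, 100, 1000]:
    print(N, partial_sum(N), 1 - Fraction(1, N))
# Each row shows the sum equals 1 - 1/N exactly, creeping up towards 1.
```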
(Why "telescoping"? The thing to imagine is one of those old metal telescopes with lots of sections that fit inside each other so that it can collapse down small.)
There's a similar calculation in a solution for a geometry problem hosted at AOPS Wiki | 2016 AMC 12B Problem 21
Pi squared over six
At the end of the livestream, I told you a few results with no proof at all. One of them was the eye-catching fact that
$$ \sum_{n=1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}$$
What's that sum got to do with $\pi$? Where are the circles? I can't do better than link you to this Numberphile video (19 minutes, uses geometry to find the value of the sum).
For a more general method that can also be used for the $n^{-4}$ sum, see S3ep8 of the OOMC.
Why do we care about adding powers?
See the further reading from S3ep3 for an application of sums of powers to dice-rolling. That has an application to Dungeons & Dragons, of all things.
Why do we care about Dungeons & Dragons?
I just think it's neat.
Discrete calculus
Thanks Miles for the link to the video on the Mathologer channel (47 minutes of discrete calculus, including something called "the Master formula").
If you want to get in touch with us about any of the mathematics in the video or the further reading, feel free to email us on oomc [at] maths.ox.ac.uk
IPv6 Is a Total Nightmare — This is Why
2020-09-06 34 min read Networking Rants Teknikal_Domain
New and improved!
Address Space
Allocation Issues
Address Representation
Header Changes
Some Extension Headers
DHCP and Router Advertisements
RS / RA
Prefix Delegation
SLAAC / APIPA
ARP / NDP
One Other Random Remark
So this is just going to be a total rant. IPv6 is, in theory, a solution to many things, including the dwindling IPv4 address space. IPv6 was officially a draft in 1997, and became a real Internet Standard in 2017. And, quite frankly, it's one of those things that, in my opinion, just adds too much hassle for not enough benefit.
Take 2 this time. More facts. Clarified points, same worthless opinions.
Before anything, this is basically just an opinionated comparison, not everything I mention is something I dislike (as you'll see), I'm just listing the comparison points off as I remember them.
Okay, yes, IPv6's address space is massive. IPv4 uses 32-bit addresses, allowing for 4,294,967,296 total addresses. IPv6 uses 128-bit addresses, meaning… 340,282,366,920,938,463,463,374,607,431,768,211,4561 addresses. The entire v4 space could fit into the v6 space $7.922816251426434 \times 10^{28}$2 times over. The default allocation to people, a /64, meaning the first half of the address is fixed and the entire second half identifies hosts within the network, means that most people have 18,446,744,073,709,551,6163 addresses to play with at home. Compared to v4, where most networks are around /24, meaning the first three groupings are fixed, you get… 254 addresses. Again, for context, a standard /64 allocation can fit the entire v4 address space inside itself 4,294,967,296 times. (Yes, that's the number of addresses in the IPv4 address space. That's what happens when you divide a power by half of that power.)
IPv4 addresses are, well, sparse, given that most high-level authorities have already run out of addresses, but because CIDR and NAT are a thing, we've really started compacting down our usage. My entire house of easily over 200 IPs takes up… 2, according to the rest of the world.
Now, make no mistake I am fully in support of a larger address space. I, personally, would have seen if a 64-bit (18,446,744,073,709,551,616) address space would have worked before the explosion of numbers that is 128-bits happened, as you'll see. Just for some comparisons, The current world population as of now, October 2021, is 7.9 Billion. None of what I'm about to do will be affected significantly either way, so let's estimate that as a flat 7,900,000,000. With a 64-bit addressing scheme, that leaves room for every human on the planet to have 2,335,030,895 different networked devices on the internet. With 128 bits, that jumps to about… 43,073,717,331,764,362,463,718,304,738. Every human on the planet would need to take up that many unique IP addresses to completely fill a 128-bit scheme. Safe to say we really won't need to expand. I think this is overkill, I think this is way overkill, but I also watch(ed) enough MythBusters to know the phrase "if it's worth doing, it's worth overdoing" all too well.
One thing that people have criticized IPv4 for is that the allocations are just horrible. For example, anything starting with 127 is localhost. Normally this is 127.0.0.1, but anything from 127.0.0.0 to 127.255.255.255 all means the exact same thing. That's 16,777,216 addresses all literally for localhost. By numbers, 0.39% of the address space, but just keep this in mind.
Similarly, anything starting with a 0, is effectively "current network" (only valid as source), again, another 16 million addresses.
There's also multiple blocks for private networks, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. In total, 17,891,328 addresses.
Compared to IPv6, yes, IPv6 is much better… in theory. There's only one loopback address, ::1. At the same time… fc00::/7 (2,658,455,991,569,831,745,807,614,120,560,689,152 addresses) is the private address space (more on that later), fe80::/10 (332,306,998,946,228,968,225,951,765,070,086,144 addresses) is the local address space, and ff00::/8 (1,329,227,995,784,915,872,903,807,060,280,344,576 addresses) is multicast. Yes, do you see a recurring pattern? Even though the entire "special" address assignments are exactly 1.271% of the entire IPv6 address space, we're still allocating giant swathes of addresses. History repeats itself, you can see that right here.
And I will admit that multicast in IPv6 is special, since some bits in the address are special flags, and one form of multicast actually includes a response node's address, so it's not just an arbitrary number, but… come on, that's a little uncalled for, having that much space.
You know what? Let's get some charts in here.
To give you an idea of how huge this is, let's show those IPv6 allocations, according to Wikipedia, as a table:
| Allocation range | Address count |
|---|---|
| Unspecified (::) | 1 |
| Localhost (::1) | 1 |
| IPv4 mapped (::ffff:0:0/96) | 4,294,967,296 |
| IPv4 translated (::ffff:0:0:0/96) | 4,294,967,296 |
| Discard prefix (100::/64) | 18,446,744,073,709,551,616 |
| Global 4/6 translation (64:ff9b::/96) | 4,294,967,296 |
| Private 4/6 translation (64:ff9b:1::/48) | 1,208,925,819,614,629,174,706,176 |
| Teredo tunneling (2001:0000::/32) | 79,228,162,514,264,337,593,543,950,336 |
| ORCHIDv2 (2001:20::/28) | 1,267,650,600,228,229,401,496,703,205,376 |
| Documentation reserved (2001:db8::/32) | 79,228,162,514,264,337,593,543,950,336 |
| 6to4 addressing (Deprecated) (2002::/16) | 5,192,296,858,534,827,628,530,496,329,220,096 |
| Private networks (fc00::/7) | 2,658,455,991,569,831,745,807,614,120,560,689,152 |
| Link-local (fe80::/64) | 18,446,744,073,709,551,616 |
| Multicast (ff00::/8) | 1,329,227,995,784,915,872,903,807,060,280,344,576 |
| Free addresses | 336,289,489,210,617,046,797,563,476,281,328,140,286 |
Only the "Private networks" and "Multicast" blocks there are large enough to show on the pie chart, and I set the threshold at 0.1%. We're already off to a good start, but here's a few things to consider:
Currently, 268 million IPv4 addresses are lost to history, in the reserved class-E network block, 240.0.0.0/4. That's 6% of the entire space, gone. Old allocations start whittling down the space you have.
IPv6 is showing some signs of this already as well: An entire /16 is considered deprecated because of 6to4, and there's even two ways of encapsulating an IPv4 address within an IPv6 address, just one has an extra zero in it. Yes, we have more allocations than most people can reasonably count (and more than most modern 64-bit CPUs can do math with easily, since numbers bigger than the register width need some special care to math), but it seems the churn of transition mechanisms is already showing some issues by locking off entire sections.
We all know what an IPv4 address looks like, right? Four dotted-decimal grouping in the range from 0–255. For example, 192.168.5.225. IPv6 uses eight groupings of four hex digits, colon-separated. For example, 2607:f0d0:1002:0051:0000:0000:0000:0004. That's… very unwieldy, so we have a few shortening rules. Any zeros that lead the group can be dropped, giving us this: 2607:f0d0:1002:51:0:0:0:4. And since that is still repetitive, you can replace exactly one sequence of more than one group of all zeros with an empty: 2607:f0d0:1002:51::4. For the record this is why the loopback address is ::1. The full address is 0000:0000:0000:0000:0000:0000:0000:0001. Even with those methods, they're still much longer, harder to remember, and harder to even say than IPv4 addresses. To some point this is inevitable — if you have 128 bits of information to represent, you, well, have to do that. To give some credit here, this scheme, on paper, is nice, and it's, honestly, the best thing I think could be thought of without some stupidly crazy ideas, like using base64, which would need… 24 characters, counting padding. But saying your IP address is MTI4Yml0ID0gMTZjaGFycw== is not only nonsensical, but that's, admittedly, less memorable and more prone to error.
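Incidentally, those shortening rules are mechanical enough that standard libraries implement them; Python's built-in `ipaddress` module, for one, applies exactly these rules when printing an address:

```python
import ipaddress

addr = ipaddress.ip_address("2607:f0d0:1002:0051:0000:0000:0000:0004")
print(addr.compressed)  # 2607:f0d0:1002:51::4  (leading zeros dropped, one :: run)
print(addr.exploded)    # 2607:f0d0:1002:0051:0000:0000:0000:0004

print(ipaddress.ip_address("::1").exploded)
# 0000:0000:0000:0000:0000:0000:0000:0001
```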
So really, while we're doing the best I think we reasonably can with 128 bits of data to represent, textually, legibly, in a manner that's not prone to entry errors, I will still add a minor fault here: as good as it is, it's still unwieldy. I know this is IPv6, meaning that version 5 was skipped; part of me wonders if 64-bit addressing was ever considered, and assuming it was, why it was rejected.
And remember that this address violates the URL spec, since the : character is specifically to be used to separate the host portion (e.g., google.com) from the port to connect to (assuming nonstandard). As an example, I can reach my torrent client via http://192.168.5.43:9091. See that : there? Because Transmission listens on port 9091, not port 80. How do we fix this? Well, by breaking it again, naturally. To connect to a raw IPv6 address, you wrap it in square brackets, characters that were disallowed by the original URL specification until RFC 2732 bolted them on, and now every URL parsing library (that's updated) is going to have to handle them! To connect to 2607:f0d0:1002:51::4 directly, that's http://[2607:f0d0:1002:51::4]/ Why is this a thing?!
Admittedly, IPv6 kinda relies on DNS since… just about everything uses DNS, and of course, actual names are more memorable than 32 hexadecimal digits, but DNS isn't magic. Unless you have your own DNS server (actually not that hard) that's configured, you're still manually typing addresses. Of course if you have, say, pfSense managing your network, every static DHCP lease will be registered in DNS, but it has to take a DHCP lease. And if this device doesn't… well, I hope you don't mind typing that out by hand to connect so you can configure it. For everyone I know that responds to the criticism of IPv6 addresses being long and un-memorable with "just use DNS," please remember that is not always an option. Not every single internal or ad-hoc network has (split-horizon) DNS set up, working, configured for everything, and with all addresses populated. Unless you have some system, like, as I just said, pfSense, that's automatically adding entries, then somewhere you're going to have to be handling addresses. If you're making firewall rules, somewhere, you're going to be handling addresses. A fundamental part of basically any layer-3 network protocol is that, at some point, you will be handling addresses. "Just use DNS" is such a simplistic answer that it forgets all of the nuance and complexity with actually working with networking.
Even better, rDNS. rDNS, or Reverse DNS, is where a DNS query is performed with an IP address that returns the hostname associated, in a PTR record. For example, the IPv4 that google.com resolves to, 216.58.192.142, can be queried as 142.192.58.216.in-addr.arpa to get it's "real" name, a PTR record for ord36s01-in-f142.1e100.net. With dig, specifying a -x and then the IP will convert it to the correct format. And if you look close, the query name is the IP, backwards, with in-addr.arpa at the front. It's backwards because of the hierarchical nature of DNS, which runs right to left, the opposite of IPs. Of course, there's also rDNS for IPv6:
$ dig -x 2607:f0d0:1002:51::4
; <<>> DiG 9.11.3-1ubuntu1.12-Ubuntu <<>> -x 2607:f0d0:1002:51::4
; EDNS: version: 0, flags:; udp: 65494
;4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.5.0.0.2.0.0.1.0.d.0.f.7.0.6.2.ip6.arpa. IN PTR
4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.5.0.0.2.0.0.1.0.d.0.f.7.0.6.2.ip6.arpa. 3600 IN PTR 4000.0000.0000.0000.1500.2001.0d0f.7062.ip6.static.sl-reverse.com.
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Sun Aug 30 00:52:50 EDT 2020
That is insane. The IPv6 rDNS TLD is just ip6.arpa, and the IP part is… every single hex digit, reversed.
2607f0d0100200510000000000000004
4000000000000000150020010d0f7062
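At least you never have to build that monstrosity by hand; Python's `ipaddress` module, for example, will generate the nibble-reversed name for you:

```python
import ipaddress

print(ipaddress.ip_address("216.58.192.142").reverse_pointer)
# 142.192.58.216.in-addr.arpa

print(ipaddress.ip_address("2607:f0d0:1002:51::4").reverse_pointer)
# 4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.5.0.0.2.0.0.1.0.d.0.f.7.0.6.2.ip6.arpa
```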
One of the core pillars of IPv6 is that most of the processing of traffic should happen at the endpoints — routers don't do much besides read, and forward, with very little actual data processing. The IPv6 header while being gargantuan in comparison (because the giant addresses), is also much simpler. A fixed version code (6), the traffic class (DiffServ + ECN), a flow label which is effectively a random value that's constant for every packet that's part of the same logical connection, length, type of next header, and a TTL (then addresses, obviously). That is it. One interesting note: the TTL field is now called the "Hop limit" because it specifies the number of nodes a given packet can be forwarded to (each called a "hop", obviously) before it's dropped. In IPv4, this field was the time in seconds the packet could live, always rounded up to a minimum of… one second. In practice, this meant that the TTL was a hop limit, but according to spec it was actually time based, just that nothing (usually) moved slow enough for that to be important. Another change that's, well, I have nothing specific to say but I might as well point it out as long as I'm here, if the final destination receives a packet with a hop limit of 0, it will process the packet. In IPv4, the packet would be received, the TTL decremented, and if that left it at expiry, it was dropped, and an ICMP message was fired off in response.
The IPv4 header contains 13 fields plus optional sections, and the IPv6 header contains a flat 8. Of course, you may actually need other details, and that's where the next header field comes in. Instead of packing all sorts of options into the standard, global header, IPv6 uses additional header extensions that can get tacked on one after another to provide that information, say for fragmentation or IPsec.4 The value in the next header field identifies what's going to come next, and the final header in the line will use this field to indicate the contained protocol, be it TCP, UDP, or something else.
This has some benefits, mostly from the ease-of-processing department. The addresses are also aligned to a word boundary, meaning they should be able to be loaded into processor registers rather quickly. This, again, is something I can commend, somewhat. I can see the beauty in having a bunch of smaller headers daisy-chaining onto each other, as compared to one big variable header that packs in everything. I do also think that, to some extent, this was a design that almost made itself necessary, with how large those addresses are, they made it almost mandatory to split everything up, and as long as they're going that route, might as well take it all the way.
Also note there's no checksum anymore. I mean, just about every device uses Ethernet, which has a Frame Check Sequence (FCS), UDP has a checksum option, TCP has a required checksum… If we want to simplify packet processing, drop the sum. And really, it does make sense, in a way. Even without that one, that means that most programs have two checksums: the Ethernet FCS as the frame gets transmitted point to point, and the transport layer checksum, making sure the entire packet is still valid. The IPv4 checksum also covered the TTL (the max number of hops in the path before the packet is dropped), meaning that at every stop along the way, the checksum had to be recalculated.
As a final point, this also brings a change to another protocol: UDP. With IPv4, the UDP checksum field can (and often would) be left blank as all zeros, meaning "no checksum". In IPv6, this is now disallowed, and an all-zero checksum will still be checked… and then found invalid. All UDP packets over IPv6 must have a valid checksum calculated.
Let's add some examples here. I'm fairly certain the two most commonly-known extra headers are the AH and ESP headers, used for IPsec, the first one literally being called "Authentication Header." (the latter is "Encapsulating Security Payload".)
There's also two 'options' headers, the hop-by-hop options, which any node may inspect or change, and the destination options header, who's only intended recipient is, well, the destination, not potentially any device it passes through. Both of these are identical in format, and just specify some space for type-length-value options to go.5
There's also a fragmentation header, for if a packet had to be fragmented. I'll cover that shortly, but know that it holds basically the same information as the IPv4 header: identification for reassembly, offset into the full payload, and a flag to specify if more follow.
So every link between devices has an MTU, the Maximum Transmission Unit. For normal Ethernet links, minus the frame overhead, this is 1500 bytes. If your equipment supports jumbo frames, that's closer to 9000 bytes. Well not all links are equal. Some devices might have a high-MTU link on one end, and a lower MTU link on the other. For example, my router might support jumbo frames internally, but the WAN side doesn't allow that. To deal with, this, we have fragmentation.
If a router is unable to forward a frame due to MTU differences, it will, if allowed, split the packet into multiple chunks, using the More Fragments flag and the Fragment Offset field of the IPv4 header, and send the packet in multiple frames piece by piece, which the other end can reassemble. Note that I said "if allowed." There is a Don't Fragment flag in the IPv4 header, and if this is set by the sending device, a router that cannot support a packet of that size will send back an ICMP "fragmentation needed" message which bounces back along the chain. Any node in the network path can perform this fragmentation, meaning that for sending data, the only MTU you care about is the MTU of the device you're directly connected to; your own link.
IPv6 does not allow this. Intermediate routers are not allowed to fragment a packet, and instead will send back an ICMPv6 "packet too big" error. If you're going to fragment a packet (which is also heavily discouraged), then the sending device can add a fragmentation header extension, as mentioned above. So, either packets are not fragmented at all, or packets are fragmented from the originating device. IPv6 also expects senders to perform Path MTU discovery, by actually listening to those packet-too-big messages, which contain the MTU of the failing link. The sender is expected to read this, and then adjust accordingly, repeating this in a loop until the packet can pass just fine. Alternatively… don't exceed the IPv6 minimum MTU, 1280 bytes.
If you know me, you know I always say that computer networking is a miracle that only holds together by duct tape, prayers of engineers, and dumb luck. Nowhere does this hold true more than the introduction of IPv6 into a network, where just about everyone that I talk to has flat out disabled IPv6 on everything that gives them the option — it's just way too much headache to have to deal with it all, and if it's enabled, 90% of your new network problems become the fact that devices are now doing unexpected IPv6 things that you never thought of.
But besides that point, if you want to run a network with IPv6, you're likely going to have to operate both 4 and 6 just because IPv4 is still going strong. So that means for every firewall rule that involves a specific host or network, you need two: one for the IPv4 block, and one for the IPv6 block. And heaven forbid if you change one and forget the other.
NAT? Does not exist. Generally when you get an IP address, that address will be globally routable — anyone can access it, from anywhere, Hollywood style.
However, if you want a private network, there is one prefix for that: anything from fc00:0000:0000:0000:0000:0000:0000:0000 to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff (fc00::/7) is considered non-routable for private networks. Technically the first bit here you're allowed to modify should always be 1, meaning the actually RFC-compliant range is fd00::/8, which is fd00:0000:0000:0000:0000:0000:0000:0000 to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff. Half the addresses, but still plenty. Yes, the actual spec is more complicated and defines a few parts in the "network" area, but… well, you get the point.
So here's the question: Say you've done that. How do you route packets to these private IPs? The answer is Network Address Prefix Translation. Wait… what?
Yes. NAT is an IPv4 thing. NPT is an IPv6 thing. With IPv4, you can scan for packets and, if they match certain criteria (say, going to a known address, like your WAN address, on a known port), swap the destination (or source) address with a new one. This is how I am using just one IP for all my services: the destination port decides what server your request gets routed to. In this sense, all unknown traffic is dropped, and traffic that I have NAT rules for is also allowed to pass the firewall. This is a "default drop" system. Nothing gets through unless I say so.
IPv6 uses NPT, where you can transform one prefix into another prefix.
Say, for example, I have a host at the private address of fd2c:a7c6:2aae:ef93::41. I could then add an NPT rule to transform fd2c:a7c6:2aae:ef93:: into 2607:f0d0:1002:51::. This is effectively a 1:1 mapping, meaning that it works both ways; both inbound and outbound will be translated.
For this, I could then, say, advertise (with an AAAA record, perhaps) the public IP of the server as 2607:f0d0:1002:51::41, and when the packet comes in…
ORIG DEST: 2607:f0d0:1002:0051::41
PREFIX: |||||||||||||||||||
REPLACE: fd2c:a7c6:2aae:ef93
NEW DEST: fd2c:a7c6:2aae:ef93::41
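Mechanically, it really is that dumb a swap. Here's a toy sketch of the 1:1 mapping using the made-up prefixes from above; note that real NPTv6 (RFC 6296) additionally adjusts bits to stay checksum-neutral, which this sketch ignores:

```python
import ipaddress

def swap_prefix(addr, to_net):
    """Blindly graft addr's low 64 bits onto to_net's /64 prefix (toy NPT)."""
    host_bits = int(addr) & ((1 << 64) - 1)
    return ipaddress.ip_address(int(to_net.network_address) | host_bits)

inside  = ipaddress.ip_network("fd2c:a7c6:2aae:ef93::/64")
outside = ipaddress.ip_network("2607:f0d0:1002:51::/64")

# Inbound: public destination -> private host
print(swap_prefix(ipaddress.ip_address("2607:f0d0:1002:51::41"), inside))
# fd2c:a7c6:2aae:ef93::41

# Outbound: private source -> public address
print(swap_prefix(ipaddress.ip_address("fd2c:a7c6:2aae:ef93::41"), outside))
# 2607:f0d0:1002:51::41
```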
Which means I need a different IP for every different destination, and, additionally, I'm also giving away some details about the internal structure of my network! You may not know the internal prefix, but you'll know each host's exact suffix within the subnet, since I'm only translating a prefix!!!
That "different IPs" bit may sound a bit… duh, then remember that for some systems I run (like a VCS that has a web interface, and SSH), the port number alone is what decides the destination, you could even still go to the same domain name and it counts. With NPT, you cannot do this, you'd have to have an additional device like a layer 7 proxy (like HAProxy) to take in everything and send it to the correct destination, meaning I need a dedicated host to do the thing that IPv4 NAT could already do natively!
And it gets better. Remember, this will blindly just swap prefixes in and out. With IPv4 and NAT, you, technically, bear with me because I'm about to make some stupid claims, have two layers of security: First is the firewall, any packets that get dropped get dropped. Second is NAT. While NAT should not be how you secure things, its effect is actually pretty substantial: even if a packet comes in that the firewall should have caught, without a matching NAT rule, there's no way of knowing what host to actually send it to, and it'll just hit the firewall itself, likely generating either a useless message, or just a TCP RST. NPT removes this, meaning the only thing with any say in what comes and goes is the firewall. Now, a proper firewall will adhere to some basic security guidelines… like packets not matching any rule will be dropped. However, coming from the IPv4 world where literally only the traffic that I told it how to pass could get through, that really sounds like a firewall mistake waiting to happen, somewhere.
But I'm not done yet. The entire practice is… just flat out discouraged. The pfSense manual even says that what I just said might also not work correctly, so, nice, but also, the entire point of IPv6 is that all nodes are globally routable, you don't need special private address spaces or translation of any kind, it just works. Like, really now, was the IETF watching some cybercrime flicks as they wrote the RFC? Every computer just by default accessible anywhere in the world unless you specifically firewall things (and/or let DEFAULT DROP RULE handle most of your traffic)? I get that even with IPv4 that's how things worked until you set up a subnet boundary, but here, in v6, it's either a firewall or no protection. This whole "end-to-end" focus really feels poorly thought out, from a techie perspective. And, as data shows, the kind of devices that do actively use IPv6 (mobile devices, mainly) are able to just zeroconf themselves perfectly, which is nice from a "just works" perspective, but like many things recently, the "well it needs to work seamlessly" side seriously clashes with the "actual useful functionality" side.
DHCP in IPv4 is a four step process:
Client sends out a DHCPDISCOVER packet from a source of 0.0.0.0 to the broadcast address
Server sends a DHCPOFFER to broadcast IP (but destination MAC) with the offered client IP, subnet mask, DNS servers, lease time, and other information
Client sends a DHCPREQUEST for that specific IP address
Server sends a DHCPACK with the same information in the offer, thus confirming the IP assignment
DHCP here also includes a lot of data in the form of numbered options, but here's some common values:
IP address (duh)
Subnet mask
Up to 4 DNS servers
Up to 2 WINS servers (deprecated)
Domain name (for the subnet)
Domain search list (domain name prefixes to try resolving hosts with)
Up to 2 NTP servers
Valid IP, hostname or URL for a TFTP server
Network LDAP server URI
PXE network boot (PXE compatible server IP, file names)
Some additional values like the default MTU (option 26) can also be sent. There's 254 valid options, as 0x00 is padding and 0xff marks the end of the message.
In just four UDP messages, a host just powering on can gain just about every bit of information that it may need.
DHCPv6… not so much. The job is split in two (DHCPv6 itself and router advertisements), so let's go over DHCPv6 first. This is, again, a four-step process:
Client sends a SOLICIT from its link-local to the "All DHCP" multicast address, ff02::1:2
Server responds with an ADVERTISE with the client IP
Client sends a REQUEST for that IP.
Server gives a REPLY, and confirms the assignment.
DHCPv6 also has some extra data, too:
Network boot file URL
Some obsolete options like WINS servers are removed, but you'll notice some things like the network gateway are completely missing. Also, the DHCP server just provides the local part of the address; it doesn't even give out the network prefix. But before we get into that, here's something fun: DHCP(v4) uses the MAC address of the client as its identifier; IPs are leased to a particular MAC. DHCPv6 uses a DHCP Unique Identifier (DUID), which is usually the MAC address, with other things. There's four types: one for the MAC + timestamp (DUID-LLT, Link-Layer + Time), a unique enterprise number based DUID (DUID-EN), just the MAC (DUID-LL), and a UUID based one (DUID-UUID). IPs are leased out to DUIDs, not MACs, and so it's actually really difficult to make reservations ahead of time. The easiest course of action is to wait for the client to grab a lease, then upgrade that to a static assignment. Oh, and yes, of course, here's an example MAC + time DUID: 00-01-00-01-18-BA-30-56-D8-9D-67-C9-FA-33. Are you noticing a pattern here with IPv6? Everything is just getting long, unwieldy, and, in my opinion, creates needless complexity in the interest of simplicity in one of the most ironic twists I've seen in computing and networking.
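To show just how much is packed into that identifier, here's a sketch that unpacks the example DUID-LLT above. The layout comes from RFC 8415: a 2-byte type, a 2-byte hardware type, a 4-byte timestamp (seconds since 2000-01-01 UTC), then the link-layer address:

```python
import struct
from datetime import datetime, timedelta

duid = bytes.fromhex("00010001" "18BA3056" "D89D67C9FA33")

duid_type, hw_type, secs = struct.unpack("!HHI", duid[:8])
mac = duid[8:].hex("-")

print(duid_type)  # 1 -> DUID-LLT (link-layer address plus time)
print(hw_type)    # 1 -> Ethernet
print(datetime(2000, 1, 1) + timedelta(seconds=secs))  # early 2013, apparently
print(mac)        # d8-9d-67-c9-fa-33
```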
Note that this doesn't happen by default either. DHCP will only be used if SLAAC / RA permits. And while I'll get to SLAAC in a second, let's talk about the Neighbor Discovery Protocol, and Router Advertisements.
When an IPv6-capable host joins a network, it will send out a Router Solicitation (hmm, I'm seeing "solicit" as a verb here a few times, I think I found everyone's new favorite word) message. Available IPv6 gateways that can forward frames will periodically send out Router Advertisement messages, or, if they see a solicitation, will immediately send out an advert.
The advertisements contain the M and O flags (hold on), a lifetime for which the advert should be considered valid, up to three DNS servers (why only three? Beats me), a search list (same as DHCP), and the network prefix. The advert also contains a priority, one of low, normal, or high, for that particular router. And no, it's not usually a good idea to have more than one router on the same priority level. If you have more than three gateways on a network (because of course that's a thing you can do now), have fun. Anyways, the Managed and "Other stateful" flags control the behavior of hosts a bit more in-depth. There's even more flags concerning the prefix data, but at the end of the day, it all boils down to a list of modes you can pick from:
Router only : Just advertise the router as a valid gateway
Unmanaged : Network has no DHCP or other infrastructure, obtain everything through SLAAC
Managed : Network has DHCP, obtain all config through DHCPv6
Assisted : Network has DHCP, obtain config through DHCPv6 and SLAAC
Stateless DHCP : Get address through SLAAC, everything else (network options) through DHCPv6 (DHCP server keeps no state)
"Assisted" as far as I can tell means that clients will obtain one address through DHCPv6, and then also generate a link-local address as well.
One small footnote here to DHCPv6: prefix delegation. If a client asks, then the DHCPv6 server can give out entire prefixes of its own valid address space. This makes hierarchical DHCP possible, where your root server with a full /48 can hand out, say, /56s to requesters, who could hand out /60s or /64s to their requesters… And yes, you can have multiple delegations at once, because the options allow for a start and end range, and then the prefix length. Similar to the start and end of the plain address range. This is specifically an "if asked for" thing though, keep in mind. Only clients that, likely, are gateways for their own LANs are really going to ask for a delegation. It is pretty cool that the address space is large enough to handle hierarchical assignments like this.
Primer: SLAAC is the IPv6 version of APIPA.
The basics for both protocols is to, essentially, give a new host an IP address that works, without relying on other external mechanisms like DHCP. In IPv4, this is called Automatic Private IP Addressing, or APIPA. APIPA addresses occupy the 169.254.0.0/16 block, so anything from 169.254.0.0 to 169.254.255.255.6 A client will pick a random value in this range, run an ARP query to make sure it's free, then bind to it. In IPv4, APIPA is usually a last resort. And, if a new address is given, like a public address or one from one of the proper private reserved blocks, that address will overwrite the APIPA address, meaning it only exists as long as there's nothing that's giving a host an IP.
In IPv6, SLAAC (Stateless Address Auto Configuration) is always present, and the first resort. The fe80::/10 block is reserved for link-local, which, in practice, will actually run from fe80::1 to fe80::ffff:ffff:ffff:ffff, since the 54 bits between the prefix and the address are all zeros. This link-local address is used for everything else, like NDP (router solicitation!), DHCPv6, and the like. And unlike IPv4, the link-local address is always valid, even with a globally routable address. (Yes, IPv6 capable hosts will have multiple IPv6 addresses on one interface. That's not mildly confusing at all.) Note that the link-local address is either deterministic (MAC address based), or partially randomized, which should be stored, if possible, so that reboots don't cause the device to change addresses. Part of the SLAAC process (NDP) gives it the rest of the data it might need besides just a valid local address.
Oh, did… did you think I was done on this point? Nope! It's very likely that a host can generate the same IPv6 link local address for two different network interfaces. In terms of incoming data this doesn't matter, since, data received is data received. But for sending, the network driver needs to know what device to use. Thus enters the zone index. A zone index is either a string, or a numeric value. Numeric values are required to be supported, but many Unix-like systems allow you to specify the interface name itself textually. Example? fe80::1ff:fe23:4567:890a%eth2. So, did you know, this means that a full URL that uses every part available to it looks like this:
http://username:[email protected][fe80::1ff:fe23:4567:890a%eth2]:9091/transmission/web/?query=value#confirm
Be glad you can't hear what I'm shouting at my monitor right now.
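For what it's worth, the zone index has leaked into standard libraries too; Python's `ipaddress` module (3.9 and later) parses the % suffix, and treats the same address with different scopes as different addresses:

```python
import ipaddress

addr = ipaddress.ip_address("fe80::1ff:fe23:4567:890a%eth2")
print(addr.scope_id)       # eth2
print(addr.is_link_local)  # True

# Same bits, no zone index: not equal to the scoped address!
print(ipaddress.ip_address("fe80::1ff:fe23:4567:890a") == addr)  # False
```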
The Address Resolution Protocol is the protocol used in IPv4 to translate link layer addresses (MAC addresses) into internet layer addresses (IP addresses). Even though the protocol is more generalized (it works on more than just Ethernet and IPv4, even Chaosnet has identifiers for ARP), in general, a client will ask a simple question: "Who has IP address X? Tell IP address Y." The ARP message contains fields for both the IP and MAC of both the sender and receiver. On receiving an ARP request for your IP, the host will respond with a "MAC address Z has IP address X" message.
This protocol relies heavily on broadcast. Well wouldn't you know it, IPv6 has no broadcast. There's an "all nodes" link-local multicast which does effectively the same thing, but it's technically not the same thing. So instead, we have the Neighbor Discovery Protocol. Router solicitations and advertisements are part of NDP, as well as neighbor solicitations and advertisements (again with the soliciting!), the only difference being that NAs aren't periodically sent out like RAs are. But besides that there's really nothing to it, other than the fact that we had to create an entirely new protocol to do the exact same job as an already existing protocol because someone removed broadcast, the second most common type of traffic. Need to find someone? Send out a Neighbor Solicitation, and wait for the responding Neighbor Advertisement.
Oh, and for the record, NDP is actually just a set of 5 different ICMPv6 messages.
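Specifically, these five (type numbers per RFC 4861):

```python
NDP_TYPES = {
    133: "Router Solicitation",
    134: "Router Advertisement",
    135: "Neighbor Solicitation",
    136: "Neighbor Advertisement",
    137: "Redirect",
}
```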
Did you know that IPsec was not only originally developed for IPv6, but also planned to be a mandatory part of all implementations? Yeah… that got downgraded to "recommendation" pretty quickly. At least if you do implement IPv6 IPsec, you need to implement IKEv2 and certain ciphers, so there's a guaranteed level of compatibility.
Anyways, IPsec has found good deployment in IPv4 networks (I use IPsec as a VPN for my phone since OpenVPN sucks on it),7 and some parts of it really tell you just how IPv6-minded the IETF was. I mean, the two main modes of operation, AH (authentication and integrity, i.e. resistance to changes) and ESP (authentication and encryption), are actually IPv6 extension headers.
Hey at least IPsec wasn't a requirement on the wire, like HTTPS practically is today. Imagine somehow literally needing to configure IPsec credentials for ANY connection on the internet. We might get there some day, when we make a way to automatically configure IPsec in the same way that TLS "just works", but right now I'm glad that's nowhere near likely.
Enabling IPv6 on the LAN side of Sophos UTM 9 causes the WAN side to lose its link. …Why? I can't even make my local network IPv6 for link-local communications, because then the machine can't connect to the rest of the internet. I honestly cannot for the life of me begin to fathom a reason as to what could cause this, other than bad drivers.
Because yes, that is one serious problem here: bad drivers. IPv6, on paper, is wonderful, with some glaring holes in it. The major issue with it is the implementations, from what I've experienced. All the joys in the world can't hold up anything on a sloppy implementation. Or, even better, sloppy handling in general. Devices that lose their links when you enable IPv6 on one side and not the other, which, in a decently configured network, shouldn't actually cause a problem, and yet it does (probably something with router advertisements, now that I've brained it a bit longer). Programs that legit refuse to listen on IPv4 localhost unless you actually force the underlying VM to forget IPv6's entire existence (looking at you, certain unnamed Java-based log ingest and processing platform). Programs that take an instruction to bind to 0.0.0.0 as implicitly saying to bind to ::, not binding to 0.0.0.0 and making netstat's output confusing until I stop telling it to only show IPv4 results… the thing I asked the program to use.
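That last complaint comes down to one socket option whose default varies by OS. A sketch of pinning the behavior explicitly instead of trusting the platform (Python; the port number is just an example):

```python
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# 0 = dual-stack where the OS allows it (IPv4 peers appear as ::ffff:a.b.c.d),
# 1 = this socket speaks IPv6 only.
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 9091))
srv.listen()
```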
IPv6 is a revamped version of the Internet Protocol built for modern times, complete with a massive address space, simplified headers, less expectation of in-flight processing, and a few other benefits I haven't talked about. But in the pursuit of making things simpler and focusing on a more end-to-end approach like in the early days of the internet, we've gotten something so monstrously complex compared to its predecessor that it writes its own death warrant. Nobody wants to undertake such a large responsibility if almost nobody uses it… yeah, that's an apparently unexpected catch-22. And it's not the only one either: IPv6 really only makes sense if everything along the line supports IPv6; a lot of the benefits disappear if you have to 6in4 tunnel, or at some point literally just downgrade to IPv4. And, again, that's not happening, so why bother? IPv6, in a mad dash to make a simple protocol, really threw out the baby with the bathwater, in my opinion. Parts had to be redesigned (NDP) to fit, and the second most common addressing mode (broadcast) got removed in favor of massively expanding the least used addressing mode (multicast), for some reason…
And, okay, the lack of NAT, an intentional design decision, is where I draw the line and say this is not for me. Maybe one day, when IPv4 really is being put into obsolescence, I'll make the change, but there's no attempt to make the two compatible besides literally running dual stack, which also means managing two networks instead of one. At the very least we could have been given some useful transition mechanisms, but… not really. There's just no point. And that's intentional. It was, from what I can tell, a conscious decision not to add any transition mechanisms from 4 to 6, because 6 is different enough, and it didn't want to be tied down to the problems with IPv4 that it's trying to get rid of. So you get nothing… okay, you get a method to represent IPv4 addresses inside IPv6 addresses. Besides that, officially, you get nothing.
IPv6 wants to do a lot of good, and it has the potential to do a lot of good, but it makes itself so difficult to handle that it's just not worth it. The entire network and routing model is different enough that not much of your IPv4 knowledge will carry over. Sure, it's still the Internet Protocol, but the name is about the only thing the two really have in common at the actual network level. It genuinely, genuinely doesn't feel like IPv6 was something meant for consumers to handle; heck, it doesn't even feel like something most IT departments were meant to handle. It feels like something that wants to revamp the backbone of the internet, be in the hands of ISPs and major companies who can afford to basically re-train their entire engineering staff to work with it, have them be knowledgeable enough to carry all the major communications of the world on IPv6, and leave little change for the rest of us, those of us who don't have the ability to handle that much headache.
Actually, you know what? That's the best way of putting it: IPv6 feels like it was made solely for computers, not people. The addresses are large for future-proofing, and a computer likely doesn't care if it's 32 bits or 128, as long as it can read them. A computer doesn't care if the numbers at play are unfathomably large, or if the underlying changes break a lot of existing knowledge, because it has no reason to. IPv6 is so forward-thinking, and so packed full of "enhancements", that it's a beautiful thing if you're a network driver, but a never-ending headache for a human. Unless you're a human with little to no knowledge of IPv4, any attempt to find some common ground, to understand how one is like the other, will just leave you with more questions than answers; the way out is to stonewall off one from the other, acknowledge them as basically similar in name only, and learn the IPv6 way of doing things.
Really, that's what I say when I read about most of the things IPv6 has done: "Why? You had almost zero real reason to do this and you did it anyways!" I'm really looking forward to seeing how much IPv6 changes in the future, whether some of these decisions will get reversed, or whether it'll just saunter on, pushing itself further into the dual catch-22 that keeps adoption so low today.
I will address this, though: for the mobile and non-tech-savvy market, IPv6 is huge; I believe the USA has roughly 33% adoption according to Google's traffic logs. For anything local-network, or anywhere you have no fancy equipment, where you just plug a computer or two into your modem and watch cat videos, IPv6 will configure itself, since your ISP put a lot of time and effort into getting it right, and then you can just have fun. But the moment someone with some networking smarts starts poking around, adding equipment, and building their own network, IPv6 is something that will almost immediately get thrown to the bottom of the garbage bin, never to be considered again. You're not an ISP that can use the money it's gained from predatory business practices (ahem, Comcast) to pay a team of engineers to sort this one out; you're one person, in one home, with a few pieces of gear, and once you start reading up on how to do everything, you swap out your morning coffee for an alcohol of choice, gulp it down, and decide that maybe this is something best left untouched.
That's 340 undecillion, 282 decillion, 366 nonillion, 920 octillion, 938 septillion, 463 sextillion, 463 quintillion, 374 quadrillion, 607 trillion, 431 billion, 768 million, 211 thousand and 456. What is this, Scrooge McDuck's net worth?! ↩︎
Also known as 79,228,162,514,264,337,593,543,950,336, or 79 octillion, 228 septillion, 162 sextillion, 514 quintillion, 264 quadrillion, 337 trillion, 593 billion, 543 million, 950 thousand and 336. ↩︎
18 quintillion, 446 quadrillion, 744 trillion, 73 billion, 709 million, 551 thousand and 616. ↩︎
The IPsec mode that just provides authentication but not encryption is called AH, or Authentication Header. And this explains why. IPsec (as you'll read later if you immediately jumped to this footnote) was originally meant to be a core part of IPv6, and you could authenticate your packets with an authentication header extension after the IPv6 header. ↩︎
The one that I see a lot of references to is the jumbo payload option. Normal IPv6 packets have a maximum payload of 65,535 bytes, though this option extends that to 4 GiB - 1 B, even though both far exceed most links' MTUs. ↩︎
The first and last 256 addresses, so anything starting with 169.254.0 and 169.254.255, are reserved for future use, and not to be used. Valid APIPAs then are anything between 169.254.1.0 and 169.254.254.255. ↩︎
Update: Actually Android broke this at one point in an update, and I've since changed to WireGuard. ↩︎
Confusion between B and H
Reasons for the confusion
Systems of units
Advantages of the CGS system
Multiplicity of names
Similarity of names
Confusion between B and H - a problem recognised in the literature, concerning the distinction between the quantities and physical units of magnetic flux density B and magnetic field strength H.1)2)3)4)
Both B and H are strictly defined in terms of measurement units as well as their physical meaning.5)6)7)
Electric current I produces around itself a magnetic field strength H, whose amplitude is independent of the type of a continuous isotropic medium (regardless of whether it is non-magnetic, magnetic, non-linear, etc.)8)
For an infinitely long straight round wire it is:
$H = \frac{I}{2 ⋅ \pi ⋅ r}$
(A/m)
where: r - radius of a circle enclosing the current I, π - the mathematical constant.
Flux density B is a response of the medium to the applied excitation H. The relationship is defined by the magnetic permeability μ, such that:
$B = \mu ⋅ H$
Thus, B is related to the properties of the material, and its relation to the applied excitation (e.g. electric current) can be highly non-linear.
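As a quick worked example (values chosen purely for illustration): for a current $I = 1$ A at a radius $r = 0.1$ m in free space, where $\mu = \mu_0 = 4 ⋅ \pi ⋅ 10^{-7}$ H/m:

$H = \frac{1}{2 ⋅ \pi ⋅ 0.1} \approx 1.59 \text{ A/m}, \qquad B = \mu_0 ⋅ H = 4 ⋅ \pi ⋅ 10^{-7} ⋅ 1.59 \approx 2 ⋅ 10^{-6} \text{ T}$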
If the medium is non-continuous or anisotropic, then magnetic poles or a demagnetising field can be created, which themselves become sources of local excitation and add to the original source.
Some of the several possibly contributing reasons are described below.
Flux density B is a different physical quantity from magnetic field strength H
$B \neq H$
There are several internationally used systems of units which include electromagnetic units. The most widely adopted is the SI, based on the so-called "rationalised" metre-kilogram-second (MKS) system. But there are at least four systems based on the centimetre-gram-second (CGS) system.9)
In the previously used CGS system, the permeability μ was unitless,10) and for free space it was mathematically true that μ = 1, so it could be written that B = H (because for free space the magnetisation term can be omitted).11)
Such notation is in itself confusing, because it makes it more difficult to distinguish between the two different quantities, especially since the unit of B was the gauss and that of H the oersted, yet the two could be equal to one another (in a similar sense as 1 inch = 25.4 mm).
For instance, in the CGS system the intrinsic induction Bi (equivalent of polarisation J in the SI system) is defined as Bi = B - H (even though B is given in gauss and H in oersteds).12)
Such equality is no longer true in the currently used SI system.
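For reference, a worked check using the standard conversion factors: $1 \text{ gauss} = 10^{-4} \text{ T}$ and $1 \text{ oersted} = \frac{10^3}{4 ⋅ \pi} \text{ A/m} \approx 79.577 \text{ A/m}$. A free-space field of H = 1 Oe therefore corresponds to B = 1 G in CGS (numerically equal), while in SI the same field is H ≈ 79.577 A/m and B = μ0 ⋅ H = 10-4 T (numerically different).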
In theoretical physics13) the CGS units continue to be used alongside the MKS units. The approach used in the CGS system has some advantages when performing certain electromagnetic calculations (e.g. the lack of the 4·π factor), even though a distinction has to be made between "electrostatic" and "electromagnetic" CGS units.14)15)
The CGS units are also used by convention, simply for historic reasons, especially in the area of permanent magnets, whose energies are often quoted in MGOe (mega-gauss-oersted).16)17)
The CGS units are still used in many applications in the USA, whereas Europe and other countries rely almost exclusively on the SI units.18)
There are several quantities and related magnetic terms: magnetic field strength, magnetic flux density, magnetisation, polarisation, magnetic flux and also magnetic field.
Under certain conditions, the practical differences between some of these quantities are small. For instance, for magnetically soft materials under low-amplitude excitation the difference between B and J is negligible for most practical purposes.
Also, for non-magnetic materials it can be assumed that B and H have a linear relationship, so if one is known then the other can be easily calculated. For those not proficient in the physics of magnetism, such notation could suggest that the distinction is not significant enough to differentiate between the quantities. However, the distinction has to be made, even on the basis of units alone.
A common imprecise short-hand is to refer to H as "magnetic field" instead of the full name "magnetic field strength",19) and similarly for B, depending on the context.20)
However, in the literature these terms are referred to by various names, sometimes even with an implied incorrect meaning. For instance, H can be referred to as "magnetic field",21) "applied field"22) or "auxiliary field".23) But B might also be called "magnetic field"24) or "auxiliary field".25)
Different authors have various scientific backgrounds. From the viewpoint of their discipline, some equations are more "important" than others, for instance due to how frequently they are applied in solving various problems. Hence, either H or B can be treated as "more important" than the other.
But in theoretical physics it is often B that is regarded as more fundamental, or indeed even "real". This is because magnetic forces are proportional to B, not H.26)
1), 7), 8) David C. Jiles, Introduction to Magnetism and Magnetic Materials, Second Edition, CRC Press, 1998, ISBN 9780412798603, p. 5-11
2), 9), 14) Douglas L. Cohen, Demystifying Electromagnetic Equations: A Complete Explanation of EM Unit Systems and Equation Transformations, SPIE Press, 2001, ISBN 9780819442345, p. vii-viii
3) Physics Forums, In magnetism, what is the difference between the B and H fields? {accessed 5 Feb 2014}
4) ResearchGate, Difference between B, H and M in magnetics, https://www.researchgate.net/post/Difference_between_B_H_and_M_in_magnetics, {accessed 5 Feb 2014}
5) Derived units expressed in terms of base units, SI brochure, {accessed 6 Jan 2014}
6) Units with special names and symbols; units that incorporate special names and symbols, SI brochure, {accessed 6 Jan 2014}
10) Alex Goldman, Handbook of Modern Ferromagnetic Materials, Springer, 1999, ISBN 9780412146619, p. 641
11) Nicola A. Spaldin, Magnetic Materials: Fundamentals and Applications, Cambridge University Press, 2010, ISBN 9781139491556, p. 14
12) Glossary of Magnet Terms, Dura Magnetics Inc. {accessed 2015-10-27}
13) S. N. Ghosh, Electromagnetic Theory and Wave Propagation, CRC Press, 2002, ISBN 9780849324307, p. 51
16) Edward P. Furlani, Permanent Magnet and Electromechanical Devices: Materials, Analysis, and Applications, Academic Press, 2001, ISBN 9780122699511, p. 54
17) Alex Goldman, Modern Ferrite Technology, Springer, 2006, ISBN 9780387294131, p. 227
18) Perry A. Holman, Magnetoresistance (MR) Transducers and how to use them as sensors, 1st edition, Honeywell International, 2004, p. 4, {accessed 15 Feb 2014}
19) Encyclopaedia Britannica, Magnetic field, {accessed 5 Feb 2014}
20) Encyclopaedia Britannica, Magnetism, {accessed 5 Feb 2014}
21) Ronald T. Merrill, M. W. McElhinny, Phillip L. McFadden (ed.), The Magnetic Field of the Earth: Paleomagnetism, the Core, and the Deep Mantle, Volume 63 of International geophysics series, ISBN 9780124912465, p. 136
22) Wei Gao, Zhengwei Li, Nigel M. Sammes, An Introduction to Electronic Materials for Engineers, World Scientific, 2011, ISBN 9789814293693, p. 207
23) John A. Camara, Power Reference Manual for the Electrical and Computer PE Exam, www.ppi2pass.com, 2010, ISBN 9781591263678, p. 23-5
24) S.K Gupta, Electro Magnetic Field Theory, Krishna Prakashan Media, ISBN 9788187224754, p. 3.49
25) Philips Technical Review, Volumes 11-12, Philips Research Laboratory, 1950, p. 66
26) Charles F. Stevens, The Six Core Theories of Modern Physics, MIT Press, 1996, ISBN 9780262691888, p. 85
Proof polynomial has only one real root.
I need to prove that this polynomial equation: $$x^5-(3-a)x^4+(3-2a)x^3-ax^2+2ax-a=0\quad\text{ for }\quad a\in(0,\frac{1}{2})$$ has only one real root. That it has at least one real root is obvious, because it is of odd degree. But Descartes' rule of signs fails here to bound the number of roots to one.
$\begingroup$ For $a=0$ it has a triple root at $x=0,\,$ i.e. there are three real roots. $\endgroup$ – gammatester Apr 4 '14 at 9:00
$\begingroup$ @gammatester True, but I think OP is not counting multiplicity of roots (that is, there is still only one root, $0$). $\endgroup$ – 5xum Apr 4 '14 at 9:03
$\begingroup$ Sorry, my mistake...I forgot a condition on $a$ that now has been added. $\endgroup$ – Ambesh Apr 4 '14 at 9:04
$\begingroup$ en.wikipedia.org/wiki/Sturm%27s_theorem $\endgroup$ – athos Dec 16 '14 at 2:34
Consider $$p(x) = x^5-(3-a)x^4+(3-2a)x^3-ax^2+2ax-a$$ First, we note that if $x< 0$, each term is negative, hence there are no negative roots. Also, $p(0) = -a < 0$. Further,
$$p'(x) = 5x^4-4(3-a)x^3+3(3-2a)x^2-2ax+2a$$
So it is sufficient to show that $p'(x) > 0$ for $x > 0, \; a \in (0, \frac12)$: then $p$ is strictly increasing on $(0,\infty)$, and since $p(0) < 0$ and $p(x) \to \infty$, it crosses zero exactly once.
For this, note that by AM-GM inequality, $\frac12 ax^2+2a \ge 2ax$, so it is sufficient to show that: $5x^4+\frac12(18-13a)x^2 > 4(3-a)x^3$. By AM-GM we again have: $$5x^4+\tfrac12(18-13a)x^2 \ge 2\sqrt{\frac{5(18-13a)}2}x^3$$
So it is enough to show $5(18-13a) > 8(3-a)^2 \iff 8a^2+17a < 18$, which is true for $a \in (0, \frac12)$.
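Not part of the proof, but as a quick numerical sanity check of the claim, here is a short Python snippet using numpy's root finder (the sample value $a = 0.25$ is arbitrary):

```python
import numpy as np

a = 0.25
coeffs = [1, -(3 - a), (3 - 2 * a), -a, 2 * a, -a]  # p(x) for this a
roots = np.roots(coeffs)
real = [r.real for r in roots if abs(r.imag) < 1e-9]
print(real)  # one real root; the other four come in complex-conjugate pairs
```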
$\begingroup$ The first time you apply AM-MG, shouldn't one of the $a$'s be rooted or squared? From that point onwards I'm lost. I don't see from where comes the next expression. $\endgroup$ – Ambesh Apr 4 '14 at 10:49
$\begingroup$ @Ambesh You're correct, I missed that. Edited that. $\endgroup$ – Macavity Apr 4 '14 at 10:55
I will try to factorize your formula:
$x^5-(3-a)x^4+(3-2a)x^3-ax^2+2ax-a=0\quad\text{ for }\quad a\in(0,\frac{1}{2})$
$x^5-(3-a)x^4+(3-2a)x^3-ax^2+2ax-a=x^5+(3-a)(x^3-x^4)-ax^{3}-ax^{2}+2ax-a$
$=x^5+(1-a)(x^3-x^4)-ax^{3}-ax^{2}+2ax-a+2x^{3}-2x^{4}$
$=x^5-1+1+(1-a)(x^3-x^4)-a(x^{3}-x^{2})+2ax+2ax^{2}-a+2x^{3}-2x^{4}$
$=x^5-1+(1-a)(x^3-x^4)-a(x^{3}-x^{2})+2a(x-1)+2x^{3}(1-x)+1+a+2ax^{2}$
$=(x-1)(x^{4}+x^{2}+x+1+ax^{3}-ax^{2}+2a+2x^{2})+1+a+2ax^{2}$
$=x(x^{4}+x^{2}+x+1+ax^{3}-ax^{2}+2a+2x^{2})+1+a-(x^{4}+x^{2}+x+1+ax^{3}-ax^{2}+2a+2x^{2}-2ax^{2})$
$=x(x^{4}+x^{2}+x+1+ax^{3}-ax^{2}+2a+2x^{2})-(x^{4}+x^{2}+x+ax^{3}-ax^{2}+a+2x^{2}-2ax^{2})$
$\Longrightarrow{x(x^{4}+(a-1)x^{3}+(3-2a)x^{2}+(3a-2)x+2a)=a}$
$\Longrightarrow{f(x)=x(x^{4}+(a-1)x^{3}+(3-2a)x^{2}+(3a-2)x+2a)}$
$\Longrightarrow{f^{'}(x)=0}$
$\Longrightarrow{-10x^{4}=2-4a>0}$
a contradiction shows that $x=0$ is its unique root!
Is it helpful?
The American Cyclopædia (1879)/Kansas (state)
Edition of 1879. Written by Eaton S. Drone.
KANSAS, a western state of the American Union, the 21st admitted, lying between lat. 37° and 40° N., and lon. 94° 40′ and 102° W., bounded N. by Nebraska, E. by Missouri, S. by Indian territory, and W. by Colorado. A portion of the boundary on the northeast, adjoining Missouri, is formed by the Missouri river. The state has the general form of a rectangle, extending 410 m. E. and W. and about 210 m. N. and S., and containing 81,318 sq. m. It is divided into 104 counties, of which 31 in 1874 were unorganized, as follows: Allen, Anderson, Arapahoe, Atchison, Barbour, Barton, Bourbon, Brown, Buffalo, Butler, Chase, Cherokee, Cheyenne, Clark, Clay, Cloud, Coffey, Comanche, Cowley, Crawford, Davis, Decatur, Dickinson, Doniphan, Douglas, Edwards, Ellis, Ellsworth, Foote, Ford, Franklin, Gore, Graham, Grant, Greeley, Greenwood, Hamilton, Harper, Harvey, Hodgeman, Howard, Jackson, Jefferson, Jewell, Johnson, Kansas, Kearney, Kingman, Kiowa, Labette, Lane, Leavenworth, Lincoln, Linn, Lyon, Marion, Marshall, McPherson, Meade, Miami, Mitchell, Montgomery, Morris, Nemaha, Neosho, Ness, Norton, Osage, Osborne, Ottawa, Pawnee, Phillips, Pottawattamie, Pratt, Rawlins, Reno, Republic, Rice, Riley, Rooks, Rush, Russell, Saline, Scott, Sedgwick, Sequoyah, Seward, Shawnee, Sheridan, Sherman, Smith, Stafford, Stanton, Stevens, Sumner, Thomas, Trego, Wabaunsee, Wallace, Washington, Wichita, Wilson, Woodson, Wyandotte. The cities of Kansas, as reported by the federal census of 1870, were: Atchison, which had 7,054 inhabitants; Baxter Springs, 1,284; Emporia, 2,168; Fort Scott, 4,174; Lawrence, 8,320; Leavenworth, 17,873; Ottawa, 2,941; Paola, 1,811; Topeka, the capital, 5,790; and Wyandotte, 2,940. Kansas had 8,501 inhabitants in 1855, 107,206 in 1860, and 364,399 in 1870. Township and city assessors are required to make every year an enumeration of inhabitants. According to the state census of 1873, the number of inhabitants in the organized counties was 605,063; the population in the unorganized counties was estimated at 5,800, making the total population of the state 610,863, a gain of 246,464, or 67.63 per cent, in three years. Of the total population in 1870, 202,224 were males and 162,175 females; 316,007 were native and 48,392 foreign born; 346,377 were white, 17,108 colored, and 914 Indians. Of those of native birth, 63,321 were born in the state, 35,558 in Illinois, 13,073 in Iowa, 16,918 in Kentucky, 29,775 in Missouri, 18,557 in New York, 38,205 in Ohio, and 19,287 in Pennsylvania. Of the foreigners, 5,324 were natives of British America, 6,161 of England, 10,940 of Ireland, 1,274 of France, 12,774 of Germany, 4,954 of Sweden, and 1,328 of Switzerland. The density of population was 4.48 persons to a square mile. There were 72,493 families, with an average of 5.03 persons to each, and 71,071 dwellings, with an average of 5.13 persons to each. In the S. W. part of the state is a settlement of Mennonites. The increase of population from 1860 to 1870 was 239.9 per cent., a much larger gain during that period than is shown in any other state; the relative rank rose from 33 to 29. The number of male citizens 21 years old and upward was 99,069. There were in the state 108,710 persons from 5 to 18 years of age, and 95,002 males from 18 to 45. The total number attending school was 63,183; 16,369 persons 10 years of age and over were unable to read, and 24,550 could not write. 
Of the 105,680 male adults in the state, 8,894, or 8.42 per cent., were illiterate; and of the 69,645 female adults, 9,195, or 13.2 per cent., were illiterate. The number of paupers supported during the year ending June 1, 1870, was 361, at a cost of $46,475. Of the total number (336) receiving support June 1, 1870, 190 were natives and 146 foreigners. The number of persons convicted of crime during the year was 151. Of the total number (329) in prison June 1, 1870, 262 were of native and 67 of foreign birth. The state contained 128 blind, 121 deaf and dumb, 131 insane, and 109 idiotic. Of the total population 10 years of age and over (258,051), there were engaged in all occupations 123,852 persons; in agriculture, 73,228, including 21,714 agricultural laborers and 50,820 farmers and planters; in professional and personal services, 20,736, of whom 538 were clergymen, 4,481 domestic servants, 72 journalists, 7,871 laborers not specified, 682 lawyers, 906 physicians and surgeons, and 6,012 teachers not specified; in trade and transportation, 11,762; in manufactures and mechanical and mining industries, 18,126, including 4,138 blacksmiths, 625 boot and shoe makers, 5,064 carpenters, and 1,466 brick and stone masons. The total number of deaths returned by the census of 1870 was 4,596; there were 413 deaths from consumption, or one death from that disease to 11 from all causes; 599 from pneumonia, 354 from scarlet fever, 240 from intermittent and remittent fevers, and 204 from enteric fever. The Indians remaining in Kansas, not enumerated in the census of 1870, are the Kickapoos, 290 in number, on a reservation of 19,200 acres in the N. E. part of the state; the prairie band of the Pottawattamies, about 400, on a reservation of 77,357 acres 14 m. N. of Topeka; and about 56 Chippewas and Munsees, who own 5,760 acres of land about 35 m. S. of Lawrence.
State Seal of Kansas.
—The general surface of Kansas is an undulating plateau, which gently slopes from the western border, where the altitude above the sea is about 3,500 ft., to the eastern line, which is elevated about 750 ft. above the sea at the mouth of Kansas river. The river bottoms are generally from one fourth of a mile to 3 m. wide, but toward the western part of the state, on the Arkansas and Republican rivers, they are from 2 to 10 m. wide. Back from the bottom lands, bluffs rise to a height of from 50 to 300 ft., with a slope of 20° to 30°. From the summits of these bluffs may be seen a succession of rolls, or upland prairies, whose tops are from a quarter of a mile to a mile apart, and from 20 to 80 ft. above the intervening valley. The general inclination of the ridges is N. and S. There is no portion of the state which is flat or monotonous. The surface of eastern Kansas is chiefly undulating, and presents a succession of rich prairies, grass-covered hills, and fertile valleys, with an abundance of timber on the streams. The western half is not so diversified in its scenery, but it has a rolling and varied surface, with every requisite for a fine grazing country. Kansas is well supplied with rivers. On the E. border of the state the navigable Missouri presents a water front of nearly 150 m. The Kansas is formed by the confluence of the Republican and Smoky Hill rivers near Junction City, whence it flows in an E. course about 150 m. to the Missouri near Kansas City. It is not navigable, though steamboats have ascended to Junction City on the Smoky Hill. The latter has its source near the Rocky mountains in Colorado; it receives from the north in Kansas the Saline river, about 200 m. long, and the Solomon, 300 m. The Republican river rises in Colorado, and after flowing through N. W. Kansas into Nebraska, enters Kansas again about 150 m. W. of the E. border of the state; it is more than 400 m. long from its source. The Kansas receives from the north the Big Blue river, which rises in Nebraska and is about 125 m. long, and the Grasshopper, about 75 m.; on the south, it receives near Lawrence the Wakarusa, which is nearly 50 m. long. About two thirds of the state lies S. of the Kansas and Smoky Hill rivers, and is therefore called southern Kansas, the remainder being known as northern Kansas. The Osage river rises in the E. part of the state, and after a S. E. course of about 125 m. enters Missouri. The most important rivers having a southerly course are the Neosho, which rises in the central part of the state, and after a S. E. course of about 200 m., during which it receives the Cottonwood and other streams, enters the Indian territory about 25 m. W. of the S. E. corner of Kansas; the Verdigris, which flows nearly parallel with the Neosho into the Indian territory, receiving Fall river on the west; and the Arkansas, which has its sources in the Rocky mountains in Colorado. This river runs through nearly three fourths of the length of Kansas, first E. and then S. E., and with its tributaries waters two thirds of the southern part of the state. Its windings in Kansas have been estimated at 500 m. Its tributaries on the N. or E. side include the Walnut, the Little Arkansas, and Cow creek. In the S. W. corner, the Cimarron flows for a considerable distance in the state. The above constitute only the most important of the rivers of Kansas; there are numerous tributaries of these from 25 to 75 m. 
long, which with the main streams make Kansas one of the best watered of the western states; but none of them are navigable.—No thorough geological survey of Kansas has yet been undertaken; but preliminary examinations have been made by Professors G. C. Swallow and B. F. Mudge. The eastern portion of the state belongs to the carboniferous system, in which are found all the bituminous coal measures of the state. The greater part of this area is the upper carboniferous, the lower carboniferous only coming to the surface in the S. E. corner. This formation is composed of many different strata of limestone, sandstone, coal, marls, shales, fire clay, slate, selenite, &c., varying in thickness, and occurring irregularly. The carboniferous system is divided by Prof. Swallow into the following series: upper coal, 391 ft. thick; chocolate limestone, 79; cave rock, 75; Stanton limestone, 74; spring rock, 80; well rock, 238; Marais des Cygnes coal, 303; Pawnee limestone, 112; Fort Scott coal, 142; Fort Scott marble, 22; lower coal, 350; lower carboniferous, 120; total, 1,986 ft. Some of these series, however, are only local. Further west is the upper and lower Permian system, having a depth of about 700 ft., and containing numerous strata of magnesian limestone and beds of gypsum. This system is supposed to extend across the state from N. to S. in an irregular belt about 50 m. wide. Adjoining it on the west is a tract belonging to the triassic system, the strata of which have a thickness of 338 ft., and are composed of limestone, sandstone, thin coal veins, gypsum, selenite, and magnesian marls and shales. West of this is the cretaceous formation, extending to the foot hills of the Rocky mountains. It crosses the state in a N. E. and S. W. direction near the mouths of the Saline and Solomon rivers, thence covering the whole western portion of the state. Prof. Mudge says: "This is one of the richest deposits of the United States in its fossils, and possesses great geological interest. It not only abounds in well preserved fossils, similar to those of other parts of the United States, as well as of Europe, but contains many species new to science. The predominant fossils of the eastern portion of this formation are dicotyledonous leaves, of which about 50 species have been found, a dozen of which are new to science. Among these is the cinnamon, now growing only in torrid climes. More westerly are quantities of the remains of sharks and other fish, equalling in size the largest now known; also saurians and other amphibians, of large size and peculiar forms." Fifteen specimens of marine shells, three of reptiles, and five of fishes, previously unknown, were obtained here. The coal-bearing region of Kansas occupies the entire E. portion of the state, having a general width from E. to W. of about 120 m., and embracing an area of about 17,000 sq. m. Throughout this region outcroppings of bituminous coal appear. Many of the veins are thin, but some of them are 7 ft. thick and produce a good quality of bituminous coal; mining is extensively carried on at several points. Coal is also found in the W. part of the state, but of inferior quality. In this region salt also exists in large quantities in numerous springs and extensive salt marshes. The salt district embraces a tract about 80 by 35 m., crossing the Republican, Solomon, and Saline valleys. Salt is also found S. of the Arkansas river. On the W. border of the state there is an extensive deposit of crystallized salt in beds from 6 to 28 in. thick. 
It has not, however, been made available for commercial purposes, in consequence of the difficulty of access. Analyses of Kansas salt show it to be of remarkable purity, entirely free from chloride of calcium. Iron ores have been found in various localities, but not of a character to be profitably worked. Lead, alum, limestone suitable for hydraulic cement, petroleum, deposits of paints, lime, excellent building stone, and brick and other clays are found.—Perhaps no other western state has so pleasant and beautiful a climate as that of Kansas, or so many bright sunny days. The winters are milder than in the same latitude further east, the temperature rarely falling below zero. According to observations covering five years made by Prof. Snow, Kansas had more rain during the seven months from March 1 to Oct. 1 than any other of 19 northern and western states with which comparison was made; and less during the winter months than any other except one. In summer the temperature ranges from 80 to 100, but the air is dry and pure, while the nights are invariably cool and refreshing. The extraordinary clearness of the atmosphere is remarked by all strangers. The most disagreeable feature of the climate is the severe winds which sweep over the prairies during the winter months from the northwest; during summer, pleasant S. W. breezes prevail. The mean annual temperature for five years was 52.8°: spring, 52.2°; summer, 75.5°; autumn, 54.3°; winter, 29.1°. The average annual rainfall was 44.09 in.: spring, 10.82; summer, 18.6; autumn, 9.79; winter, 5.42; from March 1 to Oct. 1, 34.15. The climate of Kansas is said to be highly favorable to consumptives and those suffering with asthmatic or bronchial complaints; the central and W. portions are singularly free from the diseases which prevail in miasmatic regions and mountain districts, such as fever and ague, and rheumatic and acute febrile diseases.—The soil of Kansas is highly favorable to agriculture. On the bottom lands it is from 2 to 10 ft. deep, and on the uplands from 1 to 3 ft. In the E. half of the state it is a black sandy loam intermixed with vegetable mould. In the W. part the soil is light-colored, and is deeper than that of eastern Kansas, being from 2 to 10 ft., but it contains less vegetable mould. The soil of the entire state is rich in mineral constituents; this feature, together with an unusually good drainage, gives to it valuable qualities for the growth of vegetation. Reports covering nine years show that the average production of Indian corn per acre was 18 to 48.4 bushels, wheat 11.6 to 21.4, rye 17 to 25.8, oats 25 to 42, barley 23 to 38, potatoes 85 to 149. Fine grazing and good hay are afforded by the prairie grasses which everywhere abound, growing from 1 to 6 ft. high. The plains in the W. part of the state are covered with a small grass, which has a short curled leaf and spreads on the ground like a thick mat. It is known as buffalo grass, and is extremely sweet and nutritious. Good timber is well distributed throughout the E. part of the state, being generally found along streams and adjacent ravines. The abundance of coal and stone, however, diminishes the need of wood for fuel or building purposes. The most abundant kinds of trees are oak, elm, black walnut, cottonwood, box elder, honey locust, willow, hickory, sycamore, white ash, and hackberry. The buffalo, elk, deer, antelope, prairie dog, squirrel, horned frog, prairie hen, grouse, wild turkey, wild goose, and many varieties of small birds are found. 
The rearing of cattle is a prominent industry, and the W. part of the state presents unusual advantages for sheep raising.—According to the census of 1870, there were 5,656,879 acres of land in farms, including 1,971,003 acres of improved land, 635,419 of woodland, and 3,050,457 of other unimproved land. The total number of farms was 38,202; there were 5,478 containing between 10 and 20 acres, 13,744 between 20 and 50, 8,732 between 50 and 100, 5,346 between 100 and 500, 42 between 500 and 1,000, and 13 over 1,000. The cash value of farms was $90,327,040; of farming implements and machinery, $4,053,312; total amount of wages paid during the year, including value of board, $2,519,452; total (estimated) value of all farm productions, including betterments and additions to stock, $27,630,651; value of orchard products, $158,046; of produce of market gardens, $129,013; of forest products, $368,947; of home manufactures, $156,910; of animals slaughtered or sold for slaughter, $4,156,386; of all live stock, $23,173,185. The number of acres under cultivation was returned at 2,476,862 in 1872, and 2,982,599 in 1873; the value of farm productions in the former year was $25,265,109. The chief agricultural productions in 1870 and 1873 were as follows:
PRODUCTIONS. 1870. 1873.
Wheat, spring, bushels 1,314,522 ........
Wheat, winter 1,076,676 ........
Indian corn 17,025,525 29,688,848
Rye 85,207 301,957
Oats 4,097,925 9,337,681
Barley 98,405 508,002
Buckwheat 27,626 76,929
Peas and beans 13,109 ........
Potatoes 2,392,521 ........
Grass seed 3,023 ........
Flax seed 1,553 ........
Hay, tons 490,289 ........
Hemp, lbs. 73,400 1,410,304
Flax 1,040 ........
Cotton 3,500 251,222
Tobacco 33,241 393,352
Wool 335,005 ........
Butter 5,022,758 6,804,693
Cheese, farm 226,607 143,932
Cheese, factory ....... 151,172
Honey 110,827 135,384 (1872)
Wax 2,203 3,633 (1872)
Wine, gallons 14,889 34,505
Milk sold 196,662 ........
Orchard products, bushels ....... 713,954
Orchard products, value ....... $356,977
Grapes, lbs. ....... 828,120 (1872)
Grapes, value ....... $42,441
The number of domestic animals on farms reported by the census of 1870, and the number and value of all in the state as reported by the state authorities in 1873, were:
ANIMALS. 1870. 1873. Value in 1873.
Horses 117,786 176,161 $10,393,499
Mules and asses 11,786 17,816 1,362,971
Milch cows 128,440 ..... ........
Sheep 109,088 51,166 119,723
Swine 206,587 380,701 2,093,852
Cattle 250,527 634,021 13,314,441
—Though having an abundance of water power, Kansas has not yet attained a high rank in manufacturing industry, the people being devoted chiefly to agriculture, stock raising, and fruit growing. According to the census of 1870, the total number of manufacturing establishments was 1,477, having 254 steam engines of 6,360 horse power, and 62 water wheels of 1,789 horse power, and employing 6,844 hands, of whom 6,599 were adult males, 118 adult females, and 127 youth. The capital invested amounted to $4,319,060; wages paid during the year, $2,377,511; value of materials, $6,112,163; of products, $11,775,833. The chief industries were: 195 carpentering and building establishments, capital $146,678, products $1,725,433; 106 flouring and grist mills, capital $1,056,800, products $2,938,215; 123 founderies, capital $135,986, products $326,420; 195 lumber mills, capital $642,955, products $1,736,381; 76 saddlery and harness establishments, capital $217,205, products $425,928; 6 woollen mills, capital $92,000, products $141,750. Assessors are required to collect every year statistics of agriculture, manufactures, minerals, &c., and the state board of agriculture to publish annually a detailed statement of the various industries. Transportation facilities are afforded by the Missouri river and the numerous railroads. In 1865 there were but 40 m. of railroad in Kansas. In 1873 the entire mileage had increased to 2,131, and was being rapidly extended. The railroad assessors in the latter year returned 2,062 m., assessed at $11,704,154. The railroads lying wholly or partly within the state in 1873, together with their termini and their assessed value in Kansas, are represented in the following statement:
NAMES OF CORPORATIONS. TERMINI. Miles in Kansas in 1873. Total length of line. Assessed value in Kansas.
Atchison and Nebraska Atchison and Lincoln, Neb. 38 147 $182,619
Atchison, Topeka, and Santa Fé Atchison and state line 469 .... ........
 Branch Newton to Wichita 27 .... ........
Central branch of the Union Pacific Atchison and Waterville 100 .... 400,000
Doniphan and Wathena Doniphan and Wathena 13 .... 40,500
Junction City and Fort Kearney Junction City and Clay Centre 33 .... 99,000
Kansas Central Leavenworth and Denver, Col. 56 500 165,810
Kansas Pacific Kansas City, Mo., and Denver, Col. 476 639 3,764,745
 Branch Lawrence to Leavenworth 34 .... ........
 Branch Junction City to Clay Centre 33 .... ........
Lawrence and Southwestern Lawrence and Carbondale 30 .... 107,100
[1] Leavenworth, Atchison, and Northwestern Leavenworth and Atchison 21 .... 153,373
Leavenworth, Lawrence, and Galveston Lawrence and Coffeyville 144 .... ........
 Branch Olathe to Ottawa 32 .... ........
 Branch Cherryvale to Independence 10 .... ........
Missouri, Kansas, and Texas:
 Neosho division Junction City to Parsons 156 .... ........
 Sedalia division Sedalia, Mo., to Parsons 50 .... ........
 Osage division Holden, Mo., to Paola 19 .... ........
 Cherokee division Parsons to Arkansas river, Indian Ter. 28 .... ........
Missouri River Kansas City, Mo., and Leavenworth 23 .... 177,952
Missouri River, Fort Scott, and Gulf Kansas City, Mo., and Baxter 159 .... 1,147,474
St. Joseph and Denver City Elwood and Hastings, Neb. 138 227 647,143
[1] St. Louis, Lawrence, and Denver Pleasant Hill, Mo., and Carbondale 39 93 157,000
Total 2,128 $11,704,152
[1] Leased by the Atlantic and Pacific railroad company.
In 1873 there were in the state 26 national banks, with a paid-in capital of $1,975,000, and an outstanding circulation of $1,537,496. The entire bank circulation was $1,825,496, being $5.01 per capita; ratio of circulation to wealth, one per cent.; ratio of circulation to bank capital, 77.8 per cent. In 1874 there were 34 fire and marine and 20 life insurance companies doing business in the state.—The executive department of the government consists of a governor, whose annual salary is $3,000; lieutenant governor; secretary of state, $2,000; auditor, $2,000; treasurer, $2,000; attorney general, $1,500; and superintendent of public instruction, $2,000. All of these are elected by the people for a term of two years. The legislature at present (1874) comprises 33 senators, who are elected for two years, and 105 representatives, elected for one year. Their compensation is fixed by the constitution at $3 a day for actual service, and 15 cents a mile for travel to and from the capital; the entire per diem compensation for each member being limited to $150 for a regular and $90 for a special session. The sessions are annual, beginning on the second Tuesday of January. A two-thirds vote of all the members elected in each branch of the legislature is required to pass a measure over the governor's veto. The judicial power is vested in a supreme court, consisting of a chief justice and two associate justices, elected by the people for a term of six years; 15 district courts, of one judge each, elected by the people of the district for four years; a probate court in each county consisting of one judge elected for two years; and justices of the peace elected in each township for two years. General elections are held annually on the Tuesday succeeding the first Monday in November. The right of suffrage is limited by the constitution to white males 21 years old and over, who are either citizens of the United States or have declared their intention to become such, and who have resided in Kansas six months next preceding the election and in the township or ward in which the vote is offered at least 30 days. Persons who have engaged in a duel are made ineligible to any office of trust or profit. The property owned by a married woman at the time of marriage, and any which may come to her afterward except from her husband, remains her separate property, not subject to the disposal of her husband, or liable for his debts. She may convey her property, or make contracts concerning it. She may sue and be sued, in the same manner as an unmarried woman, and may carry on any trade or business and have full control over her earnings. Neither husband nor wife may bequeath more than one half of his or her estate away from the other without written consent. Divorces may be granted by the district court, among other causes, for abandonment for one year, adultery, impotency, extreme cruelty, drunkenness, gross neglect of duty, and imprisonment in the penitentiary subsequent to marriage. The plaintiff must have resided a year in the state. In actions for libel, the truth published with good motives and for justifiable ends may constitute a good defence. The legal rate of interest is limited to 12 per cent. Kansas is represented in congress by two senators and three representatives, and has therefore five votes in the electoral college. The total state debt, Jan. 1, 1874, was $701,550; bonded school debt of counties, $1,928,585; municipal debt, $10,899,445; aggregate, $13,529,580. 
The income and disbursements of the various funds were as follows:
SOURCES. Receipts. Disbursements. Balance.
General revenue $744,856 99 $658,855 83 $86,001 16
Interest fund 146,775 11 93,403 00 53,372 11
Sinking fund 47,229 96 8,905 00 38,324 96
Annual school fund 249,771 82 237,220 23 12,551 59
Permanent school fund 231,164 61 229,625 97 1,538 64
Military fund 7,516 89 3,500 00 4,016 89
Insane asylum fund ........ ........ 20
Railroad fund 8,210 88 6,060 31 2,150 57
Penitentiary fund ........ ........ 3,272 00
Int. on municipal bonds 58,339 16 54,289 79 4,049 37
Total $1,493,865 42 $1,291,860 13 $205,277 49
The value of taxable property, as fixed by the state board, and the amount and rate of taxation since Kansas became a state, are shown in the following table:
YEARS. Taxable property. Rate. Tax levied.
1861 $24,744,383 3 mills. 74,233
1862 19,285,749 5 mills. 101,469
1868 66,949,549 6½ mills. 435,407
1869 76,383,697 10 mills. 763,836
1870 92,528,099 8¾ mills. 809,620
1871 108,753,575 6 mills. 652,521
1872 127,690,937 8½ mills. 1,085,372
The state government is supported chiefly by a tax directly upon the people, the assessment being made upon a cash valuation of all the real and personal estate, including the property of railroad companies and other corporations. The asylums for the insane, deaf and dumb, and blind are each controlled by a board of six trustees appointed by the governor and senate. The asylum for the insane at Osawatomie is greatly inadequate to the needs of the state. The number of patients at the close of 1873 was 121; the current expenses for the year amounted to $28,221. Since the opening of the asylum in 1863, 378 persons have been admitted, of whom 161 have been discharged recovered, 38 improved, 26 stationary, and 19 died. The asylum for the deaf and dumb at Olathe, organized by the legislature in 1866, is intended to afford instruction, without charge for board or tuition, to all the deaf and dumb of the state between the ages of 10 and 21 years. The course of instruction covers six years, but may be extended in certain cases. Students are also required to devote time to industrial pursuits with a view of being able to obtain a livelihood after leaving the institution. By this means a considerable income is created for the asylum. In 1873 there were 5 instructors and 77 pupils, of whom 52 were in attendance at the close of the year. The amount appropriated by the legislature was $36,604, including $20,000 for additional buildings. The institution for the blind, founded in 1867, is at Wyandotte. It comprises educational and industrial departments, and in 1873 had 4 instructors and 33 pupils. The cost of the institution in that year was $11,590. The state penitentiary at Leavenworth at the end of 1873 had 340 convicts, of whom 19 had been sentenced by the United States and 49 by military courts; 25 had been convicted of murder, 11 of manslaughter, 10 of assault with intent to kill, 173 of larceny, 32 of burglary, 15 of robbery, and 15 of rape. The disbursements for 1873 were $126,267; the resources amounted to $139,607, including $70,000 appropriated by the legislature and $54,232 received from prisoners' labor, boarding United States prisoners, &c. Some of the convicts are employed in various industrial pursuits within the prison, while others are employed under contract outside. Convicts may receive a percentage of their earnings. In 1873, for want of a state reform school, 75 boys from 15 to 20 years of age were confined in the penitentiary.—The constitution requires the legislature to "encourage the promotion of intellectual, moral, scientific, and agricultural improvement, by establishing a uniform system of common schools, and schools of a higher grade, embracing normal, preparatory, collegiate, and university departments." The proceeds of all lands granted by the United States to the state for schools, and of the 500,000 acres granted to each of the new states by congress in 1841, all estates of persons dying without heir or will, and such percentage as may be granted by congress on the sale of lands in this state, are made a perpetual school fund. The income of the state school funds is required to be disbursed annually among the school districts; but no district is entitled to receive any portion of such funds in which a common school has not been maintained at least three months in each year. General educational interests are under the supervision of a state superintendent of public instruction, and there is a superintendent in each county. 
The board of education consists of the state superintendent, the chancellor of the state university, the president of the state agricultural college, and the principals of the state normal schools at Emporia and Leavenworth. A prominent duty of the board is to issue diplomas to such teachers as pass the examination. The state institutions of learning are governed by a board of seven regents, of whom one is an ex officio member and six are appointed by the governor and senate. According to the census of 1870, the whole number of schools was 1,689, having 1,955 teachers, of whom 872 were males and 1,083 females, and attended by 59,882 pupils. Of these, 1,663 were public schools, with 1,864 teachers and 58,030 pupils; 5 were colleges, with 27 teachers and 489 students; 6 were academies, with 36 teachers and 415 pupils; and 4 were private schools, with 4 teachers and 115 students. The total income of all the educational institutions was $787,226, of which $19,604 was from endowment, $678,185 from taxation and public funds, and $89,437 from tuition and other sources. In 1873 there had been organized 4,004 school districts, in which there were 3,133 school houses. The entire school population of the state (between 5 and 21 years of age) numbered 184,957, of whom 121,690 were enrolled in the public schools, the average daily attendance being 71,062. There were 1,880 male teachers, receiving an average monthly salary of $38 43, and 2,143 female teachers, whose average monthly salary was $30 64. The permanent school fund was $1,013,982, including $1,003,682 interest-bearing securities. The income from various sources for public schools amounted to $1,657,318, including $931,958 from district tax and $231,917 received from state fund. The total expenditures for schools were $1,488,676, including $716,056 for teachers, $51,504 for rent and repair of buildings, $160,723 for furniture, apparatus, &c., $515,071 for buildings and sites, and $79,812 for miscellaneous items. The total value of school houses was $3,408,956; of apparatus, $33,873. Kansas has four state normal schools for the free training of public school teachers: one at Emporia, organized in 1865; one at Leavenworth, in 1870; one at Quindaro, in 1871; and one at Concordia, in 1874. The first named has a normal department, which affords a two years' and a four years' course of study, and a model department. The number of students in 1873 was 218, the disbursements $17,829. The school at Leavenworth comprises a normal department, which affords a thorough knowledge of all subjects taught in the public schools of the state, and a model school in which the art of teaching may be practised. This model school comprises 13 grades or departments, in which in 1873 there were 1,100 pupils receiving instruction from 15 teachers. In the normal department there were 7 teachers and 63 students. The Quindaro normal school is for colored persons, and was attended in 1873 by 82 pupils. The state university is at Lawrence. The plan of the institution comprises six departments: 1, science, literature, and the arts; 2, law; 3, medicine; 4, theory and practice of elementary instruction; 5, agriculture; 6, normal department. In 1874 only one of these departments had been organized; this comprised a classical course, a scientific course, and a course in civil and topographical engineering. There were then 12 instructors and 272 pupils, of whom 73 were in the collegiate and 199 in the preparatory department. No charge is made for tuition.
The university already has valuable collections in natural history, and a considerable library. The magnificent building of the institution, 246 ft. long, 98 ft. wide in the centre and 62 in the wings, contains 54 rooms, including an immense hall, to be devoted to purposes of instruction. The state agricultural college at Manhattan has received the national grant of lands made for the establishment of colleges of agriculture and the mechanic arts. The aim of the institution is to afford an industrial rather than a professional education. Four general courses of instruction are provided: the farmer's, the mechanic's, the commercial, and the woman's. The farm contains 200 acres of prairie upland, so arranged as to afford the best facilities for teaching the applications of science to agriculture and making practical experiments. The nursery of 67 acres contains the largest and most valuable assortment of fruit and forest trees west of the Mississippi river. The mechanical department embraces carpenter, wagon, blacksmith, paint, and harness shops. Women are taught sewing, printing, telegraphy, photography, and other branches. Tuition in all departments is free. The principal colleges are St. Benedict's (Roman Catholic), at Atchison, founded in 1859, which in 1873 had 7 instructors and 94 pupils; Washburn college (Congregational), at Topeka, founded in 1865, having 5 instructors and 93 students; Highland university (Presbyterian), with 4 instructors and 137 students; Baker university (Methodist Episcopal), at Baldwin City, with 8 instructors and 65 students; college of the sisters of Bethany (Episcopal), at Topeka, with 10 instructors and 83 pupils; and Ottawa university (Baptist), at Ottawa. The Kansas academy of science was organized in 1868 as a society of natural history, but was enlarged in its scope in 1871, and incorporated by the legislature the following year. In its present form it comprehends observers and investigators in every line of scientific inquiry, and aims to increase and diffuse a knowledge of science particularly in its relation to Kansas. The society has made valuable contributions to the knowledge of the state in geology, botany, ornithology, ichthyology, entomology, and meteorology, and designs in time to make a complete scientific survey of the state.—According to the census of 1870, there were in the state 574 libraries, having 218,676 volumes; 364, with 126,251 volumes, were private, and 190, with 92,425, were other than private, including 4 circulating libraries with 6,550 volumes. The state library in 1874 contained about 10,000 volumes. The number of newspapers and periodicals in 1870 was 97, with an aggregate circulation of 96,803; copies annually issued, 9,518,176; 12 were daily, circulation 17,570; 4 tri-weekly, circulation 1,840; 78 weekly, circulation 71,393; and 3 monthly, circulation 6,000. The number of religious organizations of all denominations was 530, having 301 edifices, with 102,135 sittings, and property valued at $1,722,700. The denominations were represented as follows:
DENOMINATIONS. Organizations. Edifices. Sittings. Property.
Baptist, regular 91 56 18,540 $247,900
Christian 35 10 4,550 45,300
Congregational 43 26 8,350 152,000
Episcopal, Protestant 14 9 3,280 57,500
Evangelical Association 2 1 300 6,000
Friends 7 7 1,600 13,800
Jewish 2 1 300 1,500
Lutheran 9 5 1,400 12,500
Methodist 166 74 23,525 316,600
Presbyterian, regular 84 55 20,660 277,900
Presbyterian, other 10 7 2,150 24,500
Reformed Church in the United States (late German Reformed) 1 1 275 3,000
Roman Catholic 37 34 14,605 513,200
Unitarian 2 1 400 20,000
United Brethren in Christ 24 3 2,200 31,500
—Kansas was annexed to the United States in 1803 as part of the territory bought from France under the general designation of Louisiana. By the Missouri compromise bill of 1820 it was provided "that in all the territory ceded by France to the United States under the name of Louisiana which lies N. of lat. 36° 30' N., excepting only such part thereof as is included within the limits of the state [Missouri] contemplated by this act, slavery and involuntary servitude, otherwise than in the punishment of crime whereof the party shall have been duly convicted, shall be and is hereby for ever prohibited." By an act of congress passed in May, 1854, the territories of Kansas and Nebraska were organized, and in section 14 of this act it was declared that the constitution and all the laws of the United States should be in force in these territories except the Missouri compromise act of 1820, "which . . . is hereby declared inoperative and void." The question of slavery was thus left to the decision of the inhabitants of the territory. This formed the leading topic of discussion in congress, and caused a great agitation throughout the country. About a month previously the legislature of Massachusetts had incorporated the Massachusetts emigrant aid company, for the purpose of assisting emigrants to settle in the new territories, by giving them useful information, procuring them cheap passage over railroads, and establishing mills and other conveniences at central points in the new settlements. In July the legislature of Connecticut granted a charter to a similar company. A large immigration into Kansas from the northwestern states had already taken place, and emigrants in considerable numbers from the free states and a few from the slave states now availed themselves of the opportunities for cheap transportation offered by these companies to settle in Kansas. A party of 30 men led by Mr. Branscomb founded the town of Lawrence, and were soon after joined by 60 or 70 more led by Mr. Charles Robinson and S. C. Pomeroy. Settlers from Missouri were at the same time passing into Kansas, in many cases taking their slaves with them. On July 29, 1854, a public meeting, called by the "Platte County Defensive Association," was held at Weston, Mo., and resolutions were adopted and published declaring that the association would hold itself in readiness, whenever called upon by any of the citizens of Kansas, "to assist in removing any and all emigrants who go there under the auspices of northern emigrant aid societies." On Aug. 12 another meeting was held at Weston, at which resolutions were adopted, declaring in favor of the extension of slavery into Kansas. It also appears from a congressional investigation ordered in 1856, that before any elections were held in the territory a secret society was formed in Missouri for the purpose of extending slavery into Kansas and other territories. This was to be done by sending voters into the territory. Andrew H. Reeder of Pennsylvania had been appointed governor by President Pierce, and arrived in Kansas Oct. 6. An election for a territorial delegate to congress was held Nov. 29. The polls were taken possession of by armed bands from Missouri, and out of 2,843 votes cast it was subsequently estimated by a congressional investigating committee that 1,729 were illegal. 
On March 30, 1855, another election for members of the territorial legislature was held, and the polls were again taken possession of by large bodies of armed men from Missouri, who, after electing pro-slavery delegates from every district, returned to their own homes in the adjacent state. From the investigation by the congressional committee it appeared that out of 6,218 votes cast at this election, only 1,410 were legal, of which 791 were given for the free-state or anti-slavery candidates. From six of the districts, evidence of the illegal nature of the proceedings having been laid before Gov. Reeder, he set aside the returns and ordered new elections in those districts, which resulted in the choice of free-state delegates, except at Leavenworth, where the polls were again seized by Missourians. Gov. Reeder soon after visited Washington to confer with the federal authorities, and after his return his removal from the office of governor was announced, July 20, for the alleged reason of irregular proceedings in the purchase of Indian lands. The territorial legislature assembled at Pawnee, July 3, but two days afterward adjourned to Shawnee mission, near the Missouri line, where they reassembled July 16, and remained in session till Aug. 30. One of their first acts was to expel the free-state men chosen at the second elections ordered by Gov. Reeder, and to give their seats to the pro-slavery men originally returned. They also passed an act making it a capital offence to assist slaves in escaping either into the territory or out of it; and felony, punishable with imprisonment at hard labor from two to five years, to conceal or aid escaping slaves, to circulate anti-slavery publications, or to deny the right to hold slaves in the territory; also an act requiring all voters to swear to sustain the fugitive slave law; and they also adopted in a body the laws of Missouri, and passed an act making Lecompton the capital of the territory. Wilson Shannon of Ohio was appointed governor in place of Mr. Reeder, and assumed office Sept. 1. A few days later a convention of the free-state party was held at Big Springs, and, after protesting against the acts of the legislature, nominated ex-Governor Reeder as delegate to congress, and appointed Oct. 9 as the time for holding the election, when Gov. Reeder received about 2,400 votes. Delegates were subsequently chosen to a constitutional convention, which assembled at Topeka Oct. 23, and sat till Nov. 12, when they promulgated a constitution for the state of Kansas in which slavery was prohibited. The contest between the free-state and pro-slavery parties now grew to such a pitch of violence that several men were killed on each side, and the people of Lawrence began to arm for self-defence. The governor called out the militia. A large number of Missourians enrolled themselves as Kansas militia, and Lawrence for some days was in a state of siege; but the difficulty was temporarily adjusted by negotiation, and the Missourians retired to their own state. On Dec. 15 the people voted upon the question of accepting the Topeka constitution, and the pro-slavery men abstaining from participation, it was accepted with only 45 votes against it, exclusive of Leavenworth, where the polling was prevented by an inroad from Missouri. On Jan. 15, 1856, an election was held for state officers and a legislature under the Topeka constitution, and Charles Robinson was chosen governor. 
The legislature met at Topeka March 4, and, after organizing and inaugurating the governor and other officers, adjourned to July 4. Early in April a considerable body of armed men from Georgia, Alabama, and other southern states, led by Major Buford, arrived in Kansas. On the 17th of the same month a special committee of the United States house of representatives, appointed about a month before, and charged to investigate the troubles in the territory of Kansas, arrived at Lawrence. The result of their investigations was a report by the majority of the committee, Messrs. Howard of Michigan and Sherman of Ohio, in which they said: "Every election has been controlled, not by the actual settlers, but by citizens of Missouri; and, as a consequence, every officer in the territory from constable to legislators, except those appointed by the president, owe their positions to non-resident voters. None have been elected by the settlers, and your committee have been unable to find that any political power whatever, however unimportant, has been exercised by the people of the territory." Mr. Oliver of Missouri, the third member of the committee, made a minority report, in which he said that there was no evidence that any violence was resorted to, or force employed, by which men were prevented from voting. On May 5 the grand jury of Douglas county found indictments against Reeder, Robinson, Lane, and other free-state leaders, for high treason, on the ground of their participation in the organization of a state government under the Topeka constitution. Reeder and Lane escaped from the territory, but Robinson was arrested and kept in prison for four months. The United States marshal took Buford's men into pay, and armed them with government muskets. Lawrence was again besieged by a large force, and on May 21, under a promise of safety to persons and protection to property, the inhabitants gave up their arms to the sheriff. The invaders immediately entered the town, blew up and burned the hotel, burned Mr. Robinson's house, destroyed two printing presses, and plundered several stores and houses. A state of civil war now spread through the territory, the free-state party being furnished with contributions of arms and money from non-slaveholding states. On May 26 a fight, in which five men were killed, occurred at Pottawattamie, where John Brown with a band of free-state men was encamped; and on June 2 there was another at Black Jack, which resulted in the capture of Capt. Pate together with 30 of his men. Similar affairs, attended with loss of life, continued to occur for three or four months. Parties of emigrants from the free states on their way through Missouri were in many cases stopped and turned back. The free-state legislature met at the appointed time (July 4) at Topeka, and was forcibly dispersed by United States troops under Col. Sumner. On Aug. 14 the free-state men assailed and took a fortified post near Lecompton, occupied by Col. Titus with a party of pro-slavery men, and captured Titus and 20 other prisoners. On Aug. 17 a treaty was agreed to between Gov. Shannon and the free-state men, by which Shannon restored the cannon taken at Lawrence, and received in exchange Titus and the other prisoners. A few days later Shannon received notice of his removal from office, John W. Geary of Pennsylvania being appointed in his stead.
Mr. Woodson, the secretary of the territory, and acting governor before Geary's arrival, on Aug. 25 issued a proclamation declaring the territory to be in a state of rebellion. He collected a considerable armed force at Lecompton, while another body, amounting to 1,150 men, assembled under the Hon. David R. Atchison, late U. S. senator from Missouri, at a point called Santa Fé. On Aug. 29 a detachment from Atchison's army attacked Osawatomie, which was defended by a small band under John Brown, who made a vigorous resistance, but were defeated with the loss of two killed, five wounded, and seven prisoners. Five of the assailants were killed, and 30 buildings were burned. The next day a body of free-state men marched from Lawrence to attack Atchison's army. On their approach the latter retired with his forces into Missouri. On Sept. 1 the annual municipal election took place at Leavenworth. A party, chiefly from Missouri, killed and wounded several of the free-state men, burned their houses, and forced about 150 to embark for St. Louis. On Sept. 8 Gov. Geary arrived at Lecompton, and Robinson and the other prisoners held on a charge of treason were released on bail. The governor on assuming office issued a proclamation calling upon all bodies of armed men to disband. He also promised protection to the free-state men, who accordingly laid down their arms. But the Missouri men immediately assembled to the number of upward of 2,000, forming three regiments with artillery, and marched to attack Lawrence, under command of a member of the Missouri legislature. Gov. Geary with a force of United States soldiers interposed between them and Lawrence, and finally prevailed upon them to retire. During their retreat a free-state man named Buffum was shot down by a man named Hanes almost in the presence of the governor, who subsequently caused the arrest of Hanes on a charge of murder. The United States district judge Lecompte, who was noted as an active partisan, liberated Hanes on bail, and afterward on habeas corpus. Thereupon Gov. Geary forwarded a representation to Washington demanding the judge's removal, and about the middle of December James O. Harrison of Kentucky was appointed in his place. Gov. Geary now reported to the president that peace and order were completely reëstablished in Kansas. On Jan. 6, 1857, the legislature elected under the Topeka constitution met at Topeka, and organized next day. The United States marshal immediately arrested the president of the senate, the speaker of the house, and about a dozen of the leading members, whom he carried prisoners to Tecumseh on the charge of "having taken upon themselves the office and public trust of legislators for the state of Kansas, without lawful deputation or appointment." The houses, being left without a quorum, met the next day and adjourned till June. Shortly afterward the territorial legislature, composed entirely of pro-slavery men, chosen at an election in which the free-state men had declined to participate on the ground of its illegality, met at Lecompton, and among other acts passed one providing for the election of a convention to frame a state constitution for Kansas. Meanwhile the house of representatives at Washington had passed a bill declaring void all the enactments of the territorial legislature, on the ground that they were "cruel and oppressive," and that "the said legislature was not elected by the legal voters of Kansas, but was forced upon them by non-residents."
The senate refused to pass the bill, and also to confirm the appointment of Harrison in place of Lecompte, who thus remained chief justice of Kansas, never having been actually dismissed. Upon this Gov. Geary resigned his office and quitted the territory. Robert J. Walker of Mississippi was appointed by President Buchanan his successor, with Frederick P. Stanton of Tennessee for secretary. The election for delegates to the constitutional convention was held on June 15. The free-state men generally took no part in it, on the ground that the legislature which ordered it had no legal authority, and that if they attempted to vote they would be defrauded and overborne by intruders from Missouri. About 2,000 votes were cast, while the legal voters in the territory by a recent census numbered about 10,000. At the territorial election held a few months later, the free-state men, being assured by Gov. Walker of protection from intruders, went to the polls and cast about 7,600 votes, to 3,700 votes thrown by the opposite party, electing Marcus J. Parrott delegate to congress, together with 9 of the 17 councilmen and 27 of the 39 representatives. An attempt was made to change this result by means of a false return from Oxford, Johnson co., a place containing 11 houses. It was alleged that at this place 1,624 persons had voted, and a corresponding roll of names was sent in, which on examination proved to have been copied in alphabetical order from a Cincinnati directory. This return, which if accepted would have changed the party character of the legislature by transferring from the free-state to the pro-slavery side eight representatives and three councilmen, was rejected by Gov. Walker as a manifest falsification. Soon after the territorial election the constitutional convention met at Lecompton and adopted a constitution, four sections of which related to slavery, declaring the right of owners to their slaves to be inviolable, and prohibiting the legislature from passing acts of emancipation. This provision alone was to be submitted to the electors at an election to be held on Dec. 21. The ballots cast were to be endorsed "Constitution with slavery" or "Constitution with no slavery," thus securing in any event the adoption of the constitution, several clauses of which, besides those thus submitted, were highly objectionable to a majority of the people. A provision was inserted in the schedule annexed to the constitution preventing any amendment of that instrument previous to 1864. The promulgation of this constitution caused great excitement in Kansas. Gov. Walker condemned it in the strongest manner, and proceeded at once to Washington to remonstrate against its adoption by congress; but before his arrival there the act had received the approval of the president. Gov. Walker soon after his arrival in Washington resigned, and J. W. Denver of California became governor. At the election of Dec. 21 for the adoption or rejection of the slavery clause, the vote returned was 6,226, more than half of which was from counties along the Missouri border, whose total number of voters by the census did not exceed 1,000. Against the slavery clause there were 569 votes, the free-state men generally abstaining from voting. The constitution being thus nominally adopted, an election for officers under it was to be held on Jan. 4. 
The territorial legislature at a special session passed an act submitting the Lecompton constitution to the direct vote of the people on the same day with the Lecompton state election, and the result was a majority of 10,226 votes against it. Congress after long discussion referred the matter to the people of Kansas at an election on Aug. 3, 1858, when the Lecompton constitution was again rejected by 10,000 majority. Meanwhile the territorial legislature had called another convention to meet in April to frame a new constitution, which was submitted to the people and ratified by a large majority, though by a small total vote. Shortly after the rejection of the Lecompton constitution by the people, Gov. Denver resigned, and Samuel Medary of Ohio was appointed in his place. The territorial legislature met in January, 1859, and passed an act submitting to the people the question of calling still another constitutional convention. The election was held April 4, and the result was a majority of 3,881 in favor of holding a convention. An election was accordingly held for delegates, and the convention thus chosen met at Wyandotte July 5, and adjourned July 27, after adopting a constitution by a vote of 34 to 13, prohibiting slavery. This constitution was submitted to the popular vote Oct. 4, and was ratified by a vote of 10,421 to 5,530. The first election under it was held Nov. 8, when a delegate to congress and members of the territorial legislature were elected. On Dec. 6, 1859, a representative in congress, state officers, and members of a state legislature were chosen, the governor being Charles Robinson. On Jan. 29, 1861, Kansas was admitted into the Union under the Wyandotte constitution, which with the several amendments since passed is still the supreme law of the state. During the early part of the civil war eastern Kansas suffered much from the irregular warfare, known there as "jayhawking," which was carried on by confederate raiders from Missouri and Arkansas and the unionists who opposed them. The most prominent of these disorders was the attack made upon Lawrence, Aug. 21, 1863, by a band of confederate guerillas under Col. Quantrell, which resulted in the loss of many lives and much property. During the war Kansas furnished to the federal army upward of 20,000 men.—See "Resources of Kansas," by C. C. Hutchinson (Topeka, 1871).
Developments in Mathematics
Semigroups in Complete Lattices
Quantales, Modules and Related Topics
Authors: Eklund, P., Gutiérrez García, J., Höhle, U., Kortelainen, J.
Provides a categorical approach to quantales and applications
Develops the theory of modules on unital quantales
Includes exercises and bibliographical notes
This monograph provides a modern introduction to the theory of quantales.
First coined by C.J. Mulvey in 1986, quantales have since developed into a significant topic at the crossroads of algebra and logic, of notable interest to theoretical computer science. This book recasts the subject within the powerful framework of categorical algebra, showcasing its versatility through applications to C*- and MV-algebras, fuzzy sets and automata. With exercises and historical remarks at the end of each chapter, this self-contained book provides readers with a valuable source of references and hints for future research.
This book will appeal to researchers across mathematics and computer science with an interest in category theory, lattice theory, and many-valued logic.
Patrik Eklund develops applications based on many-valued representation of information. Information typically resides in the form of expressions and terms as integrated in knowledge structures, so that term functors, extendable to monads, become important instrumentations in applications. Categorical term constructions with applications to Goguen's category have been recently achieved (cf. Fuzzy Sets and Syst. 298, 128-157 (2016)). Information representation supported by such monads, and as constructed over monoidal closed categories, inherits many-valuedness in suitable ways also in implementations.
Javier Gutiérrez García has been interested in many-valued structures since the late 1990s. Over recent years these investigations have led him to a deeper understanding of the theory of quantales as the basis for a coherent development of many-valued structures (cf. Fuzzy Sets and Syst. 313, 43-60 (2017)).
Since the late 1980s the research work of Ulrich Höhle has been motivated by a non-idempotent extension of topos theory. A result of these activities is a non-commutative and non-idempotent theory of quantale sets which can be expressed as enriched category theory in a specific quantaloid (cf. Fuzzy Sets and Syst. 166, 1-43 (2011), Theory Appl. Categ. 25(13), 342-367 (2011)). These investigations have also led to a deeper understanding of the theory of quantales. Based on a new concept of prime elements, a characterization of semi-unital and spatial quantales by six-valued topological spaces has been achieved (cf. Order 32(3), 329-346 (2015)). This result has non-trivial applications to the general theory of C*-algebras.
Since the beginning of the 1990s the research work of Jari Kortelainen has been directed towards preorders and topologies as mathematical bases of imprecise information representation. This approach leads to the use of category theory as a suitable metalanguage. Especially, in cooperation with Patrik Eklund, his studies focus on categorical term constructions over specific categories (cf. Fuzzy Sets and Syst. 256, 211-235 (2014)) leading to term constructions over cocomplete monoidal biclosed categories (cf. Fuzzy Sets and Syst. 298, 128-157 (2016)).
Contents: an opening chapter by Eklund, Patrik (et al.), Pages 1-43; Fundamentals of Quantales, Pages 45-202; Module Theory in Sup.
Authors: Patrik Eklund, Javier Gutiérrez García, Ulrich Höhle, Jari Kortelainen
Publisher: Springer International Publishing AG, part of Springer Nature
Pages: XXI, 326
Topics: Order, Lattices, Ordered Algebraic Structures
Gunnar Þór Magnússon
Anscombe's quartet and monitoring
Anscombe's quartet is a collection of four data sets of points in the plane. When graphed, the data sets are obviously very different, but they have identical sets of descriptive statistics. Newer variations on the same theme include the delightful Datasaurus dozen, which includes animations of deformations from one data set into another where the descriptive statistics are kept identical throughout the animation.
I've seen Anscombe's quartet cited in discussions around monitoring and observability to assert that the average and standard deviation (two of the statistical tools used in the quartet) are not useful for monitoring and alerting; instead one should use percentiles. I agree with the conclusion but think it does not follow from the premise, and would like to offer some thoughts on the theme from the perspective of a reformed geometer.
To do so, I'd like to work out some examples involving the average and standard deviation. The data sets above involve collections of points in the plane but I'd like us to discuss data modeled on timeseries instead. The difference isn't important, everything we say applies to both theaters, but the geometry we want to point out is easier to see in the latter.
We'll model a portion of a timeseries as a tuple of \(n\) real numbers \((x_1, \ldots, x_n)\). We can think of these as \(n\) measurements recorded at one-second (or minute, hour, …) intervals. The set of all possible such timeseries is familiar; it is the real vector space \(\mathbb{R}^n\) of dimension \(n\). (Timeseries can be added and subtracted, there is a zero-timeseries that serves as an identity, and we can multiply a timeseries by a real number.)
The main characters of our story are the average and standard deviation. If \(x = (x_1, \ldots, x_n)\) is a timeseries, its average is \[ \mu(x) = (x_1 + \cdots + x_n) / n \] and its standard deviation is defined by the scary formula[1] \[ \sigma(x) = \sqrt{ ( (x_1 - \mu(x) )^2 + \cdots + (x_n - \mu(x) )^2 ) / n }. \]
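As code, both are one-liners. Here is a minimal Python sketch of the two, using the \(1/n\) convention from the footnote; the reversed series at the end is a cheap illustration of the Anscombe point that visibly different timeseries can share both statistics:

import math

def average(x):
    return sum(x) / len(x)

def std(x):
    # Standard deviation with 1/n, matching the formula above.
    mu = average(x)
    return math.sqrt(sum((xi - mu) ** 2 for xi in x) / len(x))

a = [1.0, 3.0, 5.0, 7.0]
b = [7.0, 5.0, 3.0, 1.0]   # the same values, reversed in time
print(average(a), std(a))  # 4.0 2.236...
print(average(b), std(b))  # identical statistics, different timeseries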
We can now look at some examples. In every one of these, we'll fix the average and standard deviation to some values and look at the collection of timeseries that have those values. The first couple of examples are just warm-up.
We first look at \(n = 1\), that is, timeseries that just have one value \(x = (x_1)\). For such a timeseries we have \(\mu(x) = x_1\) and \(\sigma(x) = 0\) so each timeseries is uniquely determined by its average and they all have the same standard deviation.
The next case is \(n = 2\), of timeseries with two values \(x = (x_1, x_2)\). This is slightly more interesting; we have the average \(\mu(x) = (x_1 + x_2) / 2\) and the standard deviation \[ \begin{align} \sigma(x) &= \sqrt{( (x_1 - \mu(x) )^2 + (x_2 - \mu(x) )^2 )/2} \\ &= \sqrt{((x_1 - x_2)^2 + (x_2 - x_1)^2)/8} = |x_1 - x_2|/2. \end{align} \] The second equality here is by substituting the definition of \(\mu(x)\).
This is more interesting because different timeseries can have different averages and standard deviations. However, I claim that if we fix both the values of the average and standard deviations, say to \(\mu\) and \(\sigma\), there is at most one timeseries that has those values.
To see this, observe that the formula for the average implies that \(x_2 = 2 \mu - x_1\), so if we fix the value of the average the value of \(x_2\) is determined by the choice of \(x_1\). If we substitute this into the formula for the standard deviation we get \[ \sigma(x) = |x_1 - 2 \mu + x_1|/2 = |x_1 - \mu|. \] We can solve this equation by splitting into cases where \(x_1 > \mu\) and \(x_1 < \mu\). In the former we get \(x_1 = \sigma + \mu\) and in the latter we get \(x_1 = \mu - \sigma\). Notice that in both cases, the value of \(x_1\) is determined by the values of the average and standard deviation, and the value of \(x_2\) is determined by that of \(x_1\) and the average, so both \(x_1\) and \(x_2\) are determined once we fix the average and standard deviation.
We now get into the first case that has a hint of the general picture. Our timeseries have three points, \(x = (x_1, x_2, x_3)\). We could write down equations like in the previous case and calculate, but it's worth it to take a step back and think about what's going on.
Let's say we have fixed the value of the average of our timeseries to a constant \(\mu\). The formula for the standard deviation of our series is then \[ \sigma(x) = \sqrt{((x_1 - \mu)^2 + (x_2 - \mu)^2 + (x_3 - \mu)^2)/3}. \] If we form the point \((\mu, \mu, \mu)\), we see that \[ \sigma(x) = \|(x_1, x_2, x_3) - (\mu, \mu, \mu)\|/\sqrt{3} \] is just a multiple of the Euclidean distance between our timeseries and the fixed point \((\mu, \mu, \mu)\). If we also fix \(\sigma(x) = \sigma\), the set of points that satisfy this equation is \[ S := \{ (x_1, x_2, x_3) \mid \|(x_1, x_2, x_3) - (\mu, \mu, \mu)\| = \sqrt{3} \sigma \}. \] This should look familiar: it is the sphere whose center is \((\mu, \mu, \mu)\) and whose radius is \(\sqrt{3}\sigma\).
Similarly, the set of timeseries whose average is equal to \(\mu\) is \[ P := \{ (x_1, x_2, x_3) \mid (x_1 + x_2 + x_3)/3 = \mu \}. \] For those who know a little linear algebra, we can write the condition as being that \((x_1, x_2, x_3) \cdot (1, 1, 1) = 3 \mu\), that is, that the inner product with \((1, 1, 1)\) must be equal to \(3 \mu\). The set of points that satisfies such a condition is a plane; a flat two-dimensional space.
We're interested in the set of points where both of these conditions are true, which is the intersection \(S \cap P\). We could work out what that is by calculation, but it's easiest to visualize it. The intersection of a sphere and a plane in three dimensions is either empty (when they don't meet), a single point (when the plane is tangent to the sphere), or a circle that lies on the sphere.
In three dimensions, if we pick our values for the average and standard deviations right, there is thus a whole circle of timeseries that have that average and standard deviation. They form a kind of Anscombe set.
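To make the circle concrete, here is a small Python sketch; the particular \(\mu\) and \(\sigma\) are arbitrary choices. It parametrizes \(S \cap P\) with an orthonormal basis of the plane through \((\mu, \mu, \mu)\) and checks that every point on the circle has the prescribed statistics:

import math

mu, sigma = 5.0, 2.0
u = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
v = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))
r = math.sqrt(3) * sigma  # the radius of the sphere S

def point(theta):
    # A timeseries on the circle S ∩ P, at angle theta.
    return [mu + r * (math.cos(theta) * ui + math.sin(theta) * vi)
            for ui, vi in zip(u, v)]

for theta in (0.0, 1.0, 2.0):
    x = point(theta)
    m = sum(x) / 3
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / 3)
    print([round(xi, 3) for xi in x], round(m, 3), round(s, 3))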
Higher dimensions
In higher dimensions, basically the same thing happens as in dimension 3. The set of timeseries that have a fixed average is a hyperplane and the set of timeseries that have a fixed standard deviation and average is a sphere. Their intersection, when it is not empty or a single point, is not as easily described but it is a subspace of dimension \(n-2\). As \(n\) grows higher, this space grows as well, and in general the space of timeseries that have a fixed average and standard deviation is enormous.
Other prescriptive statistics
The final point I'd like to make, and one you'll have to take my word for to some extent, is that there is nothing special about the average or standard deviation here. Very similar things will happen for every collection of statistical tools we pick.
Every statistical tool can be viewed as a function that takes a timeseries (or perhaps a couple of timeseries, which complicates things a little but not a lot) and returns a real number. These functions are very well behaved (continuous, differentiable, and so on).
Given a collection of such functions \(f_1, \ldots, f_k\) on the space of timeseries of \(n\) points, we can ask whether there is an Anscombe set for fixed values of these functions. That is, if we fix values \(y_1, \ldots, y_k\), can we say anything about the set \(\{x = (x_1, \ldots, x_n) \mid f_1(x) = y_1, \ldots, f_k(x) = y_k \}\)?
With some hand waving, we can actually say this: Fixing the value of a single tool \(f_j\) creates a hypersurface \(X_j = \{x \mid f_j(x) = y_j\}\) inside the space of timeseries, that is, a subspace of dimension one less than the ambient space. All timeseries in \(X_j\) will have the same value with respect to this tool. If the intersection of all of these \(X_1 \cap \cdots \cap X_k\) is not empty, it will generally be a subspace of dimension \(n - k\). If \(n\) is much bigger than \(k\), the dimension of this space, the space of timeseries that all have the same statistical values with respect to our tools, is enormous.
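To see how roomy that space is, here is a sketch of a recipe (my own illustration, nothing canonical): recenter and rescale any non-constant series to hit prescribed values of \(\mu\) and \(\sigma\). Every random draw below lands in the same Anscombe set:

import math, random

def with_stats(z, mu, sigma):
    # Deform an arbitrary non-constant series z so that its
    # average is mu and its standard deviation (with 1/n) is sigma.
    m = sum(z) / len(z)
    centered = [zi - m for zi in z]
    norm = math.sqrt(sum(c * c for c in centered))
    r = sigma * math.sqrt(len(z))
    return [mu + r * c / norm for c in centered]

random.seed(0)
n, mu, sigma = 100, 10.0, 3.0
for _ in range(3):
    x = with_stats([random.random() for _ in range(n)], mu, sigma)
    m = sum(x) / n
    s = math.sqrt(sum((xi - m) ** 2 for xi in x) / n)
    print(round(m, 6), round(s, 6))  # always 10.0 and 3.0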
Making the hand waving precise and figuring out when this intersection will be non-empty is the domain of algebraic geometry over the real numbers. Saying anything at all about that is beyond our scope here.
I hope I have convinced you that the statement "the average and standard deviation are useless for monitoring because of Anscombe's quartet" is not true. The moral of Anscombe's quartet, which boils down to intersection theory in algebraic geometry, is that the space of data that fits any given collection of statistical instruments (including any collection of percentiles) is gigantic and we have to look at the data to make sense of it.
In general, percentiles are more valuable for analyzing performance or traffic data than the average and standard deviation. The reason is not Anscombe's quartet, the robustness of percentiles, or anything to do with normal distributions, but that the average and standard deviation are fundamentally tools for answering the questions "what would this be if it were constant?" and "how far away is this from being constant?".[2] For performance and traffic analysis, we don't care about either of those questions, but about peaks in demand and how we can handle them. We should pick tools that can help with that, but other tools have their place.
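As a closing illustration (synthetic numbers, nothing more), here is what the three tools say about a bursty latency series; only the percentile answers a question about the peaks:

import numpy as np

rng = np.random.default_rng(1)
latency = rng.exponential(scale=10, size=10_000)  # mostly-quiet service
latency[rng.random(10_000) < 0.01] += 500         # 1% of requests are very slow

print("mean:", latency.mean())              # "what if this were constant?"
print("std: ", latency.std())               # "how far from constant is it?"
print("p99: ", np.percentile(latency, 99))  # "how bad are the peaks?"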
1. Some people divide by \(n-1\) in the formula for the standard deviation. I'm sure they have their reasons, but the factor of \(n\) is more natural for geometers. The reason involves interpreting timeseries as functions and is a bit of a detour from what we really care about.
2. Exactly how involves some linear algebra and this post was already long. If people care, I can write another one explaining how.
How Silicon Valley will solve the trolley problem
The trolley problem asks how to decide between the lives of people in two groups. At the moment, it comes up in our industry in discussions around self-driving cars: Suppose a car gets into a situation where it must risk injuring either its passengers or pedestrians; which ones should it prioritize saving?
Always choosing one group leads to suboptimal outcomes. If we always save the pedestrians, they may perform attacks on car riders by willfully stepping into traffic. If we always save the passengers, we may run over a pedestrian whose life happens to be of greater worth than that of the passengers.
The crux of the trolley problem is how we should make the choice of what lives to save. As with all hard problems without clear metrics for success, it's best to solve this by having the participants decide this for themselves.
Every time we view a website, our friendly ad networks must decide what ads we should see. This is done by having our user profile put up on auction for prospective advertisers. During a handful of milliseconds, participants may inspect our profile and place a bid for our attention according to what they see.
The same technology could be trivially repurposed for deciding the trolley problem in the context of self-driving cars.
Assume the identity of every participant in the trolley scenario is known. Practically, we know the identity of the passengers; that of the pedestrians could be known if their phones broadcast a special short-range identification signal. An incentive for broadcasting such a signal could be that we would have to assume that a person without one were one of no means.
Given this information, a car about to be involved in a collision could take a few milliseconds to send the identities of the people involved to an auction service. Participants who had the foresight of purchasing accident insurance would have agents bidding on their behalf. The winner of the bid would be safe from the forthcoming accident, and their insurance would pay some sum to the car manufacturer post-hoc as a reward for conforming to the rules.
The greater a person's individual wealth, the better insurance coverage and latencies they could purchase, and the more accidents they could expect to survive. This aligns nicely with the values of societies like the United States, where the worth of a person's life is proportional to their wealth.
As this system lets self-driving car manufacturers off the hook for any decisions taken, and would need coordinated ethical action on the behalf of software engineers to not be implemented, we expect to see this system in action in the world in short order.
Infosec: A board game
I'm pleased to announce the release of my new card-drawing game Infosec.
The rules of the game are simple. It is for two or more players. The player with the fewest friends is the Infosec Expert; the other players are the Coworkers.
To start, the Infosec Expert deals three cards face down from a standard deck of cards. The Coworker on the Infosec Expert's right hand should draw one card.
"Which card?" the Coworker may ask.
"This is a simple phishing exercise," the Infosec Expert should reply. "Just pick a card."
"But they all look the same," the Coworker may object.
"Draw one. And get it right."
This exchange should go on in increasingly hostile tones until the Coworker agrees to draw a card. It will have been the wrong card. The Infosec Expert should inform the Coworker:
"You got phished. You moron. You fucking idiot. You're such a goddamn waste of space and time. How could you have gotten this so wrong? Were you even trying? Answer me. What the fuck was that?"
Feel free to ad-lib along these lines, or draw as many cards as you want from the accompanying Admonishment Deck (expansion packs available). Include ad-hominem attacks and use as many personal details as you know. Interrupt the Coworker if they try to reply. Don't hold back.
Once the Coworker is silent, the Infosec Expert should collect the cards, deal new ones, and proceed to the next Coworker in line.
The game ends with the Infosec Expert's victory when all the Coworkers have left.
Backing up data like the adult I supposedly am
Like so many things I'm supposed to do but don't — getting exercise, eating right, sleeping well, standing up for women and minorities in public spaces — backing up my data has always been something I've half-assed at best.
I've lugged around an external hard drive with a few hundred gigabytes of data for the last 10 years, and made backups to it once every three or four years or so. Every time I've tried restoring anything from those backups I've regretted it, because of course I just bought the drive, plugged it in and copied stuff to it, so it is a FAT32 drive while I have mostly had EXT4 filesystems, which means all my file permissions get lost during the process.
I've written shameful little shell scripts to set file permissions to 0644 and directory permissions to 0755, recursively, many many times.
Part of my problem was that I both know just enough rsync to be dangerous and have a credit card so I can provision cloud VMs, so forever just around the corner was my perfect backup solution that I'd write myself and maintain and actually do instead of dealing with whatever I had going on in my life. I've come to accept that this will never happen, or perhaps more definitively, that I'd rather cut myself than write and maintain another piece of ad-hoc software for myself.
Luckily I recently found two things that have solved this whole problem for me: borg and rsync.net.
Borg is backup software. It compresses and deduplicates data at the block level, and strongly encourages (but does not force) you to encrypt data before backing it up. It is everything I'd want from my half-assed rsync and shell script abomination.
I read its documentation a couple of times and was impressed. I then set about comparing different VM hosts to see which one would give me the cheapest block storage option, when the result of some random google search led me to rsync.net. They are a company that stores backups, pretty cheaply, and even more cheaply if you use borg to take them. I guess they just really love borg and want us to love it too.
I signed up for their cheapest plan, which starts at 100GB stored for $18 per year. They have no network in- or egress costs, and the storage amount can be adjusted at any time. Once my account had been activated, I did a little password reset dance, and uploaded a public SSH key.
I wanted to back up my $HOME directory, so after installing borg I ran:
export BORG_REMOTE_PATH="borg1"
borg init --encryption repokey-blake2 [email protected]:home
This created a remote borg repository called "home" on rsync.net's servers. The environment variable is so we use a more recent version of borg on the remote server (version 1.1.11 at the time of writing), as the default version is rather old (version 0.29.0).
When choosing what encryption method to use, one can choose between a "repokey" or a "keyfile". They both create a private key locked with a passphrase; the difference is that with "repokey" the key is stored in the borg repo, while with "keyfile" it is stored outside of it. This boils down to whether we think a passphrase is enough security for our data, or whether we think having a secret keyfile is necessary. I figured my password manager could create a strong enough passphrase for my needs, and I didn't want to think about losing the keyfile, so I chose "repokey-blake2".
To create my first backup, I ran
borg create --exclude "$HOME/.cache" [email protected]:home::backup-1 "$HOME"
which created the archive "backup-1" in my "home" borg repository. I didn't change the compression algorithm from the default one.
By default borg compresses data with lz4. It can use other compression methods (xz, zlib, zstd). I compared their compression ratios on some binary files I had and found no difference between them. I think this is because the large binary files I have are mostly audio and video files in lossy formats, which don't seem to benefit very much from further compression. I have a lot of text files as well, but text takes up so little relative space on today's hardware that it makes no sense to spend CPU cycles on compressing it better than lz4 does.
This backup command hummed along for a good while, and through a couple of reboot cycles. Doing a second backup right after it finished (or the day after) took a lot less time because of the deduplication.
Restoring from backup is also easy:
borg extract [email protected]:home::backup-2
I set this up to run as a daily timed systemd service at noon (very easy on NixOS, which every Linux user should be using unless they hate themselves), and will never, ever think about this again. For a handful of bucks a year, that is a good deal.
Review of home manager
A common theme among the people who fall in love with Nix (or its cousin NixOS) is that they want it to manage everything for them. Finding out the limits of that capability and where it breaks down is part of each user's journey.
An obvious enough limit to Nix's reach is secret management. It can easily handle public keys, SSH or cryptographic ones, but the private keys cannot be managed by its store as it is readable by all.
The promise of Nix is that we will never break our system by updating it. We can always roll back to a previous working version, once we have a working version at all. This is in contrast to, say, every other Linux distribution. I have borked Ubuntu, Arch, Fedora and others during system updates. Often the safest way to update the system is to back up /home and reinstall the OS.
From somewhere comes the idea that a user on a Linux system should be able to install their own programs and manage their own system services. (See Chris Wellons for where you can run with this idea if your sysadmin gives you a C compiler.) This seems odd from a historical perspective. In a true multi-user system, the system administrators normally do not want users to be able to install arbitrary software or run services. On a modern "multi"-user system, where there is only a single user, avoiding system packages and services seems like some kind of theater.
Yet we do it anyway. An argument I sometimes make to myself is that this is a cleaner separation between what I need the system to do versus what I do on it. I may want to be able to run traceroute as root, but I don't care about root being able to run the Go compiler.
NixOS has facilities to enforce this separation. It happily creates users and their home directories, and can fill in some of the bits that go there, like public SSH keys. It can install packages for specific users, and allow them to define systemd services. It will not manage the configuration of specific user packages (like dotfiles) without some coercing. One can presumably create custom derivations of, say, ZSH with all the configuration one wants, but who has the time?
Home manager wants to fill this gap and bring the power of Nix to user environment and configuration management. It lets individual users say what packages they want installed; what services they want run; and what configuration files should go where. On the surface it seems like something I should love, but after using it for a month I wrote it out of my system and now use plain NixOS.
I used Home manager for three things:
Installing packages for my user.
Scheduling services for my user.
Installing configuration files for my software.
The first one was never a big attraction. Home manager lets us define the list of packages to install in home.packages. If we control the system configuration, we can achieve the same by defining those packages in users.users.$username.packages.
If a user controls the system configuration (directly or through a sysadmin), they can also define their own user services via systemd.user. Home manager's selling point is that it comes with a large list of already defined services that we can enable with a boolean flag, instead of having to write our own service configuration. This is admittedly nice. In the end, I found that learning the idiosyncrasies of each home manager service definition was a less useful way to spend my time than learning how to define NixOS systemd services once and for all. The latter is after all where the former end up.
As a long-time sufferer of dotfile management, I had high hopes for the third point. And indeed, home manager will manage dotfiles just fine. It can do this in two modes: it can generate a config file from various options we fill out if someone has written a home manager module for the program we're trying to configure, or it can plomp a file on the system verbatim from a source. I used the latter, as I didn't feel like learning a configuration language to be able to partially configure program dotfiles was a good idea.
This works well, until we want to change anything in a dotfile. This experiment with home manager coincided with a regular low point in my life in which I try to use emacs. This comes with quite a lot of .emacs changes as I use Lisp for the only thing it's ever been good for; configuring a text editor in the most complicated way imaginable. Now, the dotfiles that home manager (or Nix) puts on our systems are read-only, so every change would involve changing the source file and running home-manager switch. This seems like unnecessarily many steps, especially after I saw this brilliant Hacker news comment, which after a week of use is a much better solution for this problem.
All in all home manager is nice software. I can see it being useful for people who either don't control the system they run on but want to use Nix in user mode to run their corner of it, or for those Nix users who gamers (a well-adjusted group of humans if there ever was one) would call "filthy casuals", that is, people who just want things to work and don't care very much about learning how to write enough Nix to make that happen.
I'm not included in those groups, as I run this system and explicitly want to learn to use Nix in anger so I can try and fail to convince people to run it in production at work. Home manager is fine software and if it makes you happy, then please use it.
Use ad hoc structs for command-line flags in Go
The path of least resistance to command-line flag parsing in Go is to use the flag package from the standard library. A lot of times the result looks like this:
help := flag.Bool("help", false, "HALP")
frobinate := flag.Int("frobinate", 0, "Amount to frobinate by")
blargalarg := flag.String("blargalarg", "", "Social media comment")
// [713 variables later]
flag.Parse()

// Much later
if *blargalarg != "" {
    // Do things. We may or may not remember what this variable is.
}
That is, we have a bunch of variables lying around that we don't really care about and that take up perfectly good names. If we see one of them later in the program, we don't have any context on where it comes from, so we have to start jumping around in the source.
In my projects I've used a little accounting trick to hold these flags. I find it helps me deal with them. We just define an anonymous struct to hold the flags:
flags := struct {
    help       *bool
    frobinate  *int
    blargalarg *string
    // [713 field definitions]
}{
    help:       flag.Bool("help", false, "HALP"),
    frobinate:  flag.Int("frobinate", 0, "Amount to frobinate by"),
    blargalarg: flag.String("blargalarg", "", "Social media comment"),
    // [713 field instantiations]
}
flag.Parse()

if *flags.blargalarg != "" {
    // AAAAAH YES IT'S A FLAG
}
If I really need to, I can pull the anonymous struct out into its own global variable or type definition or whatever, and pass it around as arguments to functions that deal with its contents. That is not as handy with a litter of flag variables. But really I just find that defining all these flags clearly in one place makes the program easier to read later once I've forgotten what it does.
Kleroteria
I got picked in Kleroteria a couple of months ago. This is my contribution.
Three years ago, my wife and I moved to Amsterdam. We wanted to have kids, and thought the Netherlands was a better place to do that than Mexico, where my wife is from and we lived.
We've both moved around a lot, and have picked up the languages of the places we've lived. That's how we speak French and Spanish, along with English. Learning those was always a necessity; no one in France is going to speak English voluntarily, and my in-laws don't speak the best English so I had to learn Spanish.
So it was a change to come to the Netherlands. Everyone here speaks perfect English. They also have no patience for your attempts at Dutch pronunciation or verb conjugation. The second they hear you're not from here, they switch. On paper this is great, but it means that after three years here I still don't speak Dutch. I can never get anyone to have a whole conversation with me in it.
A couple of weeks after we came here, we got pregnant. We had a boy, who has grown up into a little man who loves cars and trains, makes funny faces at the dinner table, and keeps trying to show our dog his books. ("Pancho! See!", but Pancho doesn't care.)
We speak to him each in our own language, and by now he understands what we say. He's been going to daycare here since he was five months old, where they speak Dutch, so he understands that as well. Kids who grow up with more than one language start speaking later, which can be frustrating for everyone. He knows what he wants to say, but can't figure out what words to use, and we have to keep guessing at sounds while other parents don't.
He's finally speaking now. He tells us everything he wants and knows. In Dutch. Every time. Every word comes out in Dutch. And we still don't really speak it. It's not really any less frustrating for anyone. There's a lot of guessing at pronunciation and Google translate.
But what do you know. I did finally get someone to speak Dutch to me.
Bug-fixing checklist
It is OK not to do all of these things for a given bug. Some bugs are trivial, or nontrivial to reproduce or write tests for. But you should explicitly decide not to do some of these things for any given bug, and be able to explain why.
Is there a (possibly old, closed) issue for the bug?
Can you reproduce the bug manually?
Can you write a regression test for the bug?
Did you check that your change actually fixes the problem?
Does the fix's commit message explain the bug, the fix, and point to the issue?
It's debatable whether checking for old or closed issues is the responsibility of the developer fixing the bug or a project manager who triages the backlog. Sometimes the second person doesn't exist, but the job should still be done.
Remember that a bug report is a report of a symptom. A bug fix is directed at a cause. A software symptom may have more than one cause, making bug fixes that say "This fixes issue X" very optimistic about what they claim to achieve.
Exercise moderation and common sense in all things. Both 0% and 100% test coverage are awkward places to live in. Sometimes the information or effort needed to write a test make it too expensive to do so. However, when writing a test is cheap and easy or can be made so without causing harm, there should be a good reason not to do it.
Gitsplorer
Have you ever found yourself using Git and thinking "This is great, but I wish these filesystem operations were read-only and ten times slower?". Well, friend, do I have news for you.
I recently wanted to compare the API of a Golang codebase at different times in its history. To do this, I figured I'd clone the repo, check out commit A and analyze it, then check out commit B and analyze that, and boo! Hiss! That's inelegant and leaves clutter that needs to be cleaned up all around the disk. There has got to be a better way!
Once I made a Git commit hash miner because I wanted to race it against a coworker to see who could get a commit with more leading zeros into a frequently used repository at work. That had the side effect of teaching me some about Git's internals, like what its objects are (blobs, trees, commits, tags) and how they fit together. I figured that if I could convince the Golang AST parser to read the Git database instead of the filesystem, I could do what I wanted in a much better way.
Alas, doing that would have required monkey-patching the Go standard library, and I don't want to hunt down every system call it ends up making to be sure I got them all. However, Git is famously a content-addressable filesystem, so what if we just made a filesystem that points to a given commit in a repo and pointed the parser at that?
This turns out to be pretty easy to do by combining libgit2 and libfuse. We use the former to read objects in the Git repository. (The objects are easy to read by hand, until you have to read packed objects. That's doable, but a bit of a distraction in what is already quite the distraction.) We then use the latter to create a very basic read-only filesystem. In the end, we have a read-only version of git checkout that writes nothing to disk.
I put a prototype of this together in Python, because I'm lazy. It's called gitsplorer and you should absolutely not use it anywhere near a production system. It scratches my itch pretty well, though. In addition to my API comparisons (which I still haven't got to), I do sometimes want to poke around the state of a repository at a given commit and this saves me doing a stash-checkout dance or reading the git worktree manpage again.
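For a flavor of the libgit2 half, here is a minimal Python sketch using the pygit2 bindings. The repository path and revision are placeholders, and tree indexing differs a little between pygit2 versions, so treat it as a sketch rather than gitsplorer itself:

import pygit2

# Read a file at a given commit without touching the working tree.
repo = pygit2.Repository("/path/to/repo")  # placeholder path
commit = repo.revparse_single("HEAD~10")   # or a full commit hash

def read_blob(commit, path):
    # Walk the commit's tree one level per path component.
    obj = commit.tree
    for part in path.split("/"):
        obj = repo[obj[part].id]
    return obj.data  # raw bytes of the blob

print(read_blob(commit, "README.md").decode())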
For fun, and to see how bad of an idea this was, I came up with a very unscientific benchmark: We checkout the Linux kernel repository at a randomly selected commit, run Boyter's scc line-counting tool, and checkout master again. We do this both with gitsplorer and with ye olde git checkout. The results speak for themselves:
git checkout: 62 seconds
gitsplorer: 567 seconds
The gitsplorer version is also remarkable for spending all its time using 100% of a CPU, which the git version does not. (It uses around 90% of a CPU while doing the checkouts, then all of my CPUs while counting lines. The Python FUSE filesystem is single-threaded, so beyond Python being slow it must also be a point of congestion for the line counting.) I did some basic profiling of this with the wonderful profiler Austin, and saw that the Python process spends most of its time reading Git blobs. I think, but did not verify, that this is because libgit2 decompresses the contents of the blobs on every such call, while most of the reads we make are in the FUSE getattr call where we are only interested in metadata about the blob. I made no attempts to optimize any of this.
So, friends, if you've ever wished git checkout was read-only and 10 times slower than it is, today is your lucky day.
Lyttle Lytton 2020
My Lyttle Lytton entry for 2020:
"Actually, you do like this," maverick CEO Eric Davies, Ph.D., insisted as he pulled my foreskin back and cunnilingussed my pee hole.
I'm not exactly proud of it, but I'm glad it's no longer in my head.
13th October 2019
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry. Adversarial Examples Are Not Bugs, They Are Features. CoRR abs/1905.02175 (2019).
ADVERSARIAL MACHINE LEARNING DEEP LEARNING
Ilyas et al. present a follow-up work to their paper on the trade-off between accuracy and robustness. Specifically, given a feature $f(x)$ computed from input $x$, the feature is considered predictive if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}[y f(x)] \geq \rho$;
similarly, a predictive feature is robust if
$\mathbb{E}_{(x,y) \sim \mathcal{D}}\left[\inf_{\delta \in \Delta(x)} yf(x + \delta)\right] \geq \gamma$.
This means that a feature is considered robust if the worst-case correlation with the label exceeds some threshold $\gamma$; here the worst case is taken within a pre-defined set of allowed perturbations $\Delta(x)$ relative to the input $x$. Obviously, there also exist predictive features which are, however, not robust according to the above definition. In the paper, Ilyas et al. present two simple algorithms for obtaining adapted datasets which contain only robust or only non-robust features. The main idea of these algorithms is that an adversarially trained model only utilizes robust features, while a standard model utilizes both robust and non-robust features. Based on these datasets, they show that non-robust, predictive features are sufficient to obtain high accuracy; similarly, training a normal model on a robust dataset also leads to reasonable accuracy but additionally increases robustness. Experiments were done on CIFAR-10. These observations are supported by a theoretical toy dataset consisting of two overlapping Gaussians; I refer to the paper for details.
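To make the two definitions concrete, here is a small numpy toy (my illustration, not from the paper): for a linear feature $f(x) = \langle w, x\rangle$ under an $\ell_\infty$ perturbation budget $\epsilon$, the worst case has the closed form $y\langle w, x\rangle - \epsilon\|w\|_1$, so predictiveness and robustness can be read off directly.

```python
# Toy check of the predictive/robust feature definitions:
# y ~ uniform{-1,+1}, x = y*mu + Gaussian noise, feature f(x) = w.x.
# For an l_inf budget eps:  inf_{|d|_inf<=eps} y*w.(x+d) = y*w.x - eps*||w||_1.
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 100_000, 20, 0.5
mu = np.full(d, 0.1)                 # each coordinate is weakly correlated with y
y = rng.choice([-1.0, 1.0], size=n)
x = y[:, None] * mu + rng.normal(size=(n, d))

w = np.ones(d) / d                   # feature: average the weak coordinates
usefulness = np.mean(y * (x @ w))                # E[y f(x)]  ~  0.1
robust_use = usefulness - eps * np.abs(w).sum()  # worst case ~ -0.4

print(f"predictive: {usefulness:.3f}, robust: {robust_use:.3f}")
# The feature is predictive (E[y f(x)] > 0) yet not robust: an adversary
# with eps = 0.5 drives the worst-case correlation negative.
```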
Also find this summary on ShortScience.org.
Journal of Statistical Distributions and Applications
Skewness-kurtosis adjusted confidence estimators and significance tests
Wolf-Dieter Richter1
Journal of Statistical Distributions and Applications, volume 3, Article number: 4 (2016)
First and second kind modifications of usual confidence intervals for estimating the expectation and of usual local alternative parameter choices are introduced in a way such that the asymptotic behavior of the true non-covering probabilities and the covering probabilities under the modified local non-true parameter assumption can be asymptotically exactly controlled. The orders of convergence to zero of both types of probabilities are assumed to be suitably bounded below according to an Osipov-type condition and the sample distribution is assumed to satisfy a corresponding tail condition due to Linnik. Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation both under the assumption of a true hypothesis as well as under a modified local alternative. A limit theorem for large deviations by S.V. Nagajev/V.V. Petrov applies to prove the results. Applications are given for exponential families.
Asymptotic normality of the distribution of the suitably centered and normalized arithmetic mean of i.i.d. random variables is one of the best studied and most often exploited facts in asymptotic statistics. It is supplemented in local asymptotic normality theory by limit theorems for the corresponding distributions under the assumption that the mean is shifted by an amount of order $n^{-1/2}$. There are many successful simulations and real applications of both types of central limit theorems, and one may ask for a more detailed explanation of this success. The present note aims to give such additional theoretical explanation under certain circumstances. Moreover, the note aims to stimulate both analogous considerations in more general situations and the checking of the new results by simulation. Furthermore, based upon the results presented here, it might become attractive to search for additional explanations of various known simulation results in the area of asymptotic normality, which is, however, beyond the scope of the present note.
Based upon Nagaev's and Petrov's large deviation results in (Nagaev 1965; Petrov 1968), skewness-kurtosis modifications of usual confidence intervals for estimating the expectation and of usual local alternative parameter choices are introduced here in a way such that the asymptotic behavior of the true non-covering probabilities and the covering probabilities under the modified local non-true parameter assumption can be exactly controlled. The orders of convergence to zero of both types of probabilities are suitably bounded below by assuming an Osipov-type condition, see (Osipov 1975), and the sample distribution is assumed to satisfy a corresponding Linnik condition, see (Ibragimov and Linnik 1971; Linnik 1961).
Analogous considerations are presented for the power function when testing a hypothesis concerning the expectation both under the assumption of a true hypothesis and under a local alternative. Finally, applications are given for exponential families.
A concrete situation where the results of this paper apply is the careful preparation of the settings of a machine tool. In this case, second and higher order moments of the manipulated variable do not change from one adjustment to another and may be considered to be known over time.
It might stimulate further research to ask for the derivation of limit theorems close to those in (Nagaev 1965; Petrov 1968) in which higher order moments are estimated.
Let $X_{1},\dots,X_{n}$ be i.i.d. random variables with the common distribution law from a shift family of distributions, $P_{\mu}(A)=P(A-\mu)$, $A\in\mathfrak{B}$, where $\mathfrak{B}$ denotes the Borel σ-field on the real line, the expectation equals $\mu$, $\mu\in R$, and the variance is $\sigma^{2}$. It is well known that $T_{n}=\sqrt{n}(\bar{X}_{n}-\mu)/\sigma$ is asymptotically standard normally distributed, $T_{n}\sim AN(0,1)$. Hence, $P_{\mu}(T_{n}>z_{1-\alpha})\rightarrow\alpha$, and under the local non-true parameter assumption, $\mu_{1,n}=\mu+\frac{\sigma}{\sqrt{n}}(z_{1-\alpha}-z_{\beta})$, i.e. if one assumes that a sample is drawn with a shift of location (or with an error in the variable), then $P_{\mu_{1,n}}(T_{n}\leq z_{1-\alpha})=P_{\mu_{1,n}}\left(\sqrt{n}\frac{\bar{X}_{n}-\mu_{1,n}}{\sigma}\leq z_{\beta}\right)\rightarrow\beta$ as $n\rightarrow\infty$, where $z_{q}$ denotes the quantile of order $q$ of the standard Gaussian distribution.
Let $ACI^{u}=\left[\bar{X}_{n}-\frac{\sigma}{\sqrt{n}}z_{1-\alpha},\,\infty\right)$ denote the upper asymptotic confidence interval for $\mu$, where the true non-covering probabilities satisfy the asymptotic relation
$$ P_{\mu}(ACI^{u} {\; does\; not\; cover\;} \mu)\rightarrow \alpha,\; n\rightarrow \infty. $$
Because $P_{\mu_{1,n}}\left(\bar{X}_{n}-\frac{\sigma}{\sqrt{n}}z_{1-\alpha}<\mu\right)=P_{\mu_{1,n}}\left(\sqrt{n}\frac{\bar{X}_{n}-\mu_{1,n}}{\sigma}\leq z_{\beta}\right)$, the covering probabilities under $n^{-1/2}$-locally chosen non-true parameters satisfy
$$P_{\mu_{1,n}}(ACI^{u} {\; covers\;} \mu) \rightarrow \beta, \; n\rightarrow \infty.$$
The aim of this note is to prove refinements of the latter two asymptotic relations where $\alpha=\alpha(n)\rightarrow 0$ and $\beta=\beta(n)\rightarrow 0$ as $n\rightarrow\infty$, and to prove similar results for two-sided confidence intervals and for the power function when testing corresponding hypotheses.
Expectation estimation
2.1 First and second kind adjusted one-sided confidence intervals
According to (Ibragimov and Linnik 1971; Linnik 1961), it is said that a random variable X satisfies the Linnik condition of order γ,0<γ<1/2, if
$$ {E}_{\mu} \exp\left\{|X-\mu|^{\frac{4\gamma}{2\gamma+1}}\right\} <\infty. $$
(1)
Let us define the first kind (or first order) adjusted asymptotic Gaussian quantile by
$$z_{1-\alpha(n)}(1)=z_{1-\alpha(n)} +\frac{g_{1}}{6\sqrt{n}} z^{2}_{1-\alpha(n)} $$
where $g_{1}=E(X-E(X))^{3}/\sigma^{3}$ is the skewness of $X$. Moreover, let the first kind (order) adjusted upper asymptotic confidence interval for $\mu$ be defined by
$$ACI^{u}(1)=\left[\left.\bar{X}_{n} -\frac{\sigma}{\sqrt{n}} z_{1-\alpha(n)}(1), \infty\right)\right. $$
and denote a first kind modified non-true local parameter choice by
$$\mu_{1,n}(1)=\mu_{1,n}+\frac{\sigma g_{1}} {6n} \left(z^{2}_{1-\alpha(n)}-z^{2}_{\beta(n)}\right). $$
Let us say that the probabilities α(n) and β(n) satisfy an Osipov-type condition of order γ if
$$ n^{\gamma}\exp\left\{\frac{n^{2\gamma}}{2}\right\} \cdot \min\left\{\alpha(n),\beta(n)\right\}\rightarrow \infty,\; n\rightarrow \infty. $$
(2)
This condition means that neither $\alpha(n)$ nor $\beta(n)$ tends to zero as fast as or even faster than $n^{-\gamma}\exp\{-n^{2\gamma}/2\}$, i.e. $\min\{\alpha(n),\beta(n)\}\gg n^{-\gamma}\exp\{-n^{2\gamma}/2\}$, and that $\max\{z_{1-\alpha(n)},z_{1-\beta(n)}\}=o(n^{\gamma})$, $n\rightarrow\infty$. Here, $o(.)$ stands for the small Landau symbol.
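As a quick illustration (mine, not the paper's), the boundary case $\alpha(n)=\beta(n)=\exp\{-n^{2\gamma}/2\}$ still satisfies (2), since

$$n^{\gamma}\exp\left\{\frac{n^{2\gamma}}{2}\right\}\exp\left\{-\frac{n^{2\gamma}}{2}\right\}=n^{\gamma}\rightarrow\infty,\; n\rightarrow\infty,$$

while any polynomial rate $\alpha(n)=n^{-\kappa}$, $\kappa>0$, satisfies it with plenty of room.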
If two functions f,g satisfy the relation \(\lim \limits _{n\rightarrow \infty }f(n)/g(n)=1\) then this asymptotic equivalence will be expressed as f(n)∼g(n),n→∞.
Theorem 1.
If $\alpha(n)\downarrow 0$, $\beta(n)\downarrow 0$ as $n\rightarrow\infty$ and conditions (1) and (2) are satisfied for $\gamma\in\left(\frac{1}{6},\frac{1}{4}\right]$ then
$$ P_{\mu} (ACI^{u}(1) {\; does\; not\; cover\;} \mu)\sim \alpha(n), \, n\rightarrow \infty $$
$$ P_{\mu_{1,n}(1)} (ACI^{u}(1) {\; covers\;} \mu)\sim \beta(n), \, n\rightarrow \infty. $$
Let us define the second kind adjusted asymptotic Gaussian quantile
$$z_{1-\alpha(n)}(2)=z_{1-\alpha(n)}(1) +\frac{3g_{2}-4{g_{1}^{2}}}{72n} z^{3}_{1-\alpha(n)} $$
where $g_{2}=E(X-E(X))^{4}/\sigma^{4}-3$ is the kurtosis of $X$, the second kind adjusted upper asymptotic confidence interval for $\mu$
$$ACI^{u}(2)=\left[\bar{X}_{n}-\frac{\sigma}{\sqrt{n}} z_{1-\alpha(n)}(2),\, \infty\right), $$
and a second kind modified non-true local parameter choice
$$\mu_{1,n}(2) =\mu_{1,n}(1)+ \frac{\sigma\left(3g_{2}-4{g_{1}^{2}}\right)} {72n^{3/2}}\left(z^{3}_{1-\alpha(n)}-z_{\beta(n)}^{3}\right). $$
Theorem 2.

If $\alpha(n)\downarrow 0$, $\beta(n)\downarrow 0$ as $n\rightarrow\infty$ and conditions (1) and (2) are satisfied for $\gamma\in\left(\frac{1}{4},\frac{3}{10}\right]$ then

$$ P_{\mu} (ACI^{u}(2) {\; does\; not\; cover\;} \mu)\sim \alpha(n), \, n\rightarrow \infty $$

$$ P_{\mu_{1,n}(2)} (ACI^{u}(2) {\; covers\;} \mu)\sim \beta(n), \, n\rightarrow \infty. $$
Remark 1.
Under the same assumptions, analogous results are true for lower asymptotic confidence intervals, i.e. for \(ACI^{l}(s)=\left (-\infty, \bar {X}_{n}+\frac {\sigma }{\sqrt {n}}z^{-}_{1-\alpha }(s)\right), s=1,2:\)
$$P_{\mu}(ACI^{l}(s)\; does\; not\; cover\; \mu) \sim \alpha(n) $$
$$P_{\mu^{-}_{1,n}(s)}(ACI^{l}(s)\; covers\; \mu) \sim \beta(n),\, n\rightarrow \infty. $$
Here, $z^{-}_{1-\alpha}(s)$ means the quantity $z_{1-\alpha}(s)$ with $g_{1}$ replaced by $-g_{1}$, $s=1,2$, and
$$\mu^{-}_{1,n}(s)=\mu-\frac{\sigma}{\sqrt{n}} (z_{1-\alpha}-z_{\beta})+\frac{\sigma g_{1}}{6n} \left(z^{2}_{1-\alpha}-z^{2}_{\beta}\right)- \frac{\sigma\left(3g_{2}-4{g_{1}^{2}}\right)} {72n^{3/2}}\left(z^{3}_{1-\alpha}- z^{3}_{\beta}\right)I_{\{2\}}(s). $$
In many situations where limit theorems are considered as they were in Section 1, the additional assumptions (1) and (2) may, possibly unnoticed, be fulfilled. In such situations, Theorems 1 and 2, together with the following theorem, give more insight into the asymptotic relations stated in Section 1.
Theorem 3.

Large Gaussian quantiles satisfy the asymptotic representation
$$z_{1-\alpha}=\sqrt{-2\ln\alpha-\ln|\ln\alpha|- \ln(4\pi)}\cdot \left(1+O\left(\frac{\ln|\ln\alpha|}{(\ln\alpha)^{2}}\right)\right), \alpha\rightarrow +0. $$
Note that O(.) means the big Landau symbol.
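For a numerical impression (an illustration, not part of the paper), take $\alpha=10^{-5}$. The leading term gives

$$z_{1-\alpha}\approx\sqrt{-2\ln\alpha-\ln|\ln\alpha|-\ln(4\pi)}=\sqrt{23.026-2.444-2.531}\approx 4.249,$$

which is already close to the exact quantile $z_{0.99999}\approx 4.265$.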
2.2 Two-sided confidence intervals
For $s\in\{1,2\}$, $\alpha>0$, put $L(s;\alpha)=\bar{X}_{n}-\frac{\sigma}{\sqrt{n}}z_{1-\alpha}(s)$ and $R(s;\alpha)=\bar{X}_{n}+\frac{\sigma}{\sqrt{n}}z^{-}_{1-\alpha}(s)$. Further, let $\alpha_{i}(n)>0$, $i=1,2$, $\alpha_{1}(n)+\alpha_{2}(n)<1$, and
$$ACI(s;\alpha_{1}(n),\alpha_{2}(n))=\left[L(s;\alpha_{1}(n)), R(s;\alpha_{2}(n))\right]. $$
If conditions (1) and (2) are fulfilled then $P_{\mu}((-\infty,L(s;\alpha_{1}(n)))\; covers\; \mu)\sim\alpha_{1}(n)$ and $P_{\mu}((R(s;\alpha_{2}(n)),\infty)\; covers\; \mu)\sim\alpha_{2}(n)$ as $n\rightarrow\infty$.
With the more detailed notation $\mu_{1,n}(s)=\mu_{1,n}(s;\alpha,\beta)$ and $\mu^{-}_{1,n}(s)=\mu^{-}_{1,n}(s;\alpha,\beta)$,
\(P_{\mu _{1,n}(s;\alpha _{1}(n),\beta _{1}(n))} ((L(s;\alpha _{1}(n)),\infty)\; covers\; \mu)\sim \beta _{1}(n)\),
\(P_{\mu ^{-}_{1,n}(s;\alpha _{2}(n),\beta _{2}(n))} ((-\infty, R(s;\alpha _{2}(n)))\; covers\; \mu)\sim \beta _{2}(n), n\rightarrow \infty.\)
The following corollary has thus been proved.
Corollary 1.
If $\alpha_{1}(n)\downarrow 0$, $\alpha_{2}(n)\downarrow 0$ as $n\rightarrow\infty$ and conditions (1) and (2) are satisfied for $\gamma\in\left(\frac{1}{6},\frac{1}{4}\right]$ if $s=1$ and for $\gamma\in\left(\frac{1}{4},\frac{3}{10}\right]$ if $s=2$, and with $(\alpha(n),\beta(n))=(\alpha_{1}(n),\alpha_{2}(n))$, then
$$ P_{\mu} (ACI(s;\alpha_{1}(n),\alpha_{2}(n)) {\; does\; not\; cover\;} \mu)\sim (\alpha_{1}(n)+\alpha_{2}(n)), \, n\rightarrow \infty. $$
Moreover,
$$ \max\limits_{\nu\in\{\mu_{1,n} (s;\alpha_{1}(n),\beta_{1}(n)), \mu^{-}_{1,n}(s;\alpha_{2}(n),\beta_{2}(n)) \}} P_{\nu} (ACI(s) {\; covers\;} \mu)\leq\max\left\{\beta_{1}(n),\beta_{2}(n)\right\}. $$
Significance tests

3.1 Adjusted quantiles
Let us consider the problem of testing the hypothesis $H_{0}:\mu\leq\mu_{0}$ versus the alternative $H_{A}:\mu>\mu_{0}$. The first and second kind adjusted decision rules of the one-sided asymptotic Gauss test suggest rejecting $H_{0}$ if $T_{n,0}>z_{1-\alpha(n)}(s)$ for $s=1$ or $s=2$, respectively, where $T_{n,0}=\sqrt{n}(\bar{X}_{n}-\mu_{0})/\sigma$. Because
$$ P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}(ACI^{u}(s) \; does\; not\; cover\; \mu_{0}), $$
it follows from Theorems 1 and 2 that under the conditions given there the (sequence of) probabilities of an error of first kind satisfy the asymptotic relation
$$ P_{\mu_{0}}(reject\; H_{0})\sim \alpha(n), n\rightarrow \infty. $$
Concerning the power function of this test, because
$$ P_{\mu_{1,n}(s)}(do\; not\; reject\; H_{0})= P_{\mu_{1,n}(s)}(ACI^{u}(s) \; covers\; \mu_{0}), $$
it follows under the same assumptions that the probabilities of a second kind error, in the case that the sequence of modified local parameters is $(\mu_{1,n}(s))_{n=1,2,\dots}$, satisfy
$$ P_{\mu_{1,n}(s)}(do\; not\; reject\; H_{0})\sim \beta(n), n\rightarrow \infty. $$
Similar consequences for testing $H_{1}:\mu>\mu_{0}$ or $H_{2}:\mu\neq\mu_{0}$ are omitted here.
3.2 Adjusted statistics
Let \(T_{n}{(1)}=T_{n}-\frac {g_{1}}{6\sqrt {n}}{T_{n}^{2}}\) and \(T_{n}{(2)}=T_{n}{(1)}-\frac {3g_{2}-8{g_{1}^{2}}}{72n}{T_{n}^{3}}\) be the first and second kind adjusted asymptotically Gaussian statistics, respectively, where \(T_{n}=\frac {\sqrt {n}}{\sigma }\left (\bar {X}_{n} - \mu \right)\).
Theorem 4.

If the conditions (1) and (2) are satisfied for a certain $\gamma\in\left(\frac{s}{2s+4},\frac{s+1}{2s+6}\right]$ where $s\in\{1,2\}$ then
$$P_{\mu_{0}}\left(T_{n}{(s)}>z_{1-\alpha(n)}\right)\sim \alpha(n), \; n\rightarrow \infty $$
$$ P_{\mu_{1,n}(s)}\left(T_{n}{(s)}\leq z_{1-\alpha(n)}\right)\sim \beta (n), \; n\rightarrow \infty. $$
Clearly, the results of this theorem apply to both hypothesis testing and confidence estimation in a similar way as described in the preceding sections.
The material of the present paper is part of a talk presented by the author at the Conference of European Statistics Stakeholders, Rome 2014, see Abstracts of Communication, p.90, and arXiv:1504.02553. A more advanced 'testing-part' of this talk is presented in (Richter 2016) and deals with higher order comparisons of statistical tests.
Application to exponential families
Let $\nu$ denote a σ-finite measure and assume that the distribution $P_{\vartheta}$ has the Radon-Nikodym density $\frac{dP_{\vartheta}}{d\nu}(x)=\frac{e^{\vartheta x}}{\int e^{\vartheta x}\nu(dx)}=e^{\vartheta x-B(\vartheta)}$, say. For basics on exponential families we refer to Brown (1986). We assume that $X(\vartheta)\sim P_{\vartheta}$ and $X_{1}=X(\vartheta)-EX(\vartheta)+\mu\sim\widetilde{P}_{\mu}$ where $\vartheta$ is known and $\mu$ is unknown. In the product-shift experiment $\left[R^{n},\mathfrak{B}^{n},\{\widetilde{P}^{\times n}_{\mu},\,\mu\in R\}\right]$, expectation estimation and testing may be done as in Sections 2 and 3, respectively, where $g_{1}=B'''(\vartheta)/(B''(\vartheta))^{3/2}$ and $g_{2}$ allows a similar representation.
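As a simple illustration (not spelled out in the paper), for the Poisson family one has $B(\vartheta)=e^{\vartheta}$, hence

$$g_{1}=\frac{B'''(\vartheta)}{(B''(\vartheta))^{3/2}}=\frac{e^{\vartheta}}{e^{3\vartheta/2}}=e^{-\vartheta/2}=\frac{1}{\sqrt{\lambda}},\qquad\lambda=e^{\vartheta}=E_{\vartheta}X,$$

the familiar skewness of the Poisson distribution with mean $\lambda$.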
Another problem which can be dealt with is testing the hypothesis $H_{0}:\vartheta\leq\vartheta_{0}$ versus the alternative $H_{1n}:\vartheta\geq\vartheta_{1n}$ if one assumes that the expectation function $\vartheta\rightarrow B'(\vartheta)=E_{\vartheta}X$ is strictly monotone. For this case, we finally present just the following particular result, which applies to both estimating and testing.
Proposition 1.
If conditions (1) and (2) are satisfied for \(\gamma \in \left (\frac {1}{6},\frac {1}{4}\right ]\) then
$$P_{\vartheta_{0}}^{\times n}\left(\sqrt{n}\frac{\overline{X}_{n}-B'(\vartheta_{0})}{\sqrt{B^{\prime\prime}(\vartheta_{0})}}> z_{1-\alpha(n)}+\frac{B^{\prime\prime\prime}(\vartheta_{0})}{6\sqrt{n}(B^{\prime\prime}(\vartheta_{0}))^{3/2}} z^{2}_{1-\alpha(n)}\right)\sim\alpha(n),n\;\rightarrow \infty. $$
Sketch of proofs
Proof of Theorems 1 and 2.
If condition (2) is satisfied then $x=z_{1-\alpha(n)}=o(n^{\gamma})$, $n\rightarrow\infty$, for $\gamma\in\left(\frac{1}{6},\frac{3}{10}\right]$, and if (1) holds then, according to (Linnik 1961; Nagaev 1965), $P_{\mu}(T_{n}>x)\sim f_{n,s}^{(X)}(x)$, $x\rightarrow\infty$, where $f_{n,s}^{(X)}(x)=\frac{1}{\sqrt{2\pi}x}\exp\left\{-\frac{x^{2}}{2}+\frac{x^{3}}{\sqrt{n}}\sum_{k=0}^{s-1}a_{k}\left(\frac{x}{\sqrt{n}}\right)^{k}\right\}$ and $s$ is an integer satisfying $\frac{s}{2(s+2)}<\gamma\leq\frac{s+1}{2(s+3)}$, i.e. $s=1$ if $\gamma\in\left(\frac{1}{6},\frac{1}{4}\right]$ and $s=2$ if $\gamma\in\left(\frac{1}{4},\frac{3}{10}\right]$. Here, the constants $a_{0}=\frac{g_{1}}{6}$, $a_{1}=\frac{g_{2}-3g_{1}^{2}}{24}$ are due to the skewness $g_{1}$ and kurtosis $g_{2}$ of $X$. Note that $\frac{g_{1}x^{2}}{6\sqrt{n}}=o(x)$ because $x=o(n^{1/2})$, thus $x+\frac{g_{1}x^{2}}{6\sqrt{n}}=o(n^{\gamma})$, and $P_{\mu}\left(T_{n}>x+\frac{g_{1}x^{2}}{6\sqrt{n}}\right)\sim f_{n,1}\left(x+\frac{g_{1}x^{2}}{6\sqrt{n}}\right)$. Hence, $P_{\mu}\left(T_{n}>x+\frac{g_{1}x^{2}}{6\sqrt{n}}\right)\sim 1-\Phi(x)$. Similarly, $P_{\mu}(T_{n}>z_{1-\alpha(n)}(s))\sim\alpha(n)$, $s=1,2$. Further, $P_{\mu_{1,n}(s)}(T_{n}\leq z_{1-\alpha(n)}(s))$
$$ =P_{\mu_{1,n}(s)}\left(\frac{\sqrt{n}}{\sigma}(\bar{X}_{n}-\mu_{1,n}(s))< z_{1-\alpha(n)}(s)-\frac{\sqrt{n}}{\sigma}(\mu_{1,n}(s)-\mu)\right) =P_{0}\left(\frac{\sqrt{n}}{\sigma}\bar{X}_{n}< z_{\beta(n)}(s)\right). $$
The latter equality holds because $\{P_{\mu},\mu\in(-\infty,\infty)\}$ is assumed to be a shift family. It follows that $P_{\mu_{1,n}(s)}(T_{n}\leq z_{1-\alpha(n)}(s))$
$$ =P_{0}\left(\frac{\sqrt{n}}{\sigma} (-\bar{X}_{n})\geq z_{1-\beta(n)}+ \frac{-g_{1}}{6\sqrt{n}}z^{2}_{1-\beta(n)} +I_{\{2\}}(s) \frac{3g_{2}-4{g_{1}^{2}}}{72n}z^{3}_{1-\beta(n)}\right). $$
Note that $-g_{1}$ and $g_{2}$ are the skewness and kurtosis of $-X_{1}$. Thus,
$$P_{\mu_{1,n}(s)}\left(T_{n}\leq z_{1-\alpha(n)}(s)\right)\sim f_{n,s}^{(-X)}(z_{1-\beta(n)}(s)) \sim\beta(n), n \rightarrow \infty. $$
Because $P_{\mu}(ACI^{u}\; does\; not\; cover\; \mu)=P_{\mu}(T_{n}>z_{1-\alpha(n)}(s))$ and $P_{\mu_{1,n}(s)}(ACI^{u}\; covers\; \mu)=P_{\mu_{1,n}(s)}(T_{n}\leq z_{1-\alpha(n)}(s))$, the theorems are proved.
Proof of Remark 1.
The first statement of the remark follows from
$$P_{\mu}\left(\mu>\bar{X}_{n}+\sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right) =P_{\mu}\left(\sqrt{n}(-\bar{X}_{n}+\mu)/\sigma >z^{-}_{1-\alpha(n)}\right) $$
and the second one from
$$P_{\mu^{-}_{1,n}(s)}\left(\mu< \bar{X}_{n}+ \sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right) =P_{0}\left(\bar{X}_{n}>\mu-\mu^{-}_{1,n}(s)-\sigma z^{-}_{1-\alpha(n)}/\sqrt{n}\right)$$
$$=P_{0}\left(\sqrt{n}\bar{X}_{n}/\sigma > z_{1-\beta(n)}(s)\right). $$
Proof of Theorem 3.
We start from the well known relations
$$\alpha=1-\Phi(z_{1-\alpha})= \left(1+O\left(\frac{1}{z^{2}_{1-\alpha}}\right)\right)\frac{1}{\sqrt{2\pi}z_{1-\alpha}} e^{-\frac{z^{2}_{1-\alpha}}{2}},\;\alpha\rightarrow 0. $$
The solution to the approximative quantile equation \(\alpha =\frac {1}{\sqrt {2\pi }x}e^{-\frac {x^{2}}{2}}\) will be denoted by x=x 1−α . Let us put
$$ xe^{\frac{x^{2}}{2}}=\frac{1}{\sqrt{2\pi}\alpha}=:y. $$
(3)
If $x\geq 1$ then it follows from (3) that $y\geq e^{\frac{x^{2}}{2}}$, hence $x^{2}\leq\ln(y^{2})$. It follows again from (3) that $y^{2}\leq\ln(y^{2})e^{x^{2}}$, thus $x^{2}\geq\ln\left(\frac{y^{2}}{\ln y^{2}}\right)$. After one more such step,
$$\ln\left(\frac{y^{2}}{\ln y^{2}}\right)\leq x^{2}\leq\ln\left[\frac{y^{2}}{\ln\left(\frac{y^{2}}{\ln y^{2}}\right)}\right]. $$
The theorem now follows from
$$x^{2}=\left\{\ln y^{2}-\ln 2-\ln\ln y\right\}\left\{1+O\left(\frac{\ln\ln y}{(\ln y^{2})^{2}}\right)\right\}, y\rightarrow \infty. $$
Let us remark that the inverse of the function $w\rightarrow we^{w}$ is called the Lambert W function. An asymptotic representation of the solution of (3) as $y\rightarrow\infty$ can therefore be derived from the more general representation (4.19) of $W$ in (Corless et al. 1996) if one reads (3) as $we^{w}=y^{2}$ with $w=x^{2}$. Our derivation of the particular result needed here, however, is much more elementary than the general one given in the paper just mentioned.
Proof of Theorem 4.

Recognize that if $g_{n,s}(x)=o\left(\frac{1}{x}\right)$, $x\rightarrow\infty$, then $f^{(\pm X)}_{n,s}(x+g_{n,s}(x))\sim f^{(\pm X)}_{n,s}(x)$, $x\rightarrow\infty$. Let us restrict to the case $s=1$. According to (Linnik 1961),
$$P_{\mu_{0}}(T_{n}{(1)}>z_{1-\alpha(n)})\sim P_{\mu_{0}}\left(\frac{3\sqrt{n}}{g_{1}}>T_{n}{(1)}>z_{1-\alpha(n)}\right). $$
The function $f^{(1)}_{n}(t)=t-\frac{g_{1}t^{2}}{6\sqrt{n}}$ has a positive derivative, $f^{(1)'}_{n}(t)=1-\frac{g_{1}t}{3\sqrt{n}}>0$, if $g_{1}t<3\sqrt{n}$. Denoting the inverse function of $f_{n}^{(1)}$ there by $f_{n}^{(1)^{-1}}$, it follows that $f_{n}^{(1)^{-1}}(x)=x+\frac{g_{1}x^{2}}{6\sqrt{n}}+O\left(\frac{x^{3}}{n}\right)$ and $f^{(1)}_{n}\left(f_{n}^{(1)^{-1}}(x)\right)=x+o\left(\frac{1}{x}\right)$. Thus,
$$P_{\mu_{o}}(T_{n}{(1)}>z_{1-\alpha(n)})\sim P_{\mu_{o}}(T_{n}>z_{1-\alpha(n)}(1)) \sim\alpha(n). $$
Moreover, $P_{\mu_{1n}(1;\alpha(n),\beta(n))}(T_{n}(1)\leq z_{1-\alpha(n)}) = P_{\mu_{1n}(1;\alpha(n),\beta(n))}\left(T_{n}\leq\left(f_{n}^{(1)}\right)^{-1}(z_{1-\alpha(n)})\right) = P_{\mu_{1n}(1;\alpha(n),\beta(n))}\left(\sqrt{n}\frac{\overline{X}_{n}-\mu_{1n}(1)}{\sigma}\leq z_{1-\alpha(n)}(1)+\frac{z^{2}_{1-\alpha(n)}g_{1}}{6\sqrt{n}}+O\left(\frac{z^{3}_{1-\alpha(n)}}{n}\right)-\sqrt{n}\frac{\mu_{1n}(1)-\mu_{0}}{\sigma}\right) \sim f_{n,1}^{(-X)}\left(-z_{\beta(n)}(1)+O\left(\frac{z^{3}_{1-\alpha(n)}}{n}\right)\right) \sim 1-\Phi(z_{1-\beta(n)})=\beta(n)$.
Proof of Proposition 1.
Because

$$P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}(ACI^{u}(1)\; does\; not\; cover\; \mu_{0}), $$
it follows by Theorem 1 that
$$P_{\mu_{0}}(reject\; H_{0})=P_{\mu_{0}}\left(\bar{X}_{n}-\frac{\sigma}{\sqrt{n}}z_{1-\alpha}(1)>\mu_{0}\right) =P_{\mu_{0}}\left(\sqrt{n}\frac{\bar{X}_{n}-\mu_{0}}{\sigma}>z_{1-\alpha}(1)\right). $$
With \(P_{\mu _{0}}=P_{\vartheta _{0}}^{\times n}, \mu _{0}=B'(\vartheta _{0}), \sigma ^{2}=B^{\prime \prime }(\vartheta _{0})\) and B ′′′(𝜗 0)/(B ′′(𝜗 0))3/2=g 1, the proof of Proposition 1 is finished.
Brown, LD: Fundamentals of statistical exponential families. IMS, Lecture Notes and Monograph Series. Hayward, CA (1986).
Corless, RM, Gonnet, GH, Hare, DEG, Jeffrey, DJ, Knuth, DE: On the Lambert W function. Adv. Comp. Math. 5, 329–359 (1996).
Ibragimov, IA, Linnik, YV: Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen. Translation from the Russian edition, 1965 (1971).
Linnik, YV: Limit theorems for sums of independent variables taking into account large deviations. I-III. Theor. Probab. Appl. 6 (1961). 131–148, 345–360; 7 (1962), 115–129.
Nagaev, SV: Some limit theorems for large deviations. Theory Probab. Appl. 10, 214–235 (1965).
Osipov, LV: Multidimensional limit theorems for large deviations. Theory Probab. Appl. 20, 38–56 (1975).
Petrov, VV: Asymptotic behaviour of probabilities of large deviations. Theor. Probab. Appl. 13, 408–420 (1968).
Richter, W-D: Skewness-kurtosis controlled higher order equivalent decisions. Open Stat. Probability J. 7, 1–9 (2016). doi:10.2174/1876527001607010001.
The author is grateful to the Reviewers for their valuable hints and declares no conflicts of interest.
Institute of Mathematics, University of Rostock, Ulmenstraße 69, Haus 3, Rostock, 18057, Germany
Wolf-Dieter Richter
Correspondence to Wolf-Dieter Richter.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Richter, WD. Skewness-kurtosis adjusted confidence estimators and significance tests. J Stat Distrib App 3, 4 (2016). https://doi.org/10.1186/s40488-016-0042-3
Orders of confidence
Orders of modified local alternatives
True non-covering probabilities
Local non-true parameter choice
Covering probabilities
Linnik condition
Osipov-type condition
Skewness-kurtosis adjusted decisions
Order of significance
Error probabilities of first and second kind
Exponential family
Large deviations
Mathematics Subject Classification
62E20; 62F05; 62F12; 60F10
Cantor spectra of magnetic chain graphs
Article in Journal of Physics A: Mathematical and Theoretical 50(16), November 2016
DOI: 10.1088/1751-8121/aa6328
Pavel Exner
Daniel Vasata
We demonstrate that a one-dimensional magnetic system can exhibit a Cantor-type spectrum using the example of a chain graph with $\delta$ coupling at the vertices, exposed to a magnetic field perpendicular to the graph plane and varying along the chain. If the field grows linearly with an irrational slope, measured in terms of the flux through the loops of the chain, we demonstrate the character of the spectrum by relating it to the almost Mathieu operator.
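For reference, the almost Mathieu operator invoked here is the standard discrete operator on $\ell^2(\mathbb{Z})$ (a textbook definition, not quoted from this abstract),

$$(H_{\lambda,\alpha,\theta}u)_{n}=u_{n+1}+u_{n-1}+2\lambda\cos\big(2\pi(\theta+n\alpha)\big)u_{n},$$

whose spectrum is a Cantor set for every irrational frequency $\alpha$ and every coupling $\lambda\neq 0$; see the Ten Martini Problem entry among the references below.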
A family of quantum graph vertex couplings interpolating between different symmetries
J PHYS A-MATH THEOR
Ondřej Turek
Miloš Tater
The paper discusses quantum graphs with the vertex coupling which interpolates between the common one of the $\delta$ type and a coupling introduced recently by two of the authors which exhibits a preferred orientation. Describing the interpolation family in terms of circulant matrices, we analyze the spectral and scattering properties of such vertices, and investigate the band spectrum of the corresponding square lattice graph.
Cantor spectrum of graphene in magnetic fields
INVENT MATH
Simon Becker
Rui Han
Svetlana Jitomirskaya
We consider a quantum graph as a model of graphene in magnetic fields and give a complete analysis of the spectrum, for all constant fluxes. In particular, we show that if the reduced magnetic flux $\Phi/2\pi$ through a honeycomb is irrational, the continuous spectrum is an unbounded Cantor set of Lebesgue measure zero.
Geometry of Sets and Measures in Euclidean Spaces
P. Mattila
Rieffel: C * -algebras associated with irrational rotations
M.A. Rieffel: C * -algebras associated with irrational rotations, Pacific J. Math., 93 (1981), 415– 429.
Shubin: Discrete magnetic Laplacian
M.A. Shubin: Discrete magnetic Laplacian, Commun. Math. Phys., 164 (1994), 259–275.
Jacobi Operators and Completely Integrable Nonlinear Lattices
G. Teschl: Jacobi Operators and Completely Integrable Nonlinear Lattices, Mathematical Surveys and Monographs, vol. 72, AMS, 2000.
The general motion of conduction electrons in a uniform magnetic field, with application to the diamagnetism of metals
P G Harper
P.G. Harper: The general motion of conduction electrons in a uniform magnetic field, with application to the diamagnetism of metals, Proc. Phys. Soc. A 68 (1955), 879–892.
Hofstadter: Energy levels and wavefunctions of Bloch electrons in rational and irrational magnetic fields
D.R. Hofstadter: Energy levels and wavefunctions of Bloch electrons in rational and irrational magnetic fields, Phys. Rev. B14 (1976), 2239–2249.
Power: Simplicity of C * -algebras of minimal dynamical systems
S.C. Power: Simplicity of C * -algebras of minimal dynamical systems, J. London Math. Soc., 3 (1978), 534–538.
Zero Hausdorff Dimension Spectrum for the Almost Mathieu Operator
COMMUN MATH PHYS
Yoram Last
Mira Shamis
We study the almost Mathieu operator at critical coupling. We prove that there exists a dense $G_\delta$ set of frequencies for which the spectrum is of zero Hausdorff dimension.
Spectra of magnetic chain graphs: Coupling constant perturbations
Stepan S. Manko
We analyze spectral properties of a quantum graph in the form of a ring chain with a $\delta$ coupling in the vertices exposed to a homogeneous magnetic field perpendicular to the graph plane. We find the band spectrum in the case when the chain exhibits a translational symmetry and study the discrete spectrum in the gaps resulting from changing a finite number of vertex coupling constants. In particular, we discuss in details some examples such as perturbations of one or two vertices, weak perturbation asymptotics, and a pair of distant perturbations.
Fractal Geometry: Mathematical Foundations and Applications
K J Falconer
J Wiley
Almost Periodic Schrödinger Operators. II. The Integrated Density of States
DUKE MATH J
J. E. Avron
Barry Simon
Introduction to quantum graphs
Gregory Berkolaiko
Peter Kuchment
Locally compact transformation groups and $C^*$-algebras
B AM MATH SOC
Edward Effros
Frank Hahn
Almost periodic Schrödinger operators: A Review
We review the recent rigorous literature on the one dimensional Schrödinger equation, $H=-\frac{d^{2}}{dx^{2}}+V(x)$ with $V(x)$ almost periodic, and the discrete (= tight binding) analog, i.e. the doubly infinite Jacobi matrix $h_{ij}=\delta_{i,j+1}+\delta_{i,j-1}+v_{i}\delta_{ij}$ with $v_{i}$ almost periodic on the integers. Two themes dominate. The first is that the gaps in the spectrum tend to be dense, so that the spectrum is a Cantor set. We describe intuitions for this from the point of view of where gaps open and from the point of view of anomalous long time behaviour. We give a theorem of Avron-Simon, Chulaevsky and Moser that for a generic sequence with $\sum|a_{n}|<\infty$, the continuum operator with $V(x)=\sum a_{n}\cos(x/2^{n})$ has a Cantor spectrum. The second theme involves unusual spectral types that tend to occur. We describe recurrent absolutely continuous spectrum and show it occurs in some examples of the type just discussed. We give an intuition for dense point spectrum to occur and some theorems on the occurrence of point spectrum. We sketch the proof of Avron-Simon that for the discrete case with $V_{n}=\lambda\cos(2\pi\alpha n+\theta)$, if $\lambda>2$ and $\alpha$ is a Liouville number, then for a.e. $\theta$, $h$ has purely singular continuous spectrum.
Azbel: Energy spectrum of a conduction electron in a magnetic field
M Ya
M.Ya. Azbel: Energy spectrum of a conduction electron in a magnetic field, J. Exp. Theor. Phys 19 (1964), 634–645.
Cloning of Dirac fermions in graphene superlattices
L. A. Ponomarenko
Roman Gorbachev
Geliang Yu
A. K. Geim
Superlattices have attracted great interest because their use may make it possible to modify the spectra of two-dimensional electron systems and, ultimately, create materials with tailored electronic properties. In previous studies (see, for example, refs 1, 2, 3, 4, 5, 6, 7, 8), it proved difficult to realize superlattices with short periodicities and weak disorder, and most of their observed features could be explained in terms of cyclotron orbits commensurate with the superlattice. Evidence for the formation of superlattice minibands (forming a fractal spectrum known as Hofstadter's butterfly) has been limited to the observation of new low-field oscillations and an internal structure within Landau levels. Here we report transport properties of graphene placed on a boron nitride substrate and accurately aligned along its crystallographic directions. The substrate's moiré potential acts as a superlattice and leads to profound changes in the graphene's electronic spectrum. Second-generation Dirac points appear as pronounced peaks in resistivity, accompanied by reversal of the Hall effect. The latter indicates that the effective sign of the charge carriers changes within graphene's conduction and valence bands. Strong magnetic fields lead to Zak-type cloning of the third generation of Dirac points, which are observed as numerous neutrality points in fields where a unit fraction of the flux quantum pierces the superlattice unit cell. Graphene superlattices such as this one provide a way of studying the rich physics expected in incommensurable quantum systems and illustrate the possibility of controllably modifying the electronic spectra of two-dimensional atomic crystals by varying their crystallographic alignment within van der Waals heterostuctures.
Hofstadter's butterfly and the fractal quantum Hall effect in Moire superlattices
Cory R. Dean
Lei Wang
Patrick Maher
Phaly Kim
Electrons moving through a spatially periodic lattice potential develop a quantized energy spectrum consisting of discrete Bloch bands. In two dimensions, electrons moving through a magnetic field also develop a quantized energy spectrum, consisting of highly degenerate Landau energy levels. When subject to both a magnetic field and a periodic electrostatic potential, two-dimensional systems of electrons exhibit a self-similar recursive energy spectrum. Known as Hofstadter's butterfly, this complex spectrum results from an interplay between the characteristic lengths associated with the two quantizing fields, and is one of the first quantum fractals discovered in physics. In the decades since its prediction, experimental attempts to study this effect have been limited by difficulties in reconciling the two length scales. Typical atomic lattices (with periodicities of less than one nanometre) require unfeasibly large magnetic fields to reach the commensurability condition, and in artificially engineered structures (with periodicities greater than about 100 nanometres) the corresponding fields are too small to overcome disorder completely. Here we demonstrate that moiré superlattices arising in bilayer graphene coupled to hexagonal boron nitride provide a periodic modulation with ideal length scales of the order of ten nanometres, enabling unprecedented experimental access to the fractal spectrum. We confirm that quantum Hall features associated with the fractal gaps are described by two integer topological quantum numbers, and report evidence of their recursive structure. Observation of a Hofstadter spectrum in bilayer graphene means that it is possible to investigate emergent behaviour within a fractal energy landscape in a system with tunable internal degrees of freedom.
The spectrum of the continuous Laplacian on a graph
MONATSH MATH
Carla Cattaneo
We study the spectrum of the continuous Laplacian Δ on a countable connected locally finite graph Γ without self-loops, whose edges have suitable positive conductances and are identified with copies of segments [0, 1], with the condition that the sum of the weighted normal exterior derivatives is 0 at every node (Kirchhoff-type condition). In particular, we analyse the relation-between the spectrum of the operator Δ and the spectrum of the discrete Laplacian (I - P) defined on the vertices of Γ.
Spectral Theory of Sturm-Liouville Operators on Infinite Intervals: A Review of Recent Developments
This review discusses some of the central developments in the spectral theory of Sturm-Liouville operators on infinite intervals over the last thirty years or so. We discuss some of the natural questions that occur in this framework and some of the main models that have been studied.
Gauss polynomials and the rotation algebra
Man-Duen Choi
George A. Elliott
Noriko Yui
Newton's binomial theorem is extended to an interesting noncommutative setting as follows: if, in a ring, $ba=\gamma ab$ with $\gamma$ commuting with $a$ and $b$, then the (generalized) binomial coefficient $\binom{n}{k}_{\gamma}$ arising in the expansion $(a+b)^{n}=\sum_{k=0}^{n}\binom{n}{k}_{\gamma}a^{n-k}b^{k}$ (resulting from these relations) is equal to the value at $\gamma$ of the Gaussian polynomial $\left[\begin{smallmatrix}n\\k\end{smallmatrix}\right]=\frac{[n]}{[k][n-k]}$ where $[m]=(1-x^{m})(1-x^{m-1})\cdots(1-x)$. (This is of course known in the case $\gamma=1$.) From this it is deduced that in the (universal) $C^{*}$-algebra $A_{\theta}$ generated by unitaries $u$ and $v$ such that $vu=e^{2\pi i\theta}uv$, the spectrum of the self-adjoint element $(u+v)+(u+v)^{*}$ has all the gaps that have been predicted to exist, provided that either $\theta$ is rational or $\theta$ is a Liouville number. (In the latter case, the gaps are labelled in the natural way, via K-theory, by the set of all non-zero integers, and the spectrum is a Cantor set.)
Quantum Wires with Magnetic Fluxes
Vadim Kostrykin
Robert Schrader
In the present article magnetic Laplacians on a graph are analyzed. We provide a complete description of the set of all operators which can be obtained from a given self-adjoint Laplacian by perturbing it by magnetic fields. In particular, it is shown that generically this set is isomorphic to a torus. We also describe the conditions under which the operator is unambiguously (up to unitary equivalence) defined by prescribing the magnetic fluxes through all loops of the graph.
Unitary dimension reduction for a class of self-adjoint extensions with applications to graph-like structures
J Math Anal Appl
Konstantin Pankrashkin
We consider a class of self-adjoint extensions using the boundary triple technique. Assuming that the associated Weyl function has the special form $M(z)=\big(m(z)\Id-T\big) n(z)^{-1}$ with a bounded self-adjoint operator $T$ and scalar functions $m,n$ we show that there exists a class of boundary conditions such that the spectral problem for the associated self-adjoint extensions in gaps of a certain reference operator admits a unitary reduction to the spectral problem for $T$. As a motivating example we consider differential operators on equilateral metric graphs, and we describe a class of boundary conditions that admit a unitary reduction to generalized discrete laplacians.
Fractal Geometry : Mathematical Foundations and Applications / K. Falconer.
Kenneth Falconer
An introduction to the mathematical foundations and applications of fractal geometry.
Reducibility or nonuniform hyperbolicity for quasiperiodic Schrödinger cocycles
ANN MATH
Artur Avila
Raffi Krikorian
We show that for almost every frequency $\alpha\in\mathbb{R}\setminus\mathbb{Q}$, for every $C^{\omega}$ potential $v:\mathbb{R}/\mathbb{Z}\rightarrow\mathbb{R}$, and for almost every energy $E$ the corresponding quasiperiodic Schrödinger cocycle is either reducible or nonuniformly hyperbolic. This result gives very good control on the absolutely continuous part of the spectrum of the corresponding quasiperiodic Schrödinger operator, and allows us to complete the proof of the Aubry-André conjecture on the measure of the spectrum of the Almost Mathieu Operator.
Metal-Insulator Transition for the Almost Mathieu Operator
Svetlana Jitomirskaya
We prove that for Diophantine $\omega$ and almost every $\theta$, the almost Mathieu operator, $(H_{\omega,\lambda,\theta}\psi)(n)=\psi(n+1)+\psi(n-1)+\lambda\cos 2\pi(\omega n+\theta)\,\psi(n)$, exhibits localization for $\lambda>2$ and purely absolutely continuous spectrum for $\lambda<2$. This completes the proof of (a correct version of) the Aubry-André conjecture.
A duality between Schrodinger operators on graphs and certain Jacobi matrices
The aim of this paper is to show that the same duality can be established for a wide class of Schrödinger operators on graphs, including the case of a nonempty boundary. In general, the resulting Jacobi matrices exhibit a varying "mass".
The Ten Martini Problem
Artur Avila
S. Jitomirskaya
We prove the conjecture (known as the ``Ten Martini Problem'' after Kac and Simon) that the spectrum of the almost Mathieu operator is a Cantor set for all non-zero values of the coupling and all irrational frequencies.
Spectra of self-adjoint extensions and applications to solvable Schrödinger operators
REV MATH PHYS
Jochen Brüning
Vladimir Geyler
We give a self-contained presentation of the theory of self-adjoint extensions using the technique of boundary triples. A description of the spectra of self-adjoint extensions in terms of the corresponding Krein maps (Weyl functions) is given. Applications include quantum graphs, point interactions, hybrid spaces, singular perturbations.
Cantor and Band Spectra for Periodic Quantum Graphs with Magnetic Fields
We provide an exhaustive spectral analysis of the two-dimensional periodic square graph lattice with a magnetic field. We show that the spectrum consists of the Dirichlet eigenvalues of the edges and of the preimage of the spectrum of a certain discrete operator under the discriminant (Lyapunov function) of a suitable Kronig-Penney Hamiltonian. In particular, between any two Dirichlet eigenvalues the spectrum is a Cantor set for an irrational flux, and is absolutely continuous and has a band structure for a rational flux. The Dirichlet eigenvalues can be isolated or embedded, subject to the choice of parameters. Conditions for both possibilities are given. We show that generically there are infinitely many gaps in the spectrum, and the Bethe-Sommerfeld conjecture fails in this case.
Nuclear magnetic resonance evidence for a strong modulation of the Bose-Einstein condensate in BaCuSi2O6
September 2007 · Physical review. B, Condensed matter
Steffen Krämer
Raivo Stern
M. Horvatic
R. Fisher
We present a Cu-63, Cu-65 and Si-29 NMR study of the quasi-2D coupled spin-1/2 dimer compound BaCuSi2O6 in the magnetic field range 13-26 T and at temperatures as low as 50 mK. NMR data in the gapped phase reveal that below 90 K different intradimer exchange couplings and different gaps ($\Delta_{B}/\Delta_{A}=1.16$) exist in every second plane along the c axis, in addition to a planar incommensurate (IC) modulation. Si-29 spectra in the field-induced magnetic ordered phase reveal that close to the quantum critical point at $H_{c1}=23.35$ T the average boson density $n$ of the Bose-Einstein condensate is strongly modulated along the c axis, with a density ratio for every second plane $n_{A}/n_{B}\simeq 5$. An IC modulation of the local density is also present in each plane.
Quantum graphs with vertices of a preferred orientation
Motivated by a recent application of quantum graphs to model the anomalous Hall effect, we discuss quantum graphs the vertices of which exhibit a preferred orientation. We describe an example of such a vertex coupling and analyze the corresponding band spectra of lattices with square and hexagonal elementary cells, showing that they depend heavily on the network topology, in particular on the degrees of the vertices involved.
Strong-electric-field eigenvalue asymptotics for the Iwatsuka model
May 2005 · Journal of Mathematical Physics
Shin-ichi Shirai
We consider the two-dimensional Schrödinger operator, $H_{g}(b)=-\partial^{2}/\partial x^{2}+[(1/\sqrt{-1})(\partial/\partial y)-b(x)]^{2}-gV(x,y)$, where $V$ is a non-negative scalar potential decaying at infinity like $(1+|x|+|y|)^{-m}$, and $(0,b(x))$ is a magnetic vector potential. Here, $b$ is of the form $b(x)=\int_{0}^{x}B(t)\,dt$ and the magnetic field $B$ is assumed to be positive, bounded, and monotonically increasing on $\mathbb{R}$ (the Iwatsuka model). Following the argument as in Refs. 15, 16, and 17 [Raikov, G. D., Lett. Math. Phys., 21, 41–49 (1991); Raikov, G. D., Commun. Math. Phys., 155, 415–428 (1993); Raikov, G. D., Asymptotic Anal., 16, 87–89 (1998)], we obtain the asymptotics of the number of discrete spectra of $H_{g}(b)$ crossing a real number $\lambda$ in the gap of the essential spectrum as the coupling constant $g$ tends to $\pm\infty$, respectively.
Energy spectrum for two-dimensional periodic potentials in a magnetic field
June 1993 · Physical review. B, Condensed matter
Oliver Kühn
Vassilios Fessatidis
Hong-Liang Cui
Norman J. M. Horing
The single-particle energy spectrum of two-dimensional electrons subject to a two-dimensional periodic potential and a perpendicular magnetic field is investigated. Effects of the potential shape as well as of Landau-level coupling are studied. In addition to the well-known recursive structure of the uncoupled Landau bands, it is found that the steepness of the potential is decisive for the actual form of the spectra. The coupling between different Landau bands leads to an increased width of the magnetic subbands.
Periodic quantum graphs from the Bethe-Sommerfeld perspective
May 2017 · Journal of Physics A Mathematical and Theoretical
The paper is concerned with the number of open gaps in spectra of periodic quantum graphs. The well-known conjecture by Bethe and Sommerfeld (1933) says that the number of open spectral gaps for a system periodic in more than one direction is finite. To date its validity is established for numerous systems; however, it is known that quantum graphs do not comply with this law, as their spectra typically have infinitely many gaps, or no gaps at all. These facts gave rise to the question about the existence of quantum graphs with the 'Bethe-Sommerfeld property', that is, featuring a nonzero finite number of gaps in the spectrum. In this paper we prove that the said property is impossible for graphs with vertex couplings which are either scale-invariant or associated to scale-invariant ones in a particular way. On the other hand, we demonstrate that quantum graphs with a finite number of open gaps do indeed exist. We illustrate this phenomenon on an example of a rectangular lattice with a $\delta$ coupling at the vertices and a suitable irrational ratio of the edges. Our result allows one to find explicitly a quantum graph with any prescribed exact number of gaps, which is the first such example to date.
Complying With the Latest Security Policies
2020-04-12 · 12 min read · Behind the scenes, Blog improvements, Tech, Sysadmin stuff, Web stuff · Teknikal_Domain
default-src
script-src
style-src
img-src
font-src
connect-src
media-src
frame-src
frame-ancestors
manifest-src
prefetch-src
block-all-mixed-content
Cross-Origin Resource Sharing (CORS)
HTTP Public Key Pinning (HPKP)
HSTS Preloading
Referrer Policy
Server Header
Subresource Integrity (SRI)
Modern websites and modern browsers support a wide range of security features to communicate specifically what is and is not allowed to be loaded, executed, or sent over the network. Being the person that I am, I'm going to comply with the latest guidelines and best practices as much as I can… and it's a headache.
The biggest one has to be the Content Security Policy, which is a giant list of "you can load X from Y". Before showing my CSP, here's a quick run-down:
You can specify controls for JavaScript, stylesheets, images, fonts, WebSocket / XHR connections, other non-image media, objects, embeds, & applets, prefetch / pre-rendered content, child content, <iframe> targets, Worker sources (not the Cloudflare type), domains allowed to <iframe> the site (see: X-Frame-Options), allowed <form> action targets, upgrade HTTP requests for resources, block HTTP resources on HTTPS page, allowed document base URI, web manifests, and allowed plugins.
While most can take a list of domains, they can also take *, meaning "allow all"; self and none should be pretty self-explanatory, and… well, the rest I'll get to in a second.
And yes, I know that disclosing security details is usually a bad thing, but it's also sent over on every request, so it's not exactly private knowledge.
Finally, this has been formatted for ease of reading:
'none'
'self'
'unsafe-eval'
'sha256-4ExdAblVQS3vP9+dOQPogQfTvVY/B4KGnQWH42NhRg0='
'sha256-CmjHf3aA+ooXJwVib2vComMSJWhXsR6tbZ+gj+s3zSU='
*.algolianet.com
https://ajax.cloudflare.com:443
https://static.cloudflareinsights.com:443
https://cdnjs.cloudflare.com:443
https://cdn.ko-fi.com:443
https://ko-fi.com:443
https://c.disquscdn.com:443
https://disqus.com:443
https://teknikaldomain-me.disqus.com:443
https://www.gstatic.com:443
'unsafe-inline'
https://fonts.googleapis.com:443
https://fonts.gstatic.com:443
*.algolia.net
https://links.services.disqus.com:443
object-src
child-src
Put simply: block everything unless explicitly allowed. I could have gone with self just in case, but… eh if every tool complains about self, I'll just use none.
You're likely not happy with this one. The entire reason I need to allow the use of eval() and those two hashes is MathJax, the thing that allows me to write $\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$ and get this: $\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}$. The hashes mean that only a script with that exact cryptographic hash is allowed, instead of allowing all inline JavaScript. Luckily, since there's no unsafe-inline (because hashes are used), you can't inject scripts.
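In case you're wondering where those sha256-… values come from: a CSP script hash is just the base64-encoded SHA-256 digest of the inline script's exact text. A quick sketch (the inline script here is a made-up stand-in, not one of mine):

```python
# Compute a CSP 'sha256-...' source token for an inline script.
# The hash covers the exact bytes between <script> and </script>,
# whitespace included, so this script body is only a placeholder.
import base64
import hashlib

inline_script = "window.MathJax = {tex: {inlineMath: [['$', '$']]}};"

digest = hashlib.sha256(inline_script.encode("utf-8")).digest()
token = "sha256-" + base64.b64encode(digest).decode("ascii")

print(f"script-src 'self' '{token}'")
```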
I also need to allow gstatic for something cool: Google Charts. And the wildcard on Algolia's… well I'm not whitelisting their individual domains. Algolia is the search engine that powers the search bar in the topnav, and trust me, it's not tracking you. All it has access to are the characters you type into that box so it knows what to give back in search results.
Fun fact: I allow Cloudflare insights (browser timings) to see if there's something that I can do to improve performance. It's completely aggregate data that cannot be linked to a single person, so feel free to block it if you want; I just want to make sure there isn't some obvious slowdown somewhere. Also note that this policy blocks Google Analytics, which is indeed somewhere in the site's theme files (that I have yet to remove, working on it), but that also means that yeah, there's no GA tracking, because your browser will just complain about a CSP violation until I excise it. Nice.
Also weird… again, MathJax puts style attributes on everything, meaning I need to allow inline styles. Not as bad as inline scripts, but still a little insecure.
And yes, a stylesheet is loaded from the fonts domain. I don't know.
Basically, just allow all images. Because of the last two directives, everything is forced to be HTTPS only, and I don't know ahead of time what sites I might pull images from.
Fonts… self-explanatory.
This allows WebSocket and XMLHttpRequest connections to be made. 'self' is likely not necessary, but a few things here use WS for communication, though they really shouldn't in production. And the Disqus URL is for.. Disqus.
Non-image media, like videos, audio, whatnot. Heck with it, just include it all.
What sites are we allowed to include in a <frame> or <iframe>? Disqus needs it.
What sites can use us in a <frame> or <iframe>? Nobody.
Where to pull a manifest file from. In this case, it's site.webmanifest, which is this:
"name": "TeknikalDomain.me",
"short_name": "TeknikalDomain.me",
"src": "/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
"theme_color": "#1da82d",
"background_color": "#1da82d",
"display": "standalone"
All it really does is tell Android phones how to display everything when someone makes a shortcut to their launcher.
Domains that you can prefetch or prerender content from before it's required, saving a little bit of load time. Disqus.. again.. uses this.
Side note: for a long time, these kept failing because, according to my browser, prefetch-src doesn't exist. I checked, and there's a message stating that "The Content-Security-Policy directive prefetch-src is implemented behind a flag which is currently disabled."
The answer is chrome://flags/#enable-experimental-web-platform-features (for me), which needed to be enabled for the browser to recognize prefetch-src. Luckily content seemed to load fine once it was actually required, but the time-saving prefetch was blocked.
Any resource that has an HTTP URL will be automatically rewritten to HTTPS, no exceptions.
If a resource cannot be loaded over HTTPS, only HTTP, then refuse to load.
I don't actually use cookies on here.. at all. Cloudflare does provide a _cfduid cookie for.. whatever they're doing.
If you're doing cookies, make sure that all cookies are Secure (meaning that they aren't transmitted on a plain HTTP connection), and any backend session cookies are HttpOnly (scripts have no access).
CORS is a mechanism where one domain is permitted to load code or other resources from another domain. Normally, the web browser will block cross-origin requests like this unless the response carries an Access-Control-Allow-Origin header, which specifies which domains are allowed to request a resource. Since I'm not sharing anything, just not adding the header is enough to prevent it.
Certificate Transparency (CT) is, simplified, a public list of issued TLS certificates that can be checked by anyone. An Expect-CT header means that the browser is to cross-reference the CT database, and refuse to accept the certificate if it does not exist.
For example: the header my website sends is:
Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Now, this one is part of Cloudflare, and I don't have access to it. What's interesting is that it lacks an enforce directive, meaning the browser will report a CT failure, but will still load the page. max-age is just the number, in seconds, that the browser should remember to check CT for this domain.
I… don't have one of these yet, and it's still very experimental. Like a CSP, a feature policy is a set of directives with an access list specifying what's allowed to use them. Unlike a CSP, an FP is the set of policies for browser features, such as video autoplay, getting your device's accelerometer data or battery status, or using the microphone. There's an entire list of these, which I won't really get into right now, but suffice it to say that one acts like a CSP for browser features instead of resource types.
Deprecated but I'll still cover it. HPKP was presented in the form of a Public-Key-Pins header, which would specify the SHA-256 hash of the key that the website's certificate was using. You're allowed to specify multiple, and had to specify at least two (one backup) for it to be honored. Since the spec meant that you could pin any key in the chain, pinning your end certificate and then the intermediate certificate as a backup was what a lot of people would do.
The issue with HPKP was that, besides needing to constantly update headers, you could lock everyone out of your site if you pinned keys incorrectly, and HPKP has been deprecated in favor of Expect-CT.
HSTS is a mechanism where the browser is required to transparently redirect any HTTP requests to that domain to HTTPS. The HSTS header, Strict-Transport-Security, specifies up to three things: the max-age, subdomain inclusion, and preloading.
max-age is just the number of seconds for which the browser needs to remember that this site is HSTS. Once this amount of seconds elapses, plain HTTP requests are allowed (usually resulting in a redirect to HTTPS, which resets HSTS). For security, you want a long max-age; I set 31536000. 31,536,000 seconds is equivalent to 525,600 minutes, or 8,760 hours, or 365 days. Yes, you need to not touch my website for an entire year before your browser is even allowed to think about making plain HTTP requests again.
I also send includeSubDomains, meaning that any subdomain of teknikaldomain.me (say, forum.teknikaldomain.me for example) is also included in HSTS.
Most browsers have a built-in (literally hard-coded in the source code) HSTS preload list, which is a list of domains that are known to use HSTS, and that the browser should consider as always HSTS.
The site hstspreload.org can be used to query or submit sites to the list.
There's a few requirements for preloading though:
Your TLS certificate must be valid
HTTP redirects to HTTPS on the same host
All subdomains are HTTPS
HSTS header must specify a minimum time of 1 year, includeSubDomains, and a preload directive, which specifies that the site is requesting inclusion in the list.
You must continue to meet these requirements, or it'll get removed.
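Put together, a preload-ready header would look something like this:
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload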
Also, changes to the list may take a couple of months to propagate: since the list is literally hard-coded into the source, it only updates with every major browser build.
When navigating from one page to another, browsers will send a Referer (sic) header, indicating the URL that they just came from. For privacy and security reasons, this isn't always good practice, which is why we have Referrer-Policy, and it's spelt correctly this time.
RP can take on only a few values, which I'll go through one by one:
no-referrer: Don't send Referer.
no-referrer-when-downgrade (default): HTTPS -> HTTP connections omit Referer, the rest keep it.
origin: Only send the origin (e.g., send http://example.com/ for http://example.com/page.html).
origin-when-cross-origin: Send just the origin when going to a different site, and the entire thing when staying on the same site.
same-origin: Only send header to the same domain, omit on cross-domain.
strict-origin: Only send the origin, and send nothing at all on an HTTPS -> HTTP downgrade.
strict-origin-when-cross-origin: Send everything when staying on the same site, and only the origin to other sites. Nothing is sent on an HTTPS -> HTTP downgrade.
unsafe-url: Send everything, regardless.
I use strict-origin-when-cross-origin, so as long as you're going from one page of teknikaldomain.me to another page of teknikaldomain.me, it'll know what you did. But if you clicked off a link to a different site, all it will know is that you came from teknikaldomain.me, but not exactly where. Additionally, if you're dropping from HTTPS down to a plain HTTP site, no header is sent at all.
The Server header isn't a real security header, but it can be a security risk. Usually the web server that's… serving the content will fill this in, say with Server: nginx or something. Knowing the backend server in use is actually an information leak, since now an attacker knows what you might be vulnerable to. Cloudflare automatically masks this with Server: cloudflare, so it's pretty useless for determining anything.
Any value here really works as long as it does not give away the actual program that's doing the handling, like Apache, Nginx, Caddy, or IIS. You can still use Server to indicate, say, which datacenter a request was routed to, for debugging. Just don't stick program names in there.
The one that… I literally cannot do. For resources, like JavaScript, that are loaded from an external domain, there's no real guarantee that you're loading the correct thing. This is why, for example, the <script> tag has an integrity attribute, which specifies the cryptographic hash of the resource. If the hashes do not match, the browser will refuse to load what it got in response. Essentially, you're saying that this resource, and only this resource, is allowed, regardless of what the name says. The issue is that for SRI to work, resources need the proper CORS header, and of the two external resources I load, neither does. Given that they're both resources from CDNs, the fact that they don't is confusing in its own right. (And why don't browsers reject them, then?!) Regardless, what that means is that I could add an integrity attribute to the scripts, but the browser would refuse them because the resource domain isn't authorizing CORS requests.
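For illustration, here's roughly what I'd ship if the CDNs cooperated (URL and hash made up):
<script src="https://cdn.example.com/library.min.js" integrity="sha384-FAKEHASHFAKEHASHFAKEHASHFAKEHASHFAKEHASH" crossorigin="anonymous"></script>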
Servers usually send a Content-Type header to the browser, telling it the MIME type of what it just requested. For example, asking for test.png will have Content-Type: image/png on it.
XCTO only has one value, nosniff, which means that browsers should, get this, respect the Content-Type header and not try to deduce the content type themselves. The need arose when 'type sniffing' (automatic file type detection) was turning non-executable types into executable types back in the early days of the web.
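The header in full, in its only meaningful form:
X-Content-Type-Options: nosniff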
Either way, why do we need to tell browsers to literally just not tamper with a value they're already given‽
Superseded by CSP's frame-ancestors directive. Essentially, when a browser loads a page through an <embed>, <iframe>, or similar mechanism, it checks the XFO header, if any. I set it to DENY, meaning no attempt to frame the page will succeed, and the browser should refuse to load like that. Alternatively, you can use SAMEORIGIN meaning that you can only be framed by yourself.
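Spelled out, the old and new ways to lock this down look like:
X-Frame-Options: DENY
Content-Security-Policy: frame-ancestors 'none'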
Most modern browsers have a cross-site scripting (XSS) auditor that will attempt to block what it thinks is XSS, for example, someone leaving a comment that is just some <script>.
The header can have a base value of 0 or 1. Zero means to not protect (why?), and one means to ask the browser to enable XSS protection. If anything suspicious is detected, it will be removed from the page. If mode=block is added, then instead of sanitizing it, the browser will just fail the page load, and complain that due to XSS it's not going to.
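So the strictest form you'd send is:
X-XSS-Protection: 1; mode=block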
Like most things, it's a little obsolete now, especially with CSPs in place.
Multi-wavelength Observations of the 2015 Nova in the Local Group Irregular Dwarf Galaxy IC 1613
Williams, SC, Darnley, MJ and Henze, M Multi-wavelength Observations of the 2015 Nova in the Local Group Irregular Dwarf Galaxy IC 1613. Monthly Notices of the Royal Astronomical Society. ISSN 0035-8711 (Accepted)
A nova in the Local Group irregular dwarf galaxy IC 1613 was discovered on 2015 September 10 and is the first nova in that galaxy to be spectroscopically confirmed. We conducted a detailed multi-wavelength observing campaign of the eruption with the Liverpool Telescope, the LCO 2m telescope at Siding Spring Observatory, and Swift, the results of which we present here. The nova peaked at $M_V=-7.93\pm0.08$ and was fast-fading, with decline times of $t_{2(V)}=13\pm2$ and $t_{3(V)}=26\pm2$ days. The overall light curve decline was relatively smooth, as often seen in fast-fading novae. Swift observations spanned 40 days to 332 days post-discovery, but no X-ray source was detected. Optical spectra show the nova to be a member of the hybrid spectroscopic class, simultaneously showing Fe II and N II lines of similar strength during the early decline phase. The spectra cover the eruption from the early optically thick phase, through the early decline and into the nebular phase. The H$\gamma$ absorption minimum from the optically thick spectrum indicates an expansion velocity of $1200\pm200$ km s$^{-1}$. The FWHM of the H$\alpha$ emission line between 10.54 and 57.51 days post-discovery shows no significant evolution and remains at $\sim1750$ km s$^{-1}$, although the morphology of this line does show some evolution. The nova appears close to a faint stellar source in archival imaging; however, we find the most likely explanation for this is simply a chance alignment.
This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society Published by Oxford University Press on behalf of the Royal Astronomical Society.
astro-ph.SR
Astrophysics Research Institute
PrepAnywhere
Chapter Test Max and Min
Calculus and Vectors McGraw-Hill
On the interval 0 \leq x \leq 3, the function f(x) = x^2 - 8x + 16:
A is always increasing
B is always decreasing
C has a local minimum
D is concave down
The graph of f'(x) is shown. Which of these statements is not true for the graph of f(x)?
A It has one turning point.
B It is concave down for all values of x.
C It is increasing for x <2.
D It is decreasing for all values of x.
For a certain function, f'(2)=0 and f'(x) >0 for -1 < x < 2. Which statement is not true?
A (2, f(2)) is a critical point.
B (2, f(2)) is a turning point
C (2, f(2)) is a local minimum
D (2, f(2)) is a local maximum
If f(x) is an odd function and f(a)= 5, then
A f(-a) = 5
B f(-a) = a
C f(-a) = -5
D f(-a) = -a
For the function \displaystyle f(x) = \frac{-3}{(x - 2)^2}, which statement is not true?
A The graph has no x-intercepts
B The graph is concave down for all x for which f(x) is defined.
C f'(x)>0 when x < 2 and f'(x) < 0 when x > 2
D \displaystyle \lim_{x \to 2} f(x) = -\infty
Copy and complete this statement.
Given \displaystyle f'(x) = x(x - 1)^2, the graph of f(x) has ___ critical points and ___ turning points.
The graph of f'(x) is shown. Identify the features on the graph of f(x) at each of points A, B, and C. Be as specific as possible.
Find the absolute extrema for f(x) = x^3-5x^2 + 6x + 2 on the interval 0 \leq x \leq 4.
Copy the graph of the function f(x) into your notebook. Sketch the first and second derivatives on the same set of axes.
Consider the function \displaystyle f(x) = 3x^4 -16x^3 + 18x^2 .
a) How will the function behave as x\to \pm \infty ?
b) Find the critical points and classify them using the second derivative test.
c) Find the locations of the points of inflection.
The cost, in thousands of dollars, to produce x all-terrain vehicles (ATVs) per day is given by the function C(x) = 0.1x^2 + 1.2x + 3.6.
a) Find a function U(x) to represent the cost per unit to produce the ATVs.
b) How many ATVs should be produced per day to minimize the cost per unit?
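A worked sketch (ours): the per-unit cost is \displaystyle U(x) = \frac{C(x)}{x} = 0.1x + 1.2 + \frac{3.6}{x}, so \displaystyle U'(x) = 0.1 - \frac{3.6}{x^2} = 0 gives x^2 = 36, i.e. x = 6. Since \displaystyle U''(x) = \frac{7.2}{x^3} > 0 there, producing 6 ATVs per day minimizes the cost per unit.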
Consider the function \displaystyle y = x^2 + \frac{1}{x^2} .
a) Identify the vertical asymptote.
b) Find and classify the critical points.
c) Identify the intervals of concavity.
d) Sketch the graph.
The graph shows the derivative, f'(x), of a function f(x).
a) Which is greater?
i) f'(0) or f'(1)
ii) f(-1) or f(3)
iii) f(5) or f(10)
b) Sketch a possible graph of f(x).
The function g(x) = \frac{1}{(x -a)^2} has vertical asymptote x = a. Without graphing, explain how you know how the graph will behave near x = a.
A hotel chain typically charges $120 per room and rents an average of 40 rooms per night at this rate. They have found that for each $10 reduction in price, they rent an average of 10 more rooms.
a) Find the rate they should be charging to maximize revenue.
b) How does this change if the hotel only has 50 rooms?
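A worked sketch for part a) (ours): let x be the number of $10 reductions, so the revenue is \displaystyle R(x) = (120 - 10x)(40 + 10x) = 4800 + 800x - 100x^2, and R'(x) = 800 - 200x = 0 gives x = 4: a rate of $80 with 80 rooms rented. For part b), a 50-room hotel forces 40 + 10x \leq 50, i.e. x \leq 1; since R is increasing there, the best is x = 1, a rate of $110 with all 50 rooms filled.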
a) For a given perimeter, what shape of rectangle encloses the most area?
b) For a given perimeter, what type of triangle encloses the most area?
c) Which shape would enclose more area for a given perimeter: a pentagon or an octagon? Explain your reasoning.
d) What two-dimensional shape would enclose the maximum area for a given perimeter?
In a certain region, the number of bushels of corn per acre, B, is given by the function
B(n) = -0.1n^2 + 10n, where n represents the number of seeds, in thousands, planted per acre.
a) What number of seeds per acre yields the maximum number of bushels of corn?
b) If corn sells for $3/bushel and costs $2 for 1000 seeds, find the optimal number of seeds to plant per acre.
An isosceles triangle is to have a perimeter of 64 cm. Determine the side lengths of the triangle if the area is to be a maximum.
On conjectures of Stenger in the theory of orthogonal polynomials
Walter Gautschi & Ernst Hairer
Journal of Inequalities and Applications, volume 2019, Article number 159 (2019)
The conjectures in the title deal with the zeros \(x_{j}\), \(j=1,2, \ldots ,n\), of an orthogonal polynomial of degree \(n>1\) relative to a nonnegative weight function w on an interval \([a,b]\) and with the respective elementary Lagrange interpolation polynomials \(\ell _{k} ^{(n)}\) of degree \(n-1\) taking on the value 1 at the zero \(x_{k}\) and the value 0 at all the other zeros \(x_{j}\). They involve matrices of order n whose elements are integrals of \(\ell _{k}^{(n)}\), either over the interval \([a,x_{j}]\) or the interval \([x_{j},b]\), possibly containing w as a weight function. The claim is that all eigenvalues of these matrices lie in the open right half of the complex plane. This is proven to be true for Legendre polynomials and a special Jacobi polynomial. Ample evidence for the validity of the claim is provided for a variety of other classical, and nonclassical, weight functions when the integrals are weighted, but not necessarily otherwise. Even in the case of weighted integrals, however, the conjecture is found by computation to be false for a piecewise constant positive weight function. Connections are mentioned with the theory of collocation Runge–Kutta methods in ordinary differential equations.
Let w be a nonnegative weight function on \([a,b]\), \(-\infty \leq a< b \leq \infty \), and \(p_{n}\) be the orthonormal polynomial of degree n relative to the weight function w. Let \(\{x_{j}\}_{j=1}^{n}\) be the zeros of \(p_{n}\) and
$$ \ell _{k}^{(n)}(x)=\prod _{\stackrel{1\leq j\leq n}{j\neq k}} \frac{x-x _{j}}{x_{k}-x_{j}}, \quad k=1,2,\ldots ,n, $$
the elementary Lagrange interpolation polynomial of degree \(n-1\) having the value 1 at \(x_{k}\) and 0 at all the other zeros \(x_{j}\). The Stenger conjectures relate to the eigenvalues of matrices of order n whose elements are certain integrals involving the elementary Lagrange polynomials (1), the claim being that the real part of all eigenvalues is positive. We distinguish between the restricted Stenger conjecture [8, §2.3, Remark 2.2], in which the matrices are
$$ \begin{aligned} &U_{n}=\bigl[u_{jk}^{(n)} \bigr], \quad u_{jk}^{(n)}= \int _{a}^{x_{j}}\ell _{k} ^{(n)}(x) \,\mathrm{d}x, \\ &V_{n}=\bigl[v_{jk}^{(n)}\bigr], \quad v_{jk}^{(n)}= \int _{x_{j}}^{b}\ell _{k} ^{(n)}(x)\, \mathrm{d}x, \end{aligned}\quad j, k=1,2,\ldots ,n, $$
and the extended Stenger conjecture (called "new conjecture" in [8, §2.4]), in which the matrices are
$$ \begin{aligned} &U_{n}=\bigl[u_{jk}^{(n)} \bigr], \quad u_{jk}^{(n)}= \int _{a}^{x_{j}}\ell _{k} ^{(n)}(x)w(x)\,\mathrm{d}x, \\ &V_{n}=\bigl[v_{jk}^{(n)}\bigr], \quad v_{jk}^{(n)}= \int _{x_{j}}^{b}\ell _{k} ^{(n)}(x)w(x)\,\mathrm{d}x, \end{aligned} \quad j, k=1,2,\ldots ,n, $$
where w is assumed to be positive a.e. on \([a,b]\). (For the fact that this assumption is essential, see Sects. 7 and 8.) Thus, in the latter conjecture the elements of \(U_{n}\), \(V_{n}\) depend on the weight function w not only through the polynomials \(\ell _{k}^{(n)}\), but also by virtue of w being part of the integration process. Note that, unlike for the extended conjecture, the restricted conjecture requires \([a,b]\) to be a finite interval, at least for one of the two matrices \(U_{n}\), \(V_{n}\).
We also note that the order in which the \(x_{j}\) are arranged is immaterial since a permutation of \(j=\{ 1,2,3,\ldots ,n\}\) implies the same permutation of \(k=\{1,2,3,\ldots ,n\}\), which amounts to a similarity transformation of \(U_{n}\) resp. \(V_{n}\), and therefore leaves the eigenvalues unchanged.
The weight function \(w(x)=1\) on \([-1,1]\) is special in the sense that the extended conjecture is the same as the restricted one and will be simply called the Stenger conjecture. Its proof will be given in Sect. 4. In Sect. 2 we will prove that the eigenvalues of \(U_{n}\) and \(V_{n}\) in the restricted as well as in the extended Stenger conjecture are the same if w is a symmetric weight function. In Sect. 3 we show that, both in the restricted and extended conjecture, the matrix \(U_{n}^{(\alpha , \beta )}\) belonging to the Jacobi weight function \(w(x)=(1-x)^{\alpha }(1+x)^{\beta }\) on \([-1,1]\) with parameters α, β is the same as the matrix \(V_{n}^{(\beta ,\alpha )}\) with the Jacobi parameters interchanged. Section 5, devoted to the restricted Stenger conjecture, shows, partly by numerical computation, that the conjecture may be true for large classes of weight functions, but can also be false for other classes of weight functions. In contrast, Sect. 6 provides ample computational support for the validity of the extended Stenger conjecture for a variety of classical and nonclassical weight functions. Discrete weight functions are considered in Sect. 7. In Sect. 8 the extended Stenger conjecture is challenged in the case of a piecewise constant positive weight function. Related work on collocation Runge–Kutta methods is mentioned in the Appendix.
Symmetric weight functions
We assume here the weight function \(w(x)\) to be symmetric, i.e., \(w(-x)=w(x)\) on \([-b,b]\), \(0< b\leq \infty \), and the zeros \(x_{j}\) of the corresponding orthonormal polynomial \(p_{n}\) ordered increasingly:
$$ -b< x_{1}< x_{2}< \cdots < x_{n}< b. $$
We then have, by symmetry,
$$ x_{j}+x_{n+1-j}=0, \quad j=1,2,\ldots ,n. $$
Theorem 1
If w is symmetric, the eigenvalues of \(V_{n}\) are the same as those of \(U_{n}\), both in the case of the restricted (where \(b<\infty \)) and the extended Stenger conjecture.
We present the proof for the extended conjecture, the one for the restricted conjecture being the same (just drop the factor \(w(t)\) in all integrals). From the definition of \(V_{n}\) in (3), we have
$$ v_{jk}= \int _{x_{j}}^{b} \ell _{k}^{(n)}(x)w(x) \,\mathrm{d}x= \int _{-b} ^{-x_{j}} \ell _{k}^{(n)}(-t)w(t) \,\mathrm{d}t, $$
and, therefore, by (4),
$$ v_{jk}= \int _{-b}^{x_{n+1-j}} \ell _{k}^{(n)}(-t)w(t) \,\mathrm{d}t. $$
Since \(\ell _{k}^{(n)}(-t)=1\) if \(-t=x_{k}\), that is, \(t=-x_{k}=x_{n+1-k}\), and \(\ell _{k}^{(n)}(-t)=0\) if \(t=x_{j}\), \(j\neq n+1-k\), we get
$$ v_{jk}= \int _{-b}^{x_{n+1-j}} \ell _{n+1-k}^{(n)}(x)w(x) \,\mathrm{d}x, $$
thus, by (3) (with \(a=-b\)),
$$ v_{jk}=u_{n+1-j,n+1-k}. $$
In matrix form, this can be written as
$$ V_{n}=P_{n}U_{n}P_{n}, $$

where \(P_{n}=[\delta _{i,n+1-j}]\) is the permutation matrix with ones on the anti-diagonal (so that \(P_{n}^{2}=I\)), which is a similarity transformation of \(U_{n}\). Hence, \(V_{n}\) and \(U_{n}\) have the same eigenvalues. □
Jacobi weight functions
In this section we look at Jacobi weight functions
$$ w^{(\alpha ,\beta )}(x)=(1-x)^{\alpha }(1+x)^{\beta }\quad \text{on } [-1,1], $$
where α, β are greater than −1.
Switching Jacobi parameters has the effect of turning a U-matrix into a V-matrix and vice versa. More precisely, we have the following.
Theorem 2

Let \(U_{n}^{(\alpha ,\beta )}\) be the matrix \(U_{n}\) for Jacobi polynomials with parameters α, β, and \(V_{n}^{(\beta ,\alpha )}\) be the matrix \(V_{n}\) for Jacobi polynomials with parameters β, α. Then
$$ U_{n}^{(\alpha ,\beta )}=V_{n}^{(\beta ,\alpha )}, $$
both in the restricted and extended Stenger conjecture.
We give the proof for the restricted Stenger conjecture. It is the same for the extended conjecture, using \(w^{(\alpha ,\beta )} (-x)=w^{(\beta ,\alpha )}(x)\).
We denote quantities x related to Jacobi parameters α, β by \(x^{*}\) after interchange of the parameters. Since the Jacobi polynomial satisfies \(P_{n}^{(\alpha ,\beta )}(x)=(-1)^{n} P _{n}^{(\beta ,\alpha )}(-x)\) (cf. [9, Eq. (4.1.3)]), we can take \(x_{j}^{*}=x_{j}^{(\beta ,\alpha )}=-x_{j}=-x_{j}^{(\alpha ,\beta )}\) for the zeros of \(P_{n}^{(\beta ,\alpha )}\). Noting that
$$ \ell _{k}^{(n)}(x;\alpha ,\beta )=\prod _{j\neq k}\frac{x-x_{j}}{x_{k}-x _{j}} =-\prod_{j\neq k} \frac{x+x_{j}^{*}}{x_{k}^{*}-x_{j}^{*}}= \prod_{j\neq k} \frac{(-x)-x_{j}^{*}}{x_{k}^{*}-x_{j}^{*}}=\ell _{k} ^{(n)}(-x;\beta ,\alpha ), $$
we get
$$ u_{jk}^{(\alpha ,\beta )}= \int _{-1}^{x_{j}} \ell _{k}^{(n)}(t; \alpha , \beta )\,\mathrm{d}t = \int _{-1}^{x_{j}}\ell _{k}^{(n)}(-t; \beta ,\alpha )\,\mathrm{d}t= \int _{x_{j}^{*}}^{1} \ell _{k}^{(n)}(x; \beta ,\alpha ) \,\mathrm{d}x=v_{jk}^{(\beta ,\alpha )}. $$
Proof of the Stenger conjecture for Legendre polynomials
By virtue of Theorem 1, it suffices to consider the matrix \(U_{n}\).
Let \(\lambda \in {\mathbb{C}}\) be an eigenvalue of \(U_{n}\) and \(y=[y_{1},y_{2},\ldots ,y_{n}]^{T}\in {\mathbb{C}}^{n}\) be a corresponding eigenvector,
$$ U_{n} y=\lambda y, \quad y\neq [0,0,\ldots ,0]^{T}, $$
that is,

$$ \int _{-1}^{x_{i}} \Biggl( \sum _{j=1}^{n} \ell _{j}^{(n)}(x)y_{j} \Biggr) \,\mathrm{d}x =\lambda y_{i}, \quad i=1,2,\ldots ,n. $$
Let \(y(x)\in {\mathbb{P}}_{n-1}\) be the unique polynomial of degree \(\leq n-1\) interpolating to \(y_{j}\) at \(x_{j}\), \(j=1,2,\ldots ,n\). By the Lagrange interpolation formula and (8), we then have
$$ \int _{-1}^{x_{i}} y(t)\,\mathrm{d}t=\lambda y(x_{i}), \quad i=1,2, \ldots ,n. $$
With \(w_{i}\), \(i=1,2,\ldots ,n\), denoting the weights of the n-point Gauss–Legendre quadrature formula, multiply (9) by \(w_{i}\overline{y(x_{i})}\) and sum over i to get
$$ \sum_{i=1}^{n} w_{i} \overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) \,\mathrm{d}t =\lambda \sum _{i=1}^{n} w_{i} \bigl\vert y(x_{i}) \bigr\vert ^{2}. $$
Since \(\overline{y(x)}\int _{-1}^{x} y(t)\,\mathrm{d}t\) is a polynomial of degree \(2n-1\), and n-point Gauss quadrature is exact for any such polynomial, and since \(|y(x)|^{2}\) is a polynomial of degree \(2n-2\), we have
$$ \int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x =\lambda \int _{-1}^{1} \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x. $$
Integration by parts on the left yields the identity
$$ \int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x + \int _{-1}^{1} y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x = \biggl\vert \int _{-1}^{1} y(t)\,\mathrm{d}t \biggr\vert ^{2}. $$
The real part of the left-hand side of (10) is
$$ \frac{1}{2} \biggl[ \int _{-1}^{1} \overline{y(x)} \biggl( \int _{-1} ^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x + \int _{-1}^{1} y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x \biggr], $$
which, by (11), equals \(\frac{1}{2} \vert \int _{-1}^{1} y(t) \,\mathrm{d}t \vert ^{2}\). Therefore, taking the real part on the right of (10) yields
$$ \operatorname{Re}\lambda \int _{-1}^{1} \bigl\vert y(x) \bigr\vert ^{2}\,\mathrm{d}x= \frac{1}{2} \biggl\vert \int _{-1}^{1} y(t)\,\mathrm{d}t \biggr\vert ^{2}. $$
From this, it follows that \(\operatorname{Re}\lambda \geq 0\).
To prove strict positivity of Reλ, we have to show that the integral on the right of (12) does not vanish. To do this, we look at \(\int _{-1}^{x} y(t)\,\mathrm{d}t-\lambda y(x)\), which is a polynomial of degree n vanishing at \(x_{i}\), \(i=1,2,\ldots ,n\), by (9). Therefore,
$$ \int _{-1}^{x} y(t)\,\mathrm{d}t-\lambda y(x)= \mathrm{const}\, P_{n}(x), $$
where \(P_{n}\) is the Legendre polynomial of degree n. We now multiply (13) by \((1-x)^{k-1}\), \(1\leq k\leq n\), and integrate over \([-1,1]\). Then, by orthogonality, we get
$$ \int _{-1}^{1} (1-x)^{k-1} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr) \, \mathrm{d}x = \lambda \int _{-1}^{1} (1-x)^{k-1} y(x)\, \mathrm{d}x. $$
On the left, integrating by parts, letting
$$ \begin{aligned} &u(x)= \int _{-1}^{x} y(t)\,\mathrm{d}t, \qquad v^{\prime }(x)=(1-x)^{k-1}, \\ &u^{\prime }(x)=y(x), \qquad v(x)= \int _{1}^{x} (1-t)^{k-1}\, \mathrm{d}t=-(1-x)^{k}/k , \end{aligned} $$
and noting that \(u(-1)=v(1)=0\), we get
$$ \int _{-1}^{1} \frac{(1-x)^{k}}{k} y(x)\,\mathrm{d}x= \lambda \int _{-1} ^{1} (1-x)^{k-1} y(x)\, \mathrm{d}x, \quad 1\leq k\leq n. $$
Now suppose that \(\int _{-1}^{1} y(x)\,\mathrm{d}x=0\). Then (14) for \(k=1\) implies that \(y(x)\) is orthogonal to all linear functions. Putting \(k=2\) in (14) then implies orthogonality of \(y(x)\) to all quadratic functions. Proceeding in this manner up to \(k=n-1\), we conclude that \(y(x)\) is orthogonal to all polynomials of degree \(n-1\), in particular to itself, so that \(\int _{-1}^{1} y^{2}(x)\,\mathrm{d}x =0\), hence \(y(x)\equiv 0\). This contradicts (7). Thus, by (12), \(\operatorname{Re}\lambda >0\). □
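As a small independent sanity check (ours, not part of the paper): for \(n=2\) the Legendre zeros are \(x_{1,2}=\mp 1/\sqrt{3}\), and the integrals in (2) can be done by hand, giving

$$ U_{2}= \begin{pmatrix} \frac{1}{2} & \frac{1}{2}-\frac{1}{\sqrt{3}} \\[4pt] \frac{1}{2}+\frac{1}{\sqrt{3}} & \frac{1}{2} \end{pmatrix}, \qquad \lambda _{1,2}=\frac{1}{2}\pm \frac{i}{2\sqrt{3}}, $$

so both eigenvalues have real part \(1/2>0\), in agreement with the theorem.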
The restricted Stenger conjecture
Proof of the restricted Stenger conjecture for a special Jacobi polynomial
Here we consider the weight function \(w(x)=1-x\) on \([-1,1]\), that is, the Jacobi weight function \((1-x)^{\alpha }(1+x)^{\beta }\) with parameters \(\alpha =1\), \(\beta =0\), and denote by \(x_{i}\), \(i=1,2, \ldots ,n\), the zeros of the Jacobi polynomial \(P_{n}^{(1,0)}\) and by \(U_{n}\) the matrix in (2) formed with these zeros \(x_{i}\). As is well known, the \(x_{i}\) are the internal nodes of the \((n+1)\)-point Gauss–Radau quadrature formula
$$ \int _{-1}^{1} f(x)\,\mathrm{d}x=\sum _{i=1}^{n} w_{i} f(x_{i})+w_{n+1}f(x _{n+1}), \quad f\in {\mathbb{P}}_{2n}, $$
where \(x_{n+1}=1\).
Let again \(\lambda \in {\mathbb{C}}\) be an eigenvalue of \(U_{n}\) and \(y=[y_{1},y_{2},\ldots ,y_{n}]\in {\mathbb{C}}^{n}\) be a corresponding eigenvector, and \(y(x)\) be as defined in Sect. 4. Multiplying (9) now by \(w_{i}(1-x_{i}) \overline{y(x_{i})}\) and summing over \(i=1,2,\ldots ,n+1\), we obtain
$$ \sum_{i=1}^{n+1} w_{i}(1-x_{i}) \overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) \,\mathrm{d}t=\lambda \sum _{i=1}^{n+1} w_{i}(1-x_{i}) \bigl\vert y(x_{i}) \bigr\vert ^{2}. $$
(The last term in the sums on the left and right, of course, is zero.) Therefore, by (15), since \((1-x)\overline{ y(x)}\int _{-1} ^{x} y(t)\,\mathrm{d}t\) is a polynomial of degree \(\leq 2n\) and \((1-x)|y(x)|^{2}\) a polynomial of degree \(\leq 2n-1\),
$$ \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) \,\mathrm{d}t \biggr)\, \mathrm{d}x=\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x. $$
$$ \begin{aligned}[b] & \frac{1}{2} \biggl[ \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr)\, \mathrm{d}x+ \int _{-1}^{1} (1-x)y(x) \biggl( \int _{-1}^{x} \overline{y(t)}\,\mathrm{d}t \biggr) \,\mathrm{d}x \biggr] \\ &\quad =\frac{1}{2} \int _{-1}^{1} (1-x)\frac{\mathrm{d}}{\mathrm{d}x} \biggl\vert \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x, \end{aligned} $$
having used the product rule of differentiation on the right. Integration by parts then yields
$$ \frac{1}{2} \int _{-1}^{1} \biggl\vert \int _{-1}^{x} y(t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x= \operatorname{Re}\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2} \,\mathrm{d}x. $$
Since the integral on the right is positive, and so is the integral on the left, there follows \(\operatorname{Re}\lambda >0\). □
It may be thought that the same kind of proof might work also for Jacobi weight functions with parameters \(\alpha =0\), \(\beta =1\), or \(\alpha =\beta =1\) using Gauss–Radau quadrature with fixed node −1 or Gauss–Lobatto quadrature, respectively. The last step in the proof (integration by parts of the integral on the right of (17)), however, fails to produce the desired conclusion, the first factor in that integral being \(1+x\), resp. \(1-x^{2}\).
A counterexample
The simplest counterexample we came across involves a Gegenbauer polynomial of small degree.
Counterexample
$$ p_{n}(x)=C_{n}^{(\alpha )}(x), \quad n=5, \alpha =10, $$
where \(C_{n}^{(\alpha )}\) is the Gegenbauer polynomial of degree n.
From [1, Eq. 22.3.4] one finds
$$ C_{5}^{(\alpha )}(x)=\alpha (\alpha +1) (\alpha +2) x \biggl[ \frac{4}{15} (\alpha +3) (\alpha +4) x^{4}-\frac{4}{3} ( \alpha +3) x^{2}+1 \biggr]. $$
One zero of \(C_{5}^{(\alpha )}\), of course, is 0, while the other four are the zeros of the polynomial P in brackets. When \(\alpha =10\), one finds
$$ P(x)=\frac{1}{3} \biggl( \frac{728}{5} x^{4}-52 x^{2}+3 \biggr). $$
This is a quadratic polynomial in \(x^{2}\), the zeros of which could be found explicitly. However, we proceed computationally, using Matlab, since eventually, to obtain eigenvalues, one has to compute anyway.
The Matlab routine doing the computations is counterex.m. It computes the elements of \(U_{n}\) in (2) (where \(n=5\)) exactly by 3-point Gauss–Legendre quadrature of the last integral in
$$ u_{jk}= \int _{-1}^{x_{j}} \ell _{k}^{(5)}(x) \,\mathrm{d}x=\frac{1}{2} (1+x _{j}) \int _{-1}^{1} \ell _{k}^{(5)} \biggl(\frac{1}{2} (1+x_{j}) t - \frac{1}{2} (1-x_{j}) \biggr)\,\mathrm{d}t $$
and uses a routine lagrange.m for calculating the elementary Lagrange interpolation polynomials as well as the OPQ routines r_jacobi.m, gauss.m. For the latter, see [4, pp. 301, 304].
The output, showing the five eigenvalues d of \(U_{5}\), is
>> counterex
0.431796388637445 + 0.000000000000000i
0.285123529721968 + 0.272861054932517i
0.285123529721968 - 0.272861054932517i
-0.001021724040688 + 0.286723270044925i
-0.001021724040688 - 0.286723270044925i
The last pair of eigenvalues has negative real part, disproving, at least computationally, the restricted Stenger conjecture. The extended conjecture, however, seems to be valid for this example; see Sect. 6.2, Example 1.
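For readers without the authors' Matlab routines at hand, a rough Python re-implementation (our sketch, not part of the paper; it assumes SciPy's roots_gegenbauer for the Gegenbauer zeros and, as in the mapped integral above, uses exact 3-point Gauss–Legendre quadrature) reproduces these eigenvalues:

# Check the restricted Stenger conjecture for C_5^(10) (our sketch).
import numpy as np
from scipy.special import roots_gegenbauer

n, alpha = 5, 10.0
x, _ = roots_gegenbauer(n, alpha)  # zeros of the Gegenbauer polynomial

def ell(k, t):
    # elementary Lagrange polynomial ell_k^(n), evaluated at the points t
    num = np.prod([t - x[j] for j in range(n) if j != k], axis=0)
    den = np.prod([x[k] - x[j] for j in range(n) if j != k])
    return num / den

# ell_k has degree n-1 = 4, so 3-point Gauss-Legendre integrates it exactly
tg, wg = np.polynomial.legendre.leggauss(3)

U = np.empty((n, n))
for j in range(n):
    s = 0.5 * (1 + x[j]) * tg - 0.5 * (1 - x[j])  # map [-1,1] -> [-1, x_j]
    for k in range(n):
        U[j, k] = 0.5 * (1 + x[j]) * np.dot(wg, ell(k, s))

print(np.linalg.eigvals(U))  # one conjugate pair should have real part ~ -0.00102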
Conjectures
The counterexample in Sect. 5.2 is symptomatic for more general counterexamples, not only regarding Gegenbauer, but also many other weight functions. They are formulated here as separate conjectures, all firmly rooted in computational evidence.
Gegenbauer polynomials
Conjecture 5.1
The restricted Stenger conjecture for \(U_{n}\) (and, by Theorem 1, also for \(V_{n}\)) is true for all Gegenbauer polynomials \(C_{n}^{(\alpha )}\) with \(2\leq n\leq 4\), but for \(n\geq 5\) it is true only for \(\alpha >-1\) up to some \(\alpha _{n}>1\).
The routine Uconj_restr_jac.m evaluates the matrix \(U_{n}\) (for Jacobi polynomials) in Matlab double-precision arithmetic and its eigenvalues in 32-digit variable-precision arithmetic. Since the eigenvalues become more ill-conditioned as n increases, we first make sure that they are accurate to at least four significant decimal digits by running the routine entirely in 32-digit arithmetic for selected values of α (and also of β) in \((-1,1]\) and selected values of n, using the routine sUconj_restr_jac.m, and comparing the results with those obtained in double precision.
Conjecture 5.1 has then been confirmed for all \(\alpha = -0.9:0.1:10\), using the routine run_Uconj_restr_jac.m. Estimates of \(\alpha _{n}\) have been obtained by a bisection-type procedure and are shown in Table 1. They are "estimates" in the sense that the conjecture is true for \(\alpha \leq \alpha _{n}\), but false for \(\alpha =\alpha _{n}+0.001\).
Table 1 Estimates of \(\alpha _{n}\), \(n=5:5:40\)
It appears that \(\alpha _{n}\) converges monotonically down to 1 as \(n\rightarrow \infty \).
Jacobi polynomials
Conjecture 5.2

The restricted Stenger conjecture for \(U_{n}\) holds true in the case of Jacobi polynomials \(P_{n}^{(\alpha ,\beta )}\) for all \(n>1\) if \(-1<\alpha ,\beta \leq 1\), but not necessarily otherwise.
The positive part of the conjecture has been confirmed for \([\alpha , \beta ] =-0.9:0.1:1\), and in each case for \(n=2:40\), using the routine run_Uconj_restr_jac.m. The negative part follows from Conjecture 5.1, Table 1 (if true). By Theorem 2, the same conjecture can be made for the matrix \(V_{n}\).
Algebraic/logarithmic weight functions
Here we first examine weight functions of the type
$$ w_{\alpha }(x)=x^{\alpha }\log (1/x) \quad \text{on } [0,1] \text{ with } \alpha >-1. $$
Conjecture 5.3

For the matrix \(U_{n}\), the restricted Stenger conjecture holds true in the case of the weight function (20) for all \(n>1\) if \(-1<\alpha \leq \alpha _{1}\), where \(1<\alpha _{1}<2\), but not necessarily otherwise. For the matrix \(V_{n}\), in contrast, the conjecture is true for all \(\alpha >-1\).
In order to compute the zeros \(x_{j}\) of the required orthogonal polynomials (needed to obtain the Lagrange polynomials \(\ell _{k}^{(n)}\)) for degrees \(2\leq n\leq 40\) and arbitrary \(\alpha >-1\), we need a routine that generates the respective recurrence coefficients for the orthogonal polynomials. This can be done by applying a multicomponent discretization procedure, using appropriate quadrature rules to discretize the integral \(\int _{0}^{1} f(x)x^{\alpha }\log (1/x) \,\mathrm{d}x\), where f is a polynomial of degree \(\leq 2n-1\). It was found to be helpful to split the integral in two integrals, one extended from 0 to ξ, and the other from ξ to 1, \(0<\xi <1\), and use ξ to optimize the rate of convergence (that is, to minimize the parameter Mcap in the discretization routine mcdis.m). Using obvious changes of variables, one finds
$$\begin{aligned}& \begin{aligned}[b] \int _{0}^{\xi }f(x)x^{\alpha }\log (1/x)\, \mathrm{d}x&=\xi ^{\alpha +1} \biggl[ \log (1/\xi ) \int _{0}^{1} f(t\xi )t^{\alpha }\, \mathrm{d}t \\ &\quad{} +\frac{1}{(1+\alpha )^{2}} \int _{0}^{\infty }f\bigl(\xi \mathrm{e}^{-t/(1+\alpha )} \bigr) t\mathrm{e}^{-t}\,\mathrm{d}t \biggr], \end{aligned} \end{aligned}$$
$$\begin{aligned}& \int _{\xi }^{1} f(x)x^{\alpha }\log (1/x)\, \mathrm{d}x=(1-\xi ) \int _{0} ^{1} f\bigl(x(t)\bigr)\bigl[x(t) \bigr]^{\alpha }\log \bigl(1/x(t)\bigr)\,d t, \end{aligned}$$
where in (22), \(x(t)=(1-\xi )t+\xi \) maps the interval \([0,1]\) onto \([\xi ,1]\). In (21), the first integral on the right can be discretized (without error) by n-point Gauss–Jacobi quadrature on \([0,1]\) with Jacobi parameters 0 and α, and the second integral (with small error) by sufficiently high-order generalized Gauss–Laguerre quadrature with Laguerre parameter 1. The integral in (22) can be discretized by sufficiently high-order Gauss–Legendre quadrature on \([0,1]\). For the optimal ξ, one can use, as found empirically (using the routine run_r_alglog1.m),
$$ \xi =\textstyle\begin{cases} [1+10(\alpha +0.9)]/1000 & \text{if } -0.9\leq \alpha \leq 1, \\ 0.02 & \text{if } \alpha >1. \end{cases} $$
This is implemented in the routine r_alglog1.m.
The routine sUconj_restr_log1.m, run with \(\mathtt{dig} =32\), generates the matrix \(U_{n}\) and its eigenvalues in 32-digit arithmetic. It relies on the global \(n\times 2\) arrays ab and ableg containing the first n recurrence coefficients of the (monic) orthogonal polynomials relative to the weight functions \(w_{\alpha }\) and 1, respectively (both supported on \([0,1]\)). The array ab, when \(\alpha =-1/2,0,1/2,1,2\), is available, partly in [5, 2.3.1, 2.4.1, 2.4.3], to 32 digits for n at least as large as 100, whereas ableg can easily be generated by the routine sr_jacobi01.m. For these five values of α, we can therefore produce reference values to high precision for the eigenvalues of \(U_{n}\).
The Matlab double-precision routine Uconj_restr_log1.m, also run with \(\mathtt{dig} =32\), generates the matrix \(U_{n}\) in double-precision arithmetic and the eigenvalues in 32-digit arithmetic for arbitrary values of \(\alpha >-1\), its global array ab being produced by the routine r_alglog1.m. When the eigenvalues so obtained are compared with the reference values, for the above five values of α, it is found that for \(n\leq 40\) they all are accurate to at least four decimal digits (cf. test_Uconj_restr_log1.m). This provides us with some confidence that the routine Uconj_restr_log1.m, when \(n\leq 40\), will produce eigenvalues to the same accuracy, also when α is arbitrary in the range from \(-1/2\) to 2.
The routine run_Uconj_restr_log1.m validates the restricted Stenger conjecture for the matrix \(U_{n}\) when \(\alpha =-1/2,0,1/2,1\), at least for all n between 2 and 40, but refutes it when \(\alpha =2\) and \(n=8\), producing a pair of eigenvalues with negative real part \(-1.698\ldots (-3)\). This provides some indication that Conjecture 5.3 for the matrix \(U_{n}\) may be valid. We strengthen this expectation by running the routine for additional values of α, and at the same time try to estimate the value of \(\alpha _{1}\) in dependence of n by applying a bisection-type procedure. It is found that, when \(n\leq 40\), Conjecture 5.3 for \(U_{n}\) is true with \(\alpha _{1}\) as shown in Table 2.
Table 2 The values of \(\alpha _{1}\) in Conjecture 5.2 in dependence of n
It appears that \(\alpha _{1}\) is monotonically decreasing. Since it is bounded below by 1, it would then have to converge to a limit value (perhaps =1).
The routines dealing with the matrix \(V_{n}\) are Vconj_restr_log1.m and run_Vconj_restr_log1.m. They validate Conjecture 5.3 for the matrix \(V_{n}\) when \(\alpha =-1/2,0,1/2,1,2,5,10\), in each case for \(2\leq n\leq 40\).
For illustration, the eigenvalues of \(U_{n}\) are shown in Fig. 1 for \(\alpha =0\) and \(n=10,20,40\), and those of \(V_{n}\) in Fig. 2 for the same α and n.
Eigenvalues of the matrix \(U_{n}\) for a logarithmic weight function and \(n=10,20,40\) (from left to right)
Eigenvalues of the matrix \(V_{n}\) for a logarithmic weight function and \(n=10,20,40\) (from left to right)
For the weight function
$$ w(x)=x^{\alpha }\log ^{2}(1/x) \quad \text{on } [0,1], \text{with } \alpha >-1, $$
our conjecture for \(U_{n}\) is the same as the one in Conjecture 5.3, but not so for \(V_{n}\).
Conjecture 5.4

For the matrix \(U_{n}\), the restricted Stenger conjecture holds true in the case of the weight function (23) for all \(n>1\) if \(-1<\alpha < \alpha _{2}\), where \(\alpha _{2}\) is a number between 1 and 2, but not necessarily otherwise. For the matrix \(V_{n}\), the conjecture is false for all \(\alpha >-1\).
The routines used to make this conjecture are the same as those used for Conjecture 5.3 but with "log1" replaced by "log2". The statements regarding the matrix \(U_{n}\) are arrived at in the same way as in Conjecture 5.3, the values of \(\alpha _{2}\) now being as shown in Table 3.
With regard to \(V_{n}\), the conjecture is found to be false for \(\alpha =-1/2,0,1/2, 1,2,5\) and \(n=7\) in each case, there being a single pair of conjugate complex eigenvalues with negative real part.
We illustrate by showing in Fig. 3 the eigenvalues of \(U_{n}\) for \(\alpha =0\) and \(n=10,20,40\).
Eigenvalues of the matrix \(U_{n}\) for a square-logarithmic weight function and \(n=10,20,40\) (from left to right)
Laguerre and generalized Laguerre weight functions
For generalized Laguerre weight functions
$$ w(x)=x^{\alpha }\mathrm{e}^{-x} \quad \text{on } [0,\infty ], \alpha >-1, $$
it only makes sense to look at the U-conjecture.
Conjecture 5.5

For the matrix \(U_{n}\), the restricted Stenger conjecture is true in the case of the weight function (24) for all \(n>1\) if \(-1<\alpha \leq \alpha _{0}\), where \(1<\alpha _{0}<2\), but not necessarily otherwise.
The routines written for this conjecture are Uconj_restr_lag.m and run_Uconj_restr_lag.m. The latter, run for \(\alpha =-0.9:0.1:2\), \(n=2:40\), confirms the conjecture up to, and including, \(\alpha =1.2\), but refutes it when \(\alpha =1.3\) and \(n=40\), producing a single pair of conjugate complex eigenvalues with negative real part. The case \(\alpha =1.3\) was checked by running the routine run_sUconj_restr_lag.m in 32-digit arithmetic, which produced eigenvalues agreeing with those obtained in double precision to at least 12 digits. (This check may take as many as five hours to run.) A bisection-type procedure, run in double precision, yields the values of \(\alpha _{0}\) shown in Table 4 in dependence of n.
Figure 4 shows the eigenvalues of \(U_{n}\) when \(\alpha =0\) and \(n=10,20,40\).
Eigenvalues of the matrix \(U_{n}\) for the Laguerre weight function and \(n=10,20,40\) (from left to right)
The extended Stenger conjecture
To avoid extensive and time-consuming Matlab variable-precision computations, we restrict ourselves in Sects. 6.2–6.6 to values of n that are less than, or equal to, 30. Also note that in all figures of this section the horizontal axis carries a logarithmic scale.
Proof of a weak form of the extended Stenger conjecture for a special Jacobi polynomial
We consider here, as in Sect. 5.1, the Jacobi weight function \(w(x)=(1-x)^{\alpha }(1+x)^{\beta }\) on \([-1,1]\), with \(\alpha =1\), \(\beta =0\), and continue using the same notations as in that section. In particular, we again use the \((n+1)\)-point Gauss–Radau quadrature formula
$$ \int _{-1}^{1} f(x)\,\mathrm{d}x = \sum _{i=1}^{n+1} w_{i} f(x_{i})+R_{n}(f), $$
where \(x_{n+1}=1\), but this time we include the remainder term
$$ R_{n}(f)=-\gamma _{n} \frac{f^{(2n+1)}(\xi )}{(2n+1)!}, \quad \gamma _{n}=2^{2n+1} \frac{(n+1)n!^{4}}{(2n+1)!^{2}} $$
(cf. [3, top of p. 158, where \(\gamma ^{b}\) should read \(\gamma _{n} ^{b}\)]). In place of (9), we now have
$$ \int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t=\lambda y(x_{i}), \quad i=1,2, \ldots ,n. $$
Multiplying this, as in Sect. 5.1, by \(w_{i}(1-x_{i})\overline{y(x_{i})}\) and summing over \(i=1,2,\ldots ,n+1\), we obtain
$$ \sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t) \,\mathrm{d}t=\lambda \sum _{i=1}^{n+1} w_{i}(1-x_{i}) \bigl\vert y(x_{i}) \bigr\vert ^{2}. $$
Since

$$ f(x):=(1-x)\overline{y(x)} \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t $$
is a polynomial of degree \(2n+1\) and the left-hand side of (28) is equal to the quadrature sum on the right of (25) with f as in (29), we get
$$ \begin{aligned} &\sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t \\ &\quad = \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t) \,\mathrm{d}t \biggr)\, \mathrm{d}x+\gamma _{n} \frac{f^{(2n+1)}(\xi )}{(2n+1)!}, \end{aligned} $$
where \(f^{(2n+1)}\) is a nonnegative constant, namely
$$ f^{(2n+1)}(\xi )=\frac{(2n+1)!}{n+1} \vert a_{n-1} \vert ^{2}, $$
with \(a_{n-1}\) the leading coefficient (of \(x^{n-1}\)) of the polynomial \(y(x)\). Thus,
$$ \begin{aligned}[b] &\sum_{i=1}^{n+1} w_{i}(1-x_{i})\overline{y(x_{i})} \int _{-1}^{x_{i}} y(t) (1-t)\,\mathrm{d}t \\ &\quad = \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t) \,\mathrm{d}t \biggr) \, \mathrm{d}x+C_{n}, \end{aligned} $$
where

$$ C_{n}=\frac{\gamma _{n}}{n+1} \vert a_{n-1} \vert ^{2}. $$
Now the real part of the left-hand side of (28), by (30), is
$$ \begin{aligned} &\frac{1}{2} \biggl[ \int _{-1}^{1} (1-x)\overline{y(x)} \biggl( \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t \biggr)\, \mathrm{d}x \\ &\qquad {}+ \int _{-1}^{1} (1-x)y(x) \biggl( \int _{-1}^{x} \overline{y(t)} (1-t)\,\mathrm{d}t \biggr)\,\mathrm{d}x \biggr] + C_{n} \\ &\quad = \frac{1}{2} \int _{-1}^{1} \frac{\mathrm{d}}{\mathrm{d}x} \biggl\vert \int _{-1}^{x} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2} \,\mathrm{d}x + C_{n} \\ &\quad = \frac{1}{2} \biggl\vert \int _{-1}^{1} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2} + C_{n}, \end{aligned} $$
so that, by (28),
$$ \frac{1}{2} \biggl\vert \int _{-1}^{1} y(t) (1-t)\,\mathrm{d}t \biggr\vert ^{2}+C _{n} = \operatorname{Re}\lambda \int _{-1}^{1} (1-x) \bigl\vert y(x) \bigr\vert ^{2}\,\mathrm{d}x. $$
the integrand on the right being a polynomial of degree \(2n-1\). From this, it follows that \(\operatorname{Re}\lambda \geq 0\). □
Strict positivity of Reλ holds if \(|a_{n-1}|>0\), that is, if \(y(x)\) is a polynomial of exact degree \(n-1\), or if the integral on the left of (31) does not vanish. Computation, using the routines check_pos.m and run_check_pos.m, confirms that both are indeed the case, at least for \(n\leq 40\). Table 5 shows, for selected values of n, the minimum values of \(\vert \int _{-1} ^{1} y(t)(1-t)\,\mathrm{d}t \vert \) and \(|a_{n-1}|\), the minimum being taken over all eigenvalues/vectors. For checking purposes, the computations have also been carried out entirely in 32-digit arithmetic.
Table 5 The minimum values of \(\vert \int _{-1}^{1} y(t)(1-t)\,\mathrm{d}t \vert \) and of \(|a_{n-1}|\)
The element \(u_{jk}^{(n)}\) of the matrix \(U_{n}\) in (3) for the Jacobi weight function \(w(x)=(1-x)^{\alpha }(1+x)^{\beta }\) on \([-1,1]\) is
$$ u_{jk}^{(n)}= \int _{-1}^{x_{j}} \ell _{k}^{(n)}(x)w(x) \,\mathrm{d}x = \frac{1}{2} (1+x_{j}) \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x(t)\bigr)w\bigl(x(t)\bigr) \,\mathrm{d}t, $$
$$ x(t)=\frac{1}{2} (1+x_{j}) t-\frac{1}{2} (1-x_{j}) $$
maps \([-1,1]\) onto \([-1,x_{j}]\). An elementary computation yields
$$ u_{jk}^{(n)}= \biggl(\frac{1+x_{j}}{2} \biggr)^{\alpha +\beta +1} \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x(t)\bigr) \biggl[ \frac{3-x_{j}}{1+x_{j}}- t \biggr] ^{\alpha }(1+t)^{\beta } \,\mathrm{d}t. $$
Although the second factor in the integrand of (32) may be algebraically singular at a point close to, but larger than, 1 (when \(x_{j}<1\) is close to 1), we simply apply Gauss–Jacobi quadrature with Jacobi parameters 0 and β to the integral in (32) and choose the number of quadrature points large enough so as to produce eigenvalues of \(U_{n}\) accurate to at least four decimal places (which is good enough for plotting purposes). This is implemented by the Matlab function Uconj_ext_jac.m and can be run with the Matlab script run_Uconj_ext_jac.m.
Example 1

Gegenbauer weight function \(w(x)=(1-x^{2})^{\alpha }\) on \([-1,1]\) with \(\alpha =10\).
This is the weight function for which the restricted Stenger conjecture is false already for \(n=5\) (cf. Sect. 5.2). The extended conjecture, however, is found to be true for all \(2\leq n \leq 30\); see Fig. 5 for the cases \(n=5,15,30\).
Eigenvalues of the matrix \(U_{n}\) for a special Gegenbauer polynomial of degrees \(n=5,15,30\) (from left to right)
Example 2

Jacobi weight function with parameters \((\alpha , \beta )=[-0.9:0.6:0.9, 1.7:0.7:3.8, 4.7:0.9:7.4]\).
We used the script run_Uconj_ext_jac.m to check the extended U-conjecture for all these Jacobi weight functions, separately for \(n=5,15,30\), and found in every case that the conjecture is valid. By Theorem 2, the same is true for the matrix \(V_{n}\).
To illustrate, we show in Fig. 6 the eigenvalues of \(U_{n}\) for the three parameter choices \(\alpha =\beta =-0.9\), \(\alpha =-0.3\), \(\beta =-0.9\), and \(\alpha =5.6\), \(\beta =1.7\), in each case with \(n=30\).
Eigenvalues of the matrix \(U_{n}\), \(n=30\), for selected Jacobi polynomials
The weight function \(w(x)=x^{\alpha }\log (1/x)\) on \([0,1]\)
Here, for the matrix \(U_{n}\), we use the change of variables \(x=x_{j} t\) in
$$ u_{jk}^{(n)}= \int _{0}^{x_{j}}\ell _{k}^{(n)}(x) x^{\alpha }\log (1/x) \,\mathrm{d}x =x_{j}^{\alpha +1} \int _{0}^{1}\ell _{k}^{(n)}(x_{j} t) t ^{\alpha }\log \bigl(1/(x_{j} t)\bigr)\,\mathrm{d}t $$
to get
$$ u_{jk}^{(n)}=x_{j}^{\alpha +1} \biggl[ \log (1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\,\mathrm{d}t + \int _{0}^{1} \ell _{k} ^{(n)}(x_{j} t) t^{\alpha }\log (1/t)\,\mathrm{d}t \biggr]. $$
Both integrals can be evaluated exactly, the first by m-point Gauss–Jacobi quadrature on \([0,1]\) with Jacobi parameters 0 and α, where \(m=\lceil n/2\rceil \), and the second by m-point Gauss quadrature relative to the weight function \(w(t)=t^{\alpha } \log (1/t)\) on \([0,1]\). For the latter, the recurrence coefficients for the relevant orthogonal polynomials (when \(\alpha =0, -1/2, 1/2, 1, 2, 5\)) are available to 32 decimal digits, partly in [5, 2.3.1, 2.4.1, 2.4.3], which allow us to generate the Gaussian quadrature rule in a well-known manner (cf., e.g., [3, §3.1.1]) using the OPQ routine gauss.m (see [4, p. 304]). This is implemented by the Matlab function Uconj_ext_log1.m and can be run with the Matlab script run_Uconj_ext_log1.m.
Alternatively, when \(n\leq 40\), we may compute the recurrence coefficients for arbitrary \(\alpha >-1\) as described in Sect. 5.3.3. This is implemented by the routines r_alglog1.m, Uconj_ext_log1.m, and run0_Uconj_ext_log1.m.
Example 3

Algebraic/logarithmic weight function \(w(x)=x^{ \alpha }\log (1/x)\) on \([0,1]\) with \(\alpha =(-0.9:0.1:5)(5.2:0.2:7) (7.5:0.5:10)\).
Our routines validate the extended Stenger conjecture for all these values of α and \(2\leq n\leq 30\). The eigenvalues of \(U_{n}\) are shown in the case \(\alpha =0\) in Fig. 7, and in the cases \(\alpha =-1/2, 1/2\) in Figs. 8 and 9, respectively, for \(n=5,15,30\). They are similar when \(\alpha =1,2,5\).
Eigenvalues of \(U_{n}\) in the case of a logarithmic weight function for \(n=5,15,30\) (from left to right)
Eigenvalues of \(U_{n}\) in the case of an algebraic/logarithmic weight function with parameter \(\alpha =-1/2\) for \(n=5,15,30\) (from left to right)
Eigenvalues of \(U_{n}\) in the case of an algebraic/logarithmic weight function with parameter \(\alpha =1/2\) for \(n=5,15,30\) (from left to right)
With regard to \(V_{n}\), the conjecture has been similarly validated, using the routines Vconj_ext_log1.m and run_Vconj_ext_log1.m, for the same values of n and α as in Example 3. To compute the matrix \(V_{n}\), we have used
$$ v_{jk}^{(n)}= \int _{0}^{1} \ell _{k}^{(n)}(x)x^{\alpha } \log (1/x) \,\mathrm{d}x -u_{jk}^{(n)} $$
with \(u_{jk}^{(n)}\) as in (33) and the integral evaluated by \(\lceil n/2\rceil \)-point Gaussian quadrature relative to the weight function \(w(x)\). The eigenvalues of \(V_{n}\) are found to be similar to those for \(U_{n}\) shown in Figs. 7–9.
Algebraic/square-logarithmic weight function \(w(x)=x^{\alpha }\log ^{2}(1/x)\) on \([0,1]\), \(\alpha >-1\)
Similarly as in Sect. 6.3.1, one finds
$$ \begin{aligned}[b] u_{jk}^{(n)}&=x_{j}^{\alpha +1} \biggl[ \log ^{2}(1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\,\mathrm{d}t +2\log (1/x_{j}) \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\log (1/t)\,\mathrm{d}t \\ &\quad{} + \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t) t^{\alpha }\log ^{2}(1/t)\,\mathrm{d}t \biggr], \end{aligned} $$
where again the integrals can be evaluated exactly and some of the required recurrence coefficients taken from [5, 2.3.2], [5, 2.4.5], [5, 2.4.7]. This is implemented by the Matlab function Uconj_ext_log2.m and driver run_Uconj_ext_log2.m.
Example 4

Algebraic/square-logarithmic weight function \(w(x)=x^{\alpha }\log ^{2}(1/x)\) on \([0,1]\) with \(\alpha =0,-1/2,1/2, 1,2,5\).
Our routines validate the extended Stenger conjecture for all these values of α and \(2\leq n\leq 30\). The eigenvalues of \(U_{n}\) in the case \(\alpha =0\) are found to be similar to those depicted in Fig. 7 for the weight function \(\log (1/x)\). For the cases \(\alpha =-1/2, 1/2,5\), they are shown respectively in Figs. 10–12 for \(n=5,15,30\). Interestingly, all eigenvalues appear to be real when \(\alpha =-1/2\).
Eigenvalues of \(U_{n}\) in the case of an algebraic/square-logarithmic weight function, with exponent \(\alpha =-1/2\), for \(n=5,15,30\) (from left to right)
Eigenvalues of \(U_{n}\) in the case of an algebraic/square-logarithmic weight function, with exponent \(\alpha =1/2\), for \(n=5,15,30\) (from left to right)
Eigenvalues of \(U_{n}\) in the case of an algebraic/square-logarithmic weight function, with exponent \(\alpha =5\), for \(n=5,15,30\) (from left to right)
Similar results and validations, using the routines Vconj_ext_log2.m and run_Vconj_ext_log2.m, are obtained for the matrix \(V_{n}\), which, as in (34), is computed exactly by
$$ v_{jk}^{(n)}= \int _{0}^{1} \ell _{k}^{(n)}(x)x^{\alpha } \log ^{2}(1/x) \,\mathrm{d}x-u_{jk}^{(n)} $$
with \(u_{jk}^{(n)}\) as in (35).
Laguerre and generalized Laguerre weight functions

Here, the weight function is assumed to be \(w(x)=x^{\alpha } \mathrm{e}^{-x}\) on \([0,\infty ]\), where \(\alpha >-1\). We write
$$ u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x- \int _{x_{j}}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e}^{-x}\,\mathrm{d}x $$
and, in the second integral, make the change of variables \(x=x_{j}+t\) to get
$$ u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x-\mathrm{e}^{-x_{j}} \int _{0}^{\infty }\ell _{k}^{(n)}(x _{j}+t) (x_{j}+t)^{\alpha }\mathrm{e}^{-t}\, \mathrm{d}t. $$
The first integral can be evaluated exactly by \(\lceil n/2 \rceil \)-point generalized Gauss–Laguerre quadrature. The second integral, similarly as in (32) for Jacobi weight functions, has an algebraic singularity close to, and to the left of, the origin when \(x_{j}\) is close to zero (and α not an integer). As in Sect. 6.2, we ignore this and simply apply Gauss–Laguerre quadrature of sufficiently high order so as to obtain plotting accuracy for all the eigenvalues of \(U_{n}\). However, there is yet another complication: Around \(n=25\), the Gauss–Laguerre weights, in Matlab double precision, start becoming increasingly inaccurate (in terms of relative accuracy) and adversely affect the accuracy of the second integral in (37). For this reason, we use 32-digit variable-precision arithmetic to compute these weights and convert them to Matlab double precision, once computed. At the same time we lower the accuracy requirement from 4- to 3-digit accuracy.
Example 5

Generalized Laguerre weight function \(w(x)=x^{ \alpha }\mathrm{e}^{-x}\) on \([0,\infty ]\) for the same values of α and n as in Example 2.
The Matlab routines implementing this and validating the conjecture in each case are Uconj_ext_lag.m and run_Uconj_ext_lag.m. They may take several hours to run because of the extensive variable-precision work involved. The accuracy achieved for the eigenvalues is consistently of the order of \(10^{-4}\) or better, but the necessary number of quadrature points is found to be as large as 440 (for \(\alpha =-0.9\) and \(n=30\)).
For illustration, we show in Fig. 13 the eigenvalues obtained in the case of the ordinary Laguerre weight function (\(\alpha =0\)) and for \(n=5, 15,30\). Notice the extremely small real eigenvalues when \(n=30\), the smallest being of the order \(10^{-43}\).
Eigenvalues of \(U_{n}\) in the case of the Laguerre weight function for \(n=5,15,30\) (from left to right)
With regard to \(V_{n}\), computed by

$$ v_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(x)x^{\alpha } \mathrm{e} ^{-x} \,\mathrm{d}x-u_{jk}^{(n)} $$
with \(u_{jk}^{(n)}\) as in (37), the conjecture has been similarly validated with the help of the routines Vconj_ext_lag.m, run_Vconj_ext_lag.m.
Hermite and generalized Hermite weight functions
These are the weight functions \(w(x)=|x|^{2\mu }e^{-x^{2}}\) on \([-\infty ,\infty ]\), \(\mu >-1/2\). Since they are symmetric, it suffices, by Theorem 1, to consider \(U_{n}\). To simplify matters, we assume 2μ to be a nonnegative integer.
For the evaluation of \(u_{jk}^{(n)}\), we distinguish the cases \(x_{j}<0\) and \(x_{j}\geq 0\). In the former case, by the change of variables \(x=x_{j}-t\), one gets
$$ u_{jk}^{(n)}=\mathrm{e}^{-x_{j}^{2}} \int _{0}^{\infty }\ell _{k}^{(n)}(x _{j}-t) (t-x_{j})^{2\mu }\mathrm{e}^{2x_{j} t} \mathrm{e}^{-t^{2}} \,\mathrm{d}t, \quad x_{j}< 0. $$
Here, half-range Gauss–Hermite quadrature (cf. [5, 2.9.1]) is expected to converge rapidly. When \(x_{j}\geq 0\), breaking up the first integral in (3) (with \(a=-\infty \)) into two parts, one extended from −∞ to 0 and the other from 0 to \(x_{j}\), and making appropriate changes of variables in each yield
$$ u_{jk}^{(n)}= \int _{0}^{\infty }\ell _{k}^{(n)}(-t)t^{2\mu } \mathrm{e} ^{-t^{2}}\,\mathrm{d}t +x_{j}^{2\mu +1} \int _{0}^{1} \ell _{k}^{(n)}(x _{j} t)\mathrm{e}^{-x_{j}^{2} t^{2}} t^{2\mu }\,\mathrm{d}t, \quad x _{j}\geq 0. $$
The first integral can be evaluated exactly by \(\lceil (n+2\mu )/2 \rceil \)-point half-range Gauss–Hermite quadrature. The second integral may be approximated by Gauss–Jacobi quadrature on \([0,1]\) with Jacobi parameters 0 and 2μ. This, too, is expected to converge quickly.
Example 6

Generalized Hermite weight function \(w(x)=|x|^{2 \mu }\mathrm{e}^{-x^{2}}\) on \([-\infty ,\infty ]\), \(\mu =0:1/2:25\) and \(n=5,15,30\).
The conjecture has been validated in all cases using the routines Uconj_ext_herm.m, run_Uconj_ext_herm.m. For illustration, the eigenvalues of \(U_{n}\) are shown in Fig. 14 for the case \(\mu =0\).
Eigenvalues of \(U_{n}\) in the case of the Hermite weight function for \(n=5,15,30\) (from left to right)
A weight function supported on two disjoint intervals
We now consider a weight function which is not positive a.e.:
$$ w(x)=\textstyle\begin{cases} \vert x \vert (x^{2}-\xi ^{2})^{p}(1-x^{2})^{q} & \text{if }x\in [-1,-\xi ]\cup [\xi ,1], \\ 0 & \text{otherwise}, \end{cases} $$
where \(0<\xi <1\), \(p>-1\), \(q>-1\). This weight function, of interest in theoretical chemistry when \(p=q=-1/2\), has been studied in [2]. In our present context, we assume, for simplicity, that p and q are nonnegative integers. Then only integrations of polynomials are required, which, as before, can be done exactly.
Since the weight function w is symmetric, it suffices, by Theorem 1, to look at the matrices \(U_{n}\) only.
Any polynomial \(\pi _{n}\) orthogonal with respect to w can have at most one zero in the interval \([-\xi ,\xi ]\) where w is zero [3, Theorem 1.20]. By symmetry, therefore, all zeros of \(\pi _{n}\) are located in the intervals \((-1,-\xi )\) or \((\xi ,1)\), except when n is odd, in which case there is a zero at the origin.
The recurrence coefficients \(\alpha _{k}\), \(\beta _{k}\) for the (monic) polynomials \(\pi _{n}\) are known explicitly [2, Eq. (4.1)]: All \(\alpha _{k}=0\), by symmetry, and
$$\begin{aligned}& \beta _{0} =\bigl(1-\xi ^{2} \bigr)^{p+q+1}\varGamma (p+1)\varGamma (q+1)/\varGamma ( p+q+2), \\& \beta _{1} = \frac{1}{2}\bigl(1-\xi ^{2}\bigr) \alpha _{0}^{J}+ {\frac{1}{2}}\bigl(1+\xi ^{2}\bigr), \\& \left.\textstyle\begin{array}{l} \beta _{2k} =({\frac{1}{2}}(1-\xi ^{2}))^{2} \beta _{k}^{J}/\beta _{2k-1} \\ \beta _{2k+1} ={\frac{1}{2}}(1-\xi ^{2}) \alpha _{k}^{J}+{\frac{1}{2}}(1+\xi ^{2})-\beta _{2k} \end{array}\displaystyle \right\} \quad k=1,2,3,\ldots , \end{aligned}$$
where \(\alpha _{k}^{J}\), \(\beta _{k}^{J}\) are the recurrence coefficients of the monic Jacobi polynomials with parameters \(\alpha =q\), \(\beta =p\). Therefore, the zeros of \(\pi _{n}\) are easily computed by the OPQ routine gauss.m (see [4, p. 304]).
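A Python stand-in for this computation (not the OPQ code): it generates the \(\beta_k\) from the formulas above, using the standard closed-form monic Jacobi coefficients for \(\alpha_k^{J}\), \(\beta_k^{J}\), and obtains the zeros of \(\pi_n\) as eigenvalues of the symmetric tridiagonal Jacobi matrix (Golub–Welsch). The parameter values are illustrative and match the case shown in Fig. 15.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.special import gamma

def r_jacobi(N, a, b):
    """First N monic recurrence coefficients for the Jacobi weight
    (1-x)^a (1+x)^b on [-1,1] (standard closed forms)."""
    k = np.arange(N, dtype=float)
    alpha = np.zeros(N)
    alpha[0] = (b - a) / (a + b + 2)
    alpha[1:] = (b * b - a * a) / ((2 * k[1:] + a + b) * (2 * k[1:] + a + b + 2))
    beta = np.zeros(N)
    beta[0] = 2.0 ** (a + b + 1) * gamma(a + 1) * gamma(b + 1) / gamma(a + b + 2)
    kk = k[1:]
    beta[1:] = (4 * kk * (kk + a) * (kk + b) * (kk + a + b)
                / ((2 * kk + a + b) ** 2 * ((2 * kk + a + b) ** 2 - 1)))
    return alpha, beta

def two_interval_beta(n, xi, p, q):
    """beta_0 .. beta_{2n-1} for the two-interval weight, via the formulas above."""
    aJ, bJ = r_jacobi(n + 1, q, p)        # Jacobi parameters alpha = q, beta = p
    c = 0.5 * (1 - xi**2)
    beta = np.zeros(2 * n)
    beta[0] = (1 - xi**2) ** (p + q + 1) * gamma(p + 1) * gamma(q + 1) / gamma(p + q + 2)
    beta[1] = c * aJ[0] + 0.5 * (1 + xi**2)
    for k in range(1, n):
        beta[2 * k] = c**2 * bJ[k] / beta[2 * k - 1]
        beta[2 * k + 1] = c * aJ[k] + 0.5 * (1 + xi**2) - beta[2 * k]
    return beta

n, xi, p, q = 15, 0.5, 0, 0               # the case illustrated in Fig. 15
beta = two_interval_beta(n, xi, p, q)
# Jacobi matrix: zero diagonal (all alpha_k = 0), off-diagonal sqrt(beta_k)
zeros = eigh_tridiagonal(np.zeros(n), np.sqrt(beta[1:n]))[0]
print(zeros)                              # n odd: one zero at the origin
```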
The computation of \(u_{jk}^{(n)}\) is different, depending on where the zero \(x_{j}\) is located. In fact,
$$ u_{jk}^{(n)}=-\frac{1+x_{j}}{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl( x_{1}(t)\bigr)x _{1}(t) \bigl(x_{1}^{2}(t)- \xi ^{2}\bigr)^{p}\bigl(1-x_{1}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}< -\xi , $$
where \(x_{1}(t)=\frac{1+x_{j}}{2}t+\frac{x_{j}-1}{2}\) maps \([-1,1]\) onto \([-1,x_{j}]\);
$$ u_{jk}^{(n)}=-\frac{1-\xi }{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl( x_{2}(t)\bigr)x _{2}(t) \bigl(x_{2}^{2}(t)- \xi ^{2}\bigr)^{p}\bigl(1-x_{2}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}=0, $$
where \(x_{2}(t)=\frac{1-\xi }{2}t-\frac{1+\xi }{2}\) maps \([-1,1]\) onto \([-1,-\xi ]\); and
$$ u_{jk}^{(n)}= \bigl( u_{jk}^{(n)} \bigr)_{x_{j}=0}+\frac{ x_{j}- \xi }{2} \int _{-1}^{1} \ell _{k}^{(n)} \bigl(x_{3}(t)\bigr)x_{3}(t) \bigl(x_{3}^{2}(t)- \xi ^{2}\bigr)^{p} \bigl(1-x_{3}^{2}(t) \bigr)^{q} \,\mathrm{d}t \quad \text{if } x_{j}>\xi , $$
where \(x_{3}(t)=\frac{x_{j}-\xi }{2}t+\frac{x_{j}+\xi }{2}\) maps \([-1,1]\) onto \([\xi ,x_{j}]\).
All integrals can be computed exactly by \((\lceil (n+1)/2\rceil +p+q)\)-point Gauss–Legendre quadrature.
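As a concrete instance, the first of the three cases might be coded as follows (a Python sketch; lk, \(x_j\), ξ, p, q are illustrative stand-ins, not values from the paper):

```python
# Sketch of the case x_j < -xi, via numpy's Gauss-Legendre rule on [-1,1].
# With p, q nonnegative integers the rule is exact once the number of
# points is at least ceil((n+1)/2) + p + q.
import numpy as np

def u_case1(lk, xj, xi, p, q, npts):
    t, w = np.polynomial.legendre.leggauss(npts)
    x1 = (1 + xj) / 2 * t + (xj - 1) / 2        # maps [-1,1] onto [-1, x_j]
    vals = lk(x1) * x1 * (x1**2 - xi**2) ** p * (1 - x1**2) ** q
    return -(1 + xj) / 2 * np.sum(w * vals)

lk = np.polynomial.Polynomial([0.1, 0.4, -2.0, 1.0])   # stand-in, degree n-1 = 3
print(u_case1(lk, xj=-0.8, xi=0.5, p=0, q=0, npts=5))
```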
Example 7. The weight function (41) with \(\xi =0.1:0.2:0.9\) and \(p,q=0:5\) for \(n=5,15,30\).
The routines Uconj_ext_twoint.m, run_Uconj_ext_twoint.m can be used to validate the conjecture in all cases, even though the weight function is not in the class of weight functions assumed in the conjecture. (For another such example, see Example 9 with \(N=1\).)
To illustrate, we show in Fig. 15 the eigenvalues of \(U_{n}\), \(n=5,15,30\), in the case \(\xi =1/2\), \(p=q=0\), i.e., for the weight function \(w(x)\) on \([-1,1]\) equal to \(|x|\) outside of \([-1/2,1/2]\) and 0 inside.
Eigenvalues of \(U_{n}\) in the case of a two-interval weight function for \(n=5, 15,30\) (from left to right)
Discrete weight functions
To demonstrate that an assumption about the weight function like the one made for the extended Stenger conjecture is called for, we now consider a discrete measure \(\mathrm{d}\lambda _{N+1}\) supported on \(N+1\) points \(0,1,2,\ldots ,N\) with jumps \(w_{k}>0\) at the points k, \(k=0,1, \ldots ,N\). The corresponding orthogonal polynomials, now \(N+1\) in number, are again denoted by \(p_{n}\), \(n=0,1,\ldots ,N\). If \(w_{0}=w_{1}=\cdots =w_{N}=1\), we are dealing with the classical discrete orthogonal polynomials attributed to Chebyshev (cf. [3, Example 1.15]). They are the special case \(\alpha =\beta =0\) of Hahn polynomials with parameters α, β (cf. [3, last entry of Table 1.2]). Both the weight function and the zeros of \(p_{n}\) are symmetric about the midpoint \(N/2\). In particular, when N is even and n odd, one of the zeros is equal to \(N/2\), hence an integer.
For the elements of \(U_{n}\), we have
$$ u_{jk}^{(n)}=\sum _{i=0}^{i_{j}} w_{i} \ell _{k}^{(n)}(i), \quad i _{j}=\lfloor x_{j}\rfloor , $$
where \(x_{j}\) are the zeros of \(p_{n}\) (assumed in increasing order). These can be generated by the functions r_hahn.m and gauss.m.
Example 8. The measure \(\mathrm{d}\lambda _{N+1}\), \(N\geq 2\), with \(w_{0}=w_{1}=\cdots =w_{N}=1\), and \(p_{n}\) with \(2\leq n\leq N\).
It is important to note that when the zeros of \(p_{n}\) are computed by the routine gauss.m, and when N is even and n odd, the integer zero \(x_{j}=N/2\) may come out slightly less than \(N/2\), in which case \(\lfloor x_{j}\rfloor \) in (42) yields an incorrect result. Similarly, the computed smallest zero may turn out to be negative, or the computed largest zero may equal N. To avoid these pitfalls, we overwrite the zero, once computed, by \(N/2\), or reset \(\lfloor x_{j} \rfloor \) for \(j=1\) resp. \(j=n\) to 0 resp. \(N-1\).
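A self-contained Python sketch of the whole computation for Example 8 (standing in for r_hahn.m, gauss.m, and Uconj_ext_hahn.m; it assumes the standard closed-form discrete-Chebyshev recurrence \(\alpha_k=N/2\), \(\beta_k=k^2((N+1)^2-k^2)/(4(4k^2-1))\)), including the guards just described:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def disc_cheb_zeros(n, N):
    """Zeros of the discrete Chebyshev polynomial p_n via the Jacobi matrix."""
    k = np.arange(1, n)
    beta = k**2 * ((N + 1) ** 2 - k**2) / (4.0 * (4.0 * k**2 - 1.0))
    x = eigh_tridiagonal(np.full(n, N / 2.0), np.sqrt(beta))[0]
    if N % 2 == 0 and n % 2 == 1:
        x[n // 2] = N / 2.0                # overwrite the integer zero exactly
    return x

def U_matrix(n, N):
    """U_n by the finite sum (42), with w_0 = ... = w_N = 1."""
    x = disc_cheb_zeros(n, N)
    ifloor = np.floor(x).astype(int)
    ifloor[0] = max(ifloor[0], 0)          # smallest zero must not go negative
    ifloor[-1] = min(ifloor[-1], N - 1)    # largest zero must stay below N
    # Lagrange basis values l_k(i) at the support points i = 0..N
    V = np.array([[np.prod([(i - x[m]) / (x[k] - x[m])
                            for m in range(n) if m != k])
                   for k in range(n)] for i in range(N + 1)])
    return np.array([V[: ifloor[j] + 1, :].sum(axis=0) for j in range(n)])

N = n = 15
eig = np.linalg.eigvals(U_matrix(n, N))
print(eig[eig.real < 0])                   # delinquent eigenvalues, if any
```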
On running the script run_Uconj_ext_hahn.m, using Uconj_ext_hahn.m, to compute \(U_{n}\) and its eigenvalues, we found that the extended Stenger conjecture is still true for all \(N\leq 10\) and all \(2\leq n\leq N\), but no longer when \(N>10\). The values of N and n for which eigenvalues with negative real parts appear are shown in Table 6 for \(11\leq N\leq 30\).
Table 6 The presence of delinquent eigenvalues of \(U_{n}\) in the case of a discrete weight function
Asterisks indicate the presence of two pairs of delinquent complex conjugate eigenvalues rather than the usual single pair. (48-digit arithmetic was used for the last two entries in Table 6.)
Since the weight function is symmetric (with respect to the midpoint \(N/2\)), by Theorem 1 the same pattern of validity and nonvalidity holds also for the V-conjecture.
We illustrate by showing in Fig. 16 the eigenvalues of \(U_{n}\), \(n=N\), for \(N=11,15,30\).
Eigenvalues of \(U_{n}\) in the case of discrete weight functions for \(n=N=11,15,30\) (from left to right)
Since there are no approximations involved, the results obtained should be quite accurate. In fact, we reran Example 8 in 48-digit arithmetic and found the double-precision eigenvalues accurate to 13, 12, and 10 digits for, resp., \(n=11,15,30\).
With regard to the restricted Stenger conjecture, the routines used are run_Uconj_restr_hahn.m and Uconj_restr_hahn.m. They, too, confirm the validity of the conjecture for \(N\leq 10\) and \(2\leq n\leq N\). But for \(N>10\), there are now more values of n than shown in Table 6 for which there are eigenvalues with negative real parts, and there can be as many as four pairs of delinquent eigenvalues.
Block-discrete and ε-block-discrete weight functions
It may be interesting to see whether the eigenvalues of \(U_{n}\) behave as in Example 8 when the weight function is not (\(N+1\))-discrete, but (\(N+1\))-block-discrete, that is, of the form
$$ w(x;N+1)= \textstyle\begin{cases} w_{\nu }& \text{if } 2\nu \leq x\leq 2\nu +1, \nu =0,1,\ldots ,N, \\ 0 & \text{otherwise}, \end{cases} $$
where \(w_{0},w_{1},\ldots ,w_{N}\), \(N\geq 1\), are positive numbers. Thus, the weight function is made up of \(N+1\) "blocks" with base 1 and heights \(w_{\nu }\), \(\nu =0,1,\ldots ,N\), any two consecutive blocks being separated by a zero-block. More generally, we may consider \((N+1)\)-ε-block-discrete weight functions, where the separating zero-blocks are replaced by ε-blocks, that is,
$$ w(x;N+1,\varepsilon )= \textstyle\begin{cases} w_{\nu }& \text{if } 2\nu \leq x< 2\nu +1, \nu =0,1,\ldots ,N, \\ \varepsilon & \text{if } 2\nu -1\leq x< 2\nu , \nu =1,2,\ldots ,N, \\ 0 & \text{otherwise}. \end{cases} $$
The orthogonal polynomials \(p_{n}\) associated with the weight function \(w(x; N+1,\varepsilon )\) can be generated from their three-term recurrence relation, which in turn can be computed (exactly) by a (\(2N+1\))-component discretization procedure (cf. [3, §2.2.4]) using \(\lceil n/2\rceil \)-point Gauss–Legendre quadrature on \([0,1]\). This is implemented in Matlab double and variable precision by the routines ab_blockhahn.m, sab_blockhahn.m. (For checking purposes, the same recurrence relation was also computed by a moment-based routine in sufficiently high precision.)
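A rough Python stand-in for this discretization procedure (not ab_blockhahn.m): it combines per-block Gauss–Legendre rules with the classical discrete Stieltjes recursion, using a generous number of points per block rather than the minimal \(\lceil n/2\rceil \); the parameter values are illustrative.

```python
import numpy as np

def block_discretization(N, eps, npts, heights=None):
    """Nodes/weights discretizing w(x; N+1, eps) over its 2N+1 blocks."""
    if heights is None:
        heights = np.ones(N + 1)            # w_0 = ... = w_N = 1
    t01, w01 = np.polynomial.legendre.leggauss(npts)
    t01, w01 = (t01 + 1) / 2, w01 / 2       # Gauss-Legendre rule on [0,1]
    nodes, weights = [], []
    for nu in range(2 * N + 1):             # block [nu, nu+1)
        h = heights[nu // 2] if nu % 2 == 0 else eps
        nodes.append(nu + t01)
        weights.append(h * w01)
    return np.concatenate(nodes), np.concatenate(weights)

def stieltjes(n, t, w):
    """First n monic recurrence coefficients from a discrete inner product."""
    alpha, beta = np.zeros(n), np.zeros(n)
    pkm1, pk = np.zeros_like(t), np.ones_like(t)
    beta[0] = w.sum()
    for k in range(n):
        s = w @ pk**2
        alpha[k] = (w @ (t * pk**2)) / s
        if k > 0:
            beta[k] = s / sm1
        sm1 = s
        pkm1, pk = pk, (t - alpha[k]) * pk - beta[k] * pkm1
    return alpha, beta

t, w = block_discretization(N=2, eps=0.01, npts=15)
print(stieltjes(6, t, w))
```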
The elements \(u_{jk}\) of the matrix \(U_{n}\)
$$ u_{jk}= \int _{0}^{x_{j}} \ell _{k}^{(n)}(x) w(x;N+1,\varepsilon ) \,\mathrm{d}x, $$
where \(x_{j}\) are the zeros of \(p_{n}\), can be computed (exactly) as follows. Let \(m=\lfloor x_{j} \rfloor \).
If \(m=0\),
$$ u_{jk}^{(n)}=w_{0} \int _{0}^{x_{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x= w _{0} x_{j} \int _{0}^{1} \ell _{k}^{(n)}(x_{j} t)\,\mathrm{d}t; $$
if \(m=1\),
$$ \begin{aligned} u_{jk} & = w_{0} \int _{0}^{1} \ell _{k}^{(n)}(x) \,\mathrm{d}x+\varepsilon \int _{1}^{x_{j}} \ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \bigl[ w_{0} \ell _{k}^{(n)}(t)+\varepsilon (x _{j}-1) \ell _{k}^{(n)}\bigl((x_{j}-1)t+1\bigr) \bigr] \,\mathrm{d}t; \end{aligned} $$
if \(m>0\) is even,
$$ \begin{aligned} u_{jk} & = \sum _{\nu =0}^{(m-2)/2} w_{\nu } \int _{2\nu }^{2\nu +1} \ell _{k} ^{(n)}(x)\,\mathrm{d}x+w_{m/2} \int _{m}^{x_{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x +\varepsilon \sum_{\nu =1}^{m/2} \int _{2\nu -1}^{2\nu } \ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \Biggl( \sum _{\nu =0}^{(m-2)/2} w_{\nu } \ell _{k} ^{(n)} (2\nu +t)+w_{m/2}(x_{j}-m)\ell _{k}^{(n)}\bigl((x_{j}-m)t+m\bigr) \\ & \quad{} + \varepsilon \sum_{\nu =1}^{m/2} \ell _{k}^{(n)} (2 \nu -1+t) \Biggr) \,\mathrm{d}t; \end{aligned} $$
if \(m>1\) is odd,
$$ \begin{aligned} u_{jk} & = \sum _{\nu =0}^{(m-1)/2} w_{\nu } \int _{2\nu }^{2\nu +1} \ell _{k} ^{(n)}(x)\,\mathrm{d}x+\varepsilon \sum_{\nu =1}^{(m-1)/2} \int _{2 \nu -1}^{2\nu } \ell _{k}^{(n)}(x) \,\mathrm{d}x+\varepsilon \int _{m}^{x _{j}}\ell _{k}^{(n)}(x) \,\mathrm{d}x \\ & = \int _{0}^{1} \Biggl( w_{0} \ell _{k}^{(n)}(t)+\sum_{\nu =1}^{(m-1)/2} \bigl[ w_{\nu }\ell _{k}^{(n)}(2\nu +t)+\varepsilon \ell _{k}^{(n)} (2\nu -1+t) \bigr] \\ & \quad{} + \varepsilon (x_{j}-m)\ell _{k}^{(n)} \bigl((x_{j}-m)t+m\bigr) \Biggr) \,\mathrm{d}t. \end{aligned} $$
All integrals on the far right of these equations can be computed exactly by \(\lceil n/2 \rceil \)-point Gauss–Legendre quadrature on \([0,1]\). The first pitfall mentioned in Example 8, associated with computing the floor of \(x_{j}\), is no longer an issue since the midpoint is now \(N+1/2\), a half-integer, not an integer.
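For instance, the simplest case \(m=0\) might be coded as follows (a Python sketch with illustrative stand-ins for \(\ell_k^{(n)}\), \(x_j\), and \(w_0\)):

```python
# Sketch of the m = 0 case:  u_jk = w_0 * x_j * int_0^1 l_k(x_j t) dt,
# computed with the ceil(n/2)-point Gauss-Legendre rule on [0,1].
import numpy as np

def u_m0(lk, xj, w0, n):
    m = (n + 1) // 2                       # ceil(n/2) points: exact for deg n-1
    t, w = np.polynomial.legendre.leggauss(m)
    t, w = (t + 1) / 2, w / 2              # shift the rule to [0,1]
    return w0 * xj * np.sum(w * lk(xj * t))

lk = np.polynomial.Polynomial([0.3, -1.2, 2.0])    # stand-in, n = 3
print(u_m0(lk, xj=0.6, w0=1.0, n=3))
```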
Example 9. The (\(N+1\))-block-discrete Hahn weight function with parameters \(\alpha =\beta =0\) and \(p_{n}\) with \(2\leq n\leq N\).
This is the weight function (43) with \(w_{0}=w_{1} =\cdots =w_{N}=1\). To check the behavior of the eigenvalues in this case, we have run the script run_Uconj_ext_blockhahn.m using the function Uconj_ext_blockhahn.m and \(\mathtt{epsilon}=0\) for \(N=1:10\) and \(2\leq n\leq 30\) for each N. It was found that the extended Stenger conjecture is still true for \(2\leq n\leq 30\) (and probably for all \(n\geq 2\)) when \(N=1\), i.e., for a 2-block-discrete Hahn weight function. When \(N>1\), however, eigenvalues with negative real parts again show up, starting from some \(n\geq 9\), and frequently, but not always, thereafter. The values of N and n, for which this occurs, are shown in Table 7. There is usually one pair of delinquent complex conjugate eigenvalues, but in some cases there are two such pairs. These are identified by an asterisk in Table 7.
Table 7 The presence of delinquent eigenvalues of \(U_{n}\) in the case of a block-discrete weight function
The validity of the extended Stenger conjecture for \(N=1\) is interesting. It may well be for the same (unknown) reason that validates the conjecture in the case of the two-interval weight function of Sect. 6.6; cf. Example 7.
To illustrate, we show in Fig. 17 the eigenvalues in the cases \((N,n)=(2,30),(5,28), (10,26)\).
Eigenvalues of \(U_{n}\) in the case of the \((N+1)\)-block-discrete Hahn weight functions with \((N,n)=(2,30),(5,28),(10,26)\) (from left to right)
The restricted Stenger conjecture, in this example, fares much better, though failing also in a few cases. Using the routines run_Uconj_restr_blockhahn.m and Uconj_restr_blockhahn.m for \(N=1:10\), \(2\leq n\leq 30\), we found the conjecture to be true for \(N=[1,2,3,4,9]\), \(2\leq n\leq 30\), and false in only the five cases: \((N,n)=(5,30),(6,28),(7,30),(8,28),(10,25)\). To rule out the presence of severe numerical instabilities as a cause for this unexpected behavior, all cases have been rerun, and confirmed, in 32-digit arithmetic. The double-precision eigenvalues were compared with those obtained in 32-digit precision and found to agree to 5–15 digits, the delinquent ones always to at least 11 digits.
For illustration, we show in Fig. 18 the eigenvalues in the cases \((N,n)=(2,30),(6,28), (10,25)\), the last two containing a pair of eigenvalues with negative real part.
The presence of delinquent eigenvalues in this example, strictly speaking, does not invalidate the extended Stenger conjecture, since the weight function (43) does not satisfy the positivity a.e. condition imposed by Stenger. However, the matrix \(U_{n}\) associated with the weight function (44), depending on the positive parameter ε, by a continuity argument will have the same pattern of delinquent eigenvalues as the matrix \(U_{n}\) associated with the weight function (43) when ε is sufficiently small. This then shows that the extended Stenger conjecture cannot be valid for all admissible weight functions. We illustrate this with the final example,
Example 10. The \((N+1)\)-ε-block-discrete weight function (44) for \(N=2\), \(\varepsilon = 1/100\), and \(n=9\).
This relates to the first item in Table 7. The routine run_Uconj_ext_epsilon_blockhahn_N2_n9.m, using r_blockhahn to generate the required recurrence coefficients by an (\(N+1\))-component discretization procedure (\(N=2\)) implemented by the routines mcdis.m and quad_blockhahn.m, computes the eigenvalues of \(U_{n}\) for \(n=9\). They are shown in Table 8.
Table 8 The eigenvalues \(\lambda _{k}\) of \(U_{n}\), \(n=9\), for the weight function of Example 10
Recomputing them in 32-digit arithmetic proves them correct to all digits shown.
All Matlab routines referenced in this paper, and all text files used, can be accessed at CONJS of the website https://www.cs.purdue.edu/archives/2002/wxg/codes.
1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Appl. Math. Ser., vol. 55. National Bureau of Standards, Washington (1964)
2. Gautschi, W.: On some orthogonal polynomials of interest in theoretical chemistry. BIT Numer. Math. 24, 473–483 (1984) [Also in Selected Works, v. 2, 101–111]
3. Gautschi, W.: Orthogonal Polynomials: Computation and Approximation. Numerical Mathematics and Scientific Computation. Oxford University Press, Oxford (2004)
4. Gautschi, W.: Orthogonal Polynomials in MATLAB: Exercises and Solutions. SIAM, Philadelphia (2016)
5. Gautschi, W.: A Software Repository for Orthogonal Polynomials. SIAM, Philadelphia (2018)
6. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd rev. edn. Springer Series in Computational Mathematics, vol. 8. Springer, Berlin (1993)
7. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd rev. edn. Springer Series in Computational Mathematics, vol. 14. Springer, Berlin (1996)
8. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in \(3 + 1\). In: Sabin, J.R., Cabrera-Trujillo, R. (eds.) Advances in Quantum Chemistry, Ch. 11, pp. 265–298. Academic Press, San Diego (2015)
9. Szegö, G.: Orthogonal Polynomials, 4th edn. Colloquium Publications, vol. 23. Am. Math. Soc., Providence (1975)
The authors thank Martin J. Gander for having alerted them to the sensitivity, when n is large, of the eigenvalues of the matrices \(U_{n}\), \(V_{n}\) to small changes in their elements, and they acknowledge helpful correspondence with Frank Stenger.
Department of Computer Science, Purdue University, West Lafayette, USA
Walter Gautschi
Section de mathématiques, Université de Genève, Genève, Switzerland
Ernst Hairer
The authors contributed equally. All authors read and approved the final manuscript.
Correspondence to Walter Gautschi.
Dedicated to Gradimir V. Milovanović on his 70th birthday
Appendix: Relation to Runge–Kutta methods
Let \(x_{1},x_{2}, \ldots ,x_{n}\) be distinct real numbers (typically in the interval \([0,1]\)). The corresponding (collocation) Runge–Kutta method (see [6, Theorem II.7.7]) is then given by the coefficients
$$ a_{jk}= \int _{0}^{x_{j}} \ell _{k}(x)\,\mathrm{d}x, \qquad b_{k}= \int _{0} ^{1} \ell _{k}(x)\, \mathrm{d}x, $$
where \(\ell _{k}(x)\) is the kth elementary Lagrange interpolation polynomial of degree \(n-1\). We collect the coefficients in the \(n\times n\) matrix \(A=(a_{jk})_{j,k=1}^{n}\), in the column vector \(b=(b_{k})_{k=1}^{n}\), and we denote the column vector with all elements equal to 1 by \(\mathbb{1}\).
An application of the Runge–Kutta method with step size h to the Dahlquist test equation \(\dot{y}=\lambda y\) yields (with \(z=h\lambda \))
$$ y_{1}=R(z)\,y_{0}, \qquad R(z)=1+z\,b^{T}(I-zA)^{-1}\mathbb{1}, $$
where \(R(z)\) is the stability function of the method. Note that for an invertible matrix A, its eigenvalues are the reciprocals of the poles of the rational function \(R(z)\).
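A small generic NumPy sketch of this construction (not code from the paper): it builds A and b from given nodes by exact Gauss–Legendre integration of the Lagrange polynomials and, for the Gauss nodes discussed below, checks that all eigenvalues of A, and hence all poles of \(R(z)\), lie in the right half-plane.

```python
import numpy as np

def lagrange_basis(x, k):
    """k-th elementary Lagrange polynomial for the nodes x."""
    p = np.polynomial.Polynomial([1.0])
    for m, xm in enumerate(x):
        if m != k:
            p *= np.polynomial.Polynomial([-xm, 1.0]) / (x[k] - xm)
    return p

def rk_coefficients(x):
    """Collocation RK coefficients a_jk = int_0^{x_j} l_k, b_k = int_0^1 l_k."""
    n = len(x)
    t, w = np.polynomial.legendre.leggauss(n)   # exact up to degree 2n-1
    t, w = (t + 1) / 2, w / 2                   # rule on [0,1]
    A, b = np.empty((n, n)), np.empty(n)
    for k in range(n):
        lk = lagrange_basis(x, k)
        b[k] = np.sum(w * lk(t))
        for j in range(n):
            A[j, k] = x[j] * np.sum(w * lk(x[j] * t))
    return A, b

n = 6
x = (np.polynomial.legendre.leggauss(n)[0] + 1) / 2   # shifted Legendre zeros
A, b = rk_coefficients(x)
eig = np.linalg.eigvals(A)
print(eig.real.min() > 0)   # poles of R(z) are reciprocals of the eigenvalues
```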
The adjoint method of (45) is given by the coefficients (cf. [6, Theorem II. 8.3])
$$ a_{n+1-j,n+1-k}^{*}=b_{k}-a_{jk}= \int _{x_{j}}^{1} \ell _{k}(x) \,\mathrm{d}x, \qquad b_{n+1-k}^{*}=b_{k} . $$
Its stability function is related to that of (45) by \(R^{*}(z)=1/R(-z)\).
Connection to the Stenger conjecture. The \(n\times n\) matrix with coefficients \(a_{jk}\) of (45) is equal to the matrix \(U_{n}\) (with \(a=0\)) of (2) in Sect. 1, and the matrix with coefficients \(a_{jk}^{*}\) of (47) is equal to \(V_{n}\) (with \(b=1\)). Since the nonzero eigenvalues of A are the reciprocals of the poles of the stability function (46), there is a close connection between the Stenger conjecture and A-stability of a Runge–Kutta method.
The (shifted) Legendre polynomials are orthogonal with respect to the constant weight function \(w(x)=1\) on \([0,1]\). The corresponding collocation Runge–Kutta method is the so-called Gauss method of order 2n, which is A-stable (see [7, Section IV.5]). Its stability function is the diagonal Padé approximation \(R_{n,n}(z)\), for which all poles are in the right half of the complex plane. This provides another proof of the Stenger conjecture for Legendre polynomials.
Zeros of orthogonal polynomials
Lagrange interpolation
Matrix eigenvalues
Conjectured location of eigenvalues in the complex plane
Your favorite surprising connections in mathematics
There are certain things in mathematics that have caused me a pleasant surprise -- when some part of mathematics is brought to bear in a fundamental way on another, where the connection between the two is unexpected. The first example that comes to my mind is the proof by Furstenberg and Katznelson of Szemeredi's theorem on the existence of arbitrarily long arithmetic progressions in a set of integers which has positive upper Banach density, but using ergodic theory. Of course in the years since then, this idea has now become enshrined and may no longer be viewed as surprising, but it certainly was when it was first devised.
Another unexpected connection was when Kolmogorov used Shannon's notion of probabilistic entropy as an important invariant in dynamical systems.
So, what other surprising connections are there out there?
big-list
big-picture
$\begingroup$ It should be mentioned that the connection you refer to is due to Furstenberg (ams.org/mathscinet-getitem?mr=498471). Later Furstenberg and Katznelson together used this connection to derive other combinatorial results, including a multidimensional extension of Szemeredi's theorem and a density version of the Hales-Jewett theorem. $\endgroup$
– Joel Moreira
The fact that the circumference of a unit circle is used to normalize the bell curve. Elementary compared to the other examples, yes, but how shocking was it when you first learned it?
answered Feb 22, 2010 at 2:17
Chad Groft
$\begingroup$ To me, this isn't really shocking. It's a natural consequence of the cute (and, yes, maybe even surprising) fact that the square of $\int e^{-x^2}\;dx$ is equal to $\int e^{-(x^2 + y^2)}\;dx\;dy$, the integral of a function whose level sets are circles. $\endgroup$
– Vectornaut
$\begingroup$ @Vectornaut: your point is that this connection can be understood; but it still strikes me as initially surprising. $\endgroup$
– Benoît Kloeckner
$\begingroup$ @BenoîtKloeckner, I see. I was never surprised because, if I recall correctly, I never knew the normalization factor before being shown how to find it. $\endgroup$
$\begingroup$ I think James Stirling may have been the first to know this. $\endgroup$
– Michael Hardy
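(For reference, the computation sketched in the comments above:
$$ \left(\int_{-\infty}^{\infty} e^{-x^{2}}\,dx\right)^{2} =\int_{\mathbb{R}^{2}} e^{-(x^{2}+y^{2})}\,dx\,dy =\int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^{2}}\,r\,dr\,d\theta =\pi , $$
so $\int_{-\infty}^{\infty} e^{-x^{2}}\,dx=\sqrt{\pi}$, and after the substitution $x=t/\sqrt{2}$ the normal density $\frac{1}{\sqrt{2\pi}}e^{-t^{2}/2}$ integrates to $1$; this is where the circle's circumference enters.)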
For every sufficiently large $n$, there exist two binary trees with $n$ internal nodes whose rotation distance is $2n-6$. The proof is unexpected and based on hyperbolic geometry (Sleator, Tarjan, Thurston (1988), "Rotation distance, triangulations, and hyperbolic geometry").
Alexey Ustinov
$\begingroup$ What is $n{}{}$? $\endgroup$
– Gerry Myerson
Taniyama-Shimura-Weil, connecting the error terms in counting the number of points on elliptic curves over finite fields with the Fourier coefficients of modular forms. It's less surprising these days because it's almost as famous as the two things it connects.
Jamie Weigandt
The Curry-Howard isomorphism linking various lambda calculi with intuitionistic logics; its extension to the classic logic via the concept of continuations.
The connection between the Borel hierarchy and the arithmetical hierarchy.
Fagin's theorem --- and later the whole branch of descriptive complexity --- linking well-known complexity classes with logics over finite models.
Michal R. Przybylek
$\begingroup$ Nice timing, I was just thinking about posting points one and three of your list :) $\endgroup$
The connection between homotopy groups of $S^2$, Brunnian braids over the sphere, and Brunnian braids. This knocked me off my chair when I first heard about it. I know no conceptual explanation of this connection.
A. Berrick, F. R. Cohen, Y. L. Wong and J. Wu, Configurations, braids and homotopy groups, J. Amer. Math. Soc., 19 (2006), 265-326. Also available at http://www.math.nus.edu.sg/~matwujie/BCWWfinal.pdf (Wayback Machine) See also http://www.math.nus.edu.sg/~matwujie/cohen.wu.GT.revised.29.august.2007.pdf (Wayback Machine)
Daniel Moskovich
$\begingroup$ I'd like to find a more geometric proof of their result. There's a lot of geometric constructions that lead me to suspect such a result but I haven't found anything quite right. The main idea is to consider the closure of a Brunnian braid then look at things like the Koschorke invariants. mathoverflow.net/questions/234/… $\endgroup$
– Ryan Budney
Feb 8, 2010 at 7:17
Paul Vojta's discovery of the unexpected parallels between value distribution theory (Nevanlinna theory) in complex analysis and Diophantine approximation in number theory. See, e.g., Vojta's paper "Recent Work on Nevanlinna Theory and Diophantine Approximation". Serge Lang and William Cherry discuss the matter in their book Topics in Nevanlinna Theory.
The analogy, still not understood to the full I think, between prime numbers and knots.
See Arithmetic topology in Wikipedia.
A most condensed picture is given by the Kapranov-Reznikov-Mazur dictionary
This is actually closely related to several answers here, and in fact initially I mentioned it in a comment to one of the answers but then still decided to make a separate entry.
მამუკა ჯიბლაძე
I don't know whether people will consider this surprising or not.
I think it may have been in the earliest part of the 20th century that it was shown that random walks in $n$ dimensions are recurrent if $n\le2$ and transient if $n\ge3.$
Then in the 1950s it was shown that the maximum-likelihood estimator of the expected value of a multivariate normal distribution in $n$ dimensions is an admissible estimator, in the decision-theoretic sense, when $n\le2$ but (a surprise) not when $n\ge3.$
Around 1990 or so, Morris L. (Joe) Eaton showed that these two propositions both say essentially the same thing.
$\begingroup$ Interesting - could you provide references? $\endgroup$
– R W
$\begingroup$ @RW jstor.org/stable/2242007 $\endgroup$
$\begingroup$ @RW : "A Statistical Diptych: Admissible Inferences—Recurrence of Symmetric Markov Chains", Morris L. Eaton, The Annals of Statistics, Vol. 20, No. 3 (September, 1992), pp. 1147–1179 (33 pages) $\endgroup$
Root systems, which are completely combinatorial objects, have a lot to do with topological objects, such as compact Lie groups, and linear algebraic objects, such as Lie algebras. Not just that, they classify semisimple ones among them!
I agree with Zavosh that Jones' linking of Von Neumann algebras to knot theory is one of the great connections in modern times. Closer to home for me is Pisier's use of a theorem of Beurling on holomorphic semigroups to prove the duality of type and cotype of B-convex Banach spaces.
There are several surprises regarding convex polytopes:
A) There are combinatorial types of polytopes that cannot be realized with rational coordinates (first discovered by Perles). This is not the case in dimension three, but by now there are examples in every dimension greater than or equal to 4. This adds to several examples of the wild combinatorial nature of convex polytopes in dimension at least 4.
B) The applications of commutative algebra to the study of face numbers of polytopes - Stanley's proof of the upper bound theorem using the Cohen-Macaulay argument, and many subsequent results. Also surprising is the application of algebraic geometry: toric varieties, the hard Lefschetz theorem, intersection homology, etc.
C) It is a special surprise that some proofs regarding the face numbers of polytopes apply only to polytopes with rational coordinates.
That mechanical vibrations (mass-spring-dashpot systems) satisfy the same differential equations as electrical systems (inductor-resistor-capacitor circuits).
John D. Cook
$\begingroup$ Yes! This is a fundamental "surprise". :) $\endgroup$
– paul garrett
$\begingroup$ Maxwell himself presented this connection before he dispensed with it in a later paper. $\endgroup$
– Tom Copeland
Another surprising connection is the Ax-Kochen theorem. Let $\mathcal{F}_{p,n,d}$ denote the set of homogeneous polynomials ("forms") in $n$ variables over the $p$-adics $\mathbb{Q}_p$ of degree $d$. The Ax-Kochen theorem is: For every positive integer $d$ there is a finite set $Y_d$ of "bad" prime numbers such that if $p$ is a "good" prime for $d$ (i.e. not in $Y_d$) then every $f \in \mathcal{F}_{p,d^2+1,d}$ has a non-trivial zero.
This was proved using model theory.
$\begingroup$ This would be easier to read as "for each degree $d$, for all sufficiently large primes $p$, any homogenous polynomial of degree $d$ in at least $d^2+1$ variables has a non-trivial zero in the $p$-adic numbers." $\endgroup$
– Matt F.
Being a physicist I'm still puzzled by the connection between:
Wick theorem -- which is combinatorics (for me).
Multivariate Gaussian integrals -- which is calculus (for me).
Determinants and eigensystems -- which is linear algebra (for me).
Kostya
$\begingroup$ And these are intimately related to the Hermite polynomials whose moments are the perfect matchings of the vertices of the hypertetrahedra/hypertriangles/n-simplices with applications in combinatorics, geometry, analysis, and, of course, physics. $\endgroup$
Grothendieck's dessins d'enfants: the Galois group $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ acts on certain graphs (with a decoration) on 2-dimensional topological surfaces.
I'd like to share the very elementary fact (so elementary that I found it surprising only after I taught a Calculus course) that all the elementary functions are globally analytic. Of course, that's no surprise for polynomials. But I found no intuition for why the trigonometric and exponential functions, as they were originally conceived, turn out to equal their Taylor expansions everywhere. Considering that a Taylor expansion uses only information from an infinitesimal neighborhood of a point, a function which is not originally defined by a power series would seem to have extremely little chance of equalling its Taylor expansion. I don't know if I'm right, but I finally told my students this is really a miracle.
Liren Lin
$\begingroup$ You are completely correct, that this is in some way an astonishing thing. The downvotes are an expression of the absence of this astonishment from the official account of things... So, by accident, it is not surprising that you'd get downvotes. But I think you are perfectly correct... $\endgroup$
$\begingroup$ This is part of a larger miracle, that complex numbers are so useful in maths, and mathematicians tend to forget how miraculous and non-trivial it is @paulgarrett $\endgroup$
– reuns
$\begingroup$ "Official" accounts consist of CURRICULA, in which it is (dishonestly) decreed that EVERYONE must, or should, follow the curriculum, and this practice anesthetizes most people against astonishment, by a mechanism that is blindingly obvious and that we all see in operation daily. That the story of the naked emperor is not exaggerated is seen in the fact that nearly all otherwise intelligent people don't see this. $\endgroup$
$\begingroup$ You seem to take advantage of the fact that elementary function can mean pretty much anything you want it to mean. For example, most of the "elementary functions" listed here are not entire: en.wikipedia.org/wiki/Elementary_function $\endgroup$
The connection between rational homotopy theory and local algebra has been very useful, I was told. See Section 3 of this survey by Kathryn Hess and the references therein, especially Anick's counterexample to a conjecture of Serre.
Hailong Dao
Another post reminded me of the following fact. The Poisson summation formula is a special case of the trace formula. Also the Frobenius reciprocity for finite groups follows from another special case of the trace formula, where the groups in question are finite. I find that these two theorems are related in such a way very surprising.
$\begingroup$ For me, the Frobenius reciprocity formula follows from $\left(A\otimes_R B\right)\otimes_S C\cong A\otimes_R\left(B\otimes_S C\right)$, where $R$ and $S$ are two unital (not necessarily commutative) rings, $A$ is a $\left(\mathbb Z,R\right)$-bimodule, $B$ is a $\left(R,S\right)$-bimodule, and $C$ is a $\left(S,\mathbb Z\right)$-bimodule. The "other" Frobenius formula is simply the trace of the former. Is this what you mean? But then I wouldn't really call it a connection. $\endgroup$
– darij grinberg
$\begingroup$ The connection that I was talking about is the following. The Arthur-Selberg trace formula is an identity of distributions for a pair of groups(with some conditions). When the groups are R and Z, then the trace formula reduces to Poisson summation. When the groups are finite, and with the right choice of a test function, the trace formula reduces to Frobenius reciprocity. $\endgroup$
– MBN
$\begingroup$ There is some subtlety in making the tensor-product associativity be the complete answer... for the topological vector space end of the analogy. Too technical, and maybe not immediately interesting, but P. Cartier's 1973/4 Sem. Bourb. talk/article explains how certain technical points (at a later "perfect" extreme the Dixmier-Malliavin theorem) make heuristics into theorems in such regards. Maybe the fact that the heuristics are "obvious" makes the actual surprise less? $\endgroup$
Stone duality usually refers to the equivalence between the category of Boolean algebras and the category of compact totally disconnected spaces. This duality intertwines the theory of Boolean algebras with general topology so much that Boolean algebras cannot be studied in depth without mentioning general topology and compact totally disconnected spaces cannot be studied in great detail without mentioning their relation with Boolean algebras. For example, the free Boolean algebras and free $\sigma$-complete Boolean algebras are normally represented not in terms of generators and relations, but as clopen sets (Baire sets) on the cantor cube $2^{I}$ for some set $I$.
Stone duality was originally a very surprising result, and it is probably a bit surprising to people seeing this result for the first time as well. Around 1937 when Marshall Stone formulated this duality it was difficult to imagine nice topological spaces that arose from algebraic structures rather than geometric or analytic structures.
Besides Stone duality, there are many dualities (equivalences of categories) similar in nature to Stone duality that relate different structures to each other and hence relate different areas of mathematics to each other (I have developed some of these dualities myself). For instance, one can relate topologies satisfying higher separation axioms with topologies that are not even $T_{1}$. One can also relate structures such as proximity spaces and uniform spaces with algebras of sets and Boolean algebras. There are also many dualities relating different in order theory to each other.
Joseph Van Name
$\begingroup$ Here's my terse expository account of Stone's duality: the totally disconnected compact Hausdorff space associated with a Boolean algebra $A$ is the space of all homomorphisms from $A$ into the two-element Boolean algebra, with the topology of pointwise convergence of nets of such homomorphisms. The Boolean algebra associated with a totally disconnected compact Hausdorff space is the set of all clopen subsets with the meet and join operations. To every homomorphism of Boolean algebras there naturally corresponds a continuous mapping beteween these spaces, going in the opposite direction. $\endgroup$
The connection between the sphere packing problem and modular forms which was brought to light by recent breakthrough work of Viazovska (https://arxiv.org/abs/1603.04246) is very surprising, in my opinion.
Sam Hopkins
$\begingroup$ See also this survey from Henry Cohn: arxiv.org/abs/1611.01685. $\endgroup$
– Sam Hopkins
I believe the way R. Schoen solved Yamabe problem (after the contributions of Yamabe, Trudinger, Obata and Aubin) is truly impressive: after a long series of computations, he unexpectedly related the constant term in the expansion of certain Green functions associated to Yamabe problem (a Differential Geometry problem) with the so-called ADM mass in General Relativity (from Mathematical Physics); thus, he "reduced" the (remaining cases of) Yamabe problem to the infamous positive mass theorem, a result S.-T. Yau and himself proved (using Differential Geometry) to answer a (seemingly unrelated) central problem in General Relativity. See the survey of Lee and Parker for a nice account on this surprising connection between Differential Geometry and General Relativity.
$\begingroup$ Well but... isn't GR a part of DG in some sense? $\endgroup$
– Qfwfq
The chromatic number of the Kneser graph $KG_{n,k}$ is exactly $n-2k+2$. There is a very simple proof based on the Borsuk-Ulam theorem.
Arseniy Akopyan
The surprising application of algebra to the problem of classifying manifolds and topological spaces, from which arose such concepts as the fundamental group, homology groups, etc.
I think a lot of things will be "surprising" like this. The creation of most of the important topics or active areas of research in math arose out of some such "surprising" connection.
edited Feb 8, 2010 at 0:21
$\begingroup$ No, I don't think that everything is really surprising. There are a lot of theorems that have been based on hard work, but within the existing circle of ideas surrounding that result. What I'm after is when disparate parts of mathematics are brought together in unexpected ways. $\endgroup$
– Victor Miller
$\begingroup$ The fundamental group and homology were designed to study manifolds. So again it isn't remotely surprising that they'd be useful in the study of manifolds. $\endgroup$
$\begingroup$ Yes of course. But I meant, if you look at a manifold, who on earth would imagine that a group or module over a ring would help to classify them? When I took my first course in topology, this notion was a big surprise to me. $\endgroup$
– Feb7
$\begingroup$ Well personally it was surprising for me. I knew something about point set topology from one book, and from the next book I knew about groups, rings, linear algebra and so on. Then I go and sit in algebraic topology course because it was mandatory for some reason, and lo and behold! $\endgroup$
$\begingroup$ Hi Ryan, would you consider it obvious that the obstruction to promoting a homotopy equivalence to a simple homotopy equivalence should live in a group? And if so, could you have guessed which group? I think there are many places in topology where algebra is surprisingly effective. And for at least half a century mathematicians studied topological spaces, and manifolds in particular, before beginning to apply algebra to these questions. In 1942 the field was still referred to, at least by some, as "combinatorial topology" rather than "algebraic topology". $\endgroup$
– Tom Church
The inverse of the calculus of slopes (differentiation) is the calculation of areas (integration).
Barrow's Lemma: https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
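In modern notation, the two halves of the theorem:
$$ \frac{d}{dx}\int_{a}^{x} f(t)\,dt = f(x), \qquad \int_{a}^{b} F'(x)\,dx = F(b)-F(a). $$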
mikitov
Application of ``thermodynamic formalism'' to questions of analysis. Thermodynamic formalism has its origin in equilibrium statistical mechanics. The first unexpected thing was its application to the theory of smooth dynamical systems; see Ruelle's beautiful paper "Is our mathematics natural?" in the BAMS. Later, unexpected applications were discovered to problems of analysis which have nothing to do with dynamical systems, statistical mechanics, or mathematical physics. One example is Astala's theorem on the area distortion under quasiconformal mappings. There is a very simple, self-contained proof of this theorem in MR1283548, using no dynamical considerations. But it is hard to imagine how this proof could have been invented without dynamical and "thermodynamical" considerations.
Alexandre Eremenko
$\begingroup$ Of course (:-) equilibrium statistical mechanics (general Ising models) is applied to topological knot theory. $\endgroup$
– Włodzimierz Holsztyński
This is much fuzzier than many of the other answers, but the connections between graph theory, arithmetic, and geometry are breathtaking. (IMHO, anyone working anywhere even close to the intersection of these fields who hasn't read [at least some of] Serre's Trees needs to. Really everyone should read Trees though.)
Harrison Brown
Stumbled on the following a couple of days ago, when searching for a good picture of a general 3-step filtration in an abelian category (in fact, there are similar structures in triangulated categories, which is what I am ultimately after):
After feasting my eyes on it for a while I suddenly realized that what I am actually staring at is the Desargues configuration (in the form of five generic planes in 3-space)!
Not sure if this has any significance or whether one can do anything with it, but I certainly find it amusing.
$\begingroup$ I guess it also solves the problem of placing 9 points in the plane so there are 9 lines with 3 points on a line. $\endgroup$
$\begingroup$ @GerryMyerson If I'm not mistaken that one is the Pappus configuration. Desargues' is $10_310_3$, i. e. ten points three on a line, ten lines three at a point (as five generic planes in 3-space produce $\binom53$ points and $\binom52$ lines intersecting like that) $\endgroup$
– მამუკა ჯიბლაძე
$\begingroup$ Yes, I missed a line. 10 it is. $\endgroup$
$\begingroup$ I don't know what the precise definition of "$10_3 10_3$" is, but there exists at least one configuration of 10 lines and 10 points with each line passing through three of the 10 points and each point lying on three of the 10 lines, that is not incidence-isomorphic to the Desargues configuration, so I wonder whether that notation is enough to specify a particular configuration. $\endgroup$
$\begingroup$ @MichaelHardy Thank you! I should remember, definitely have seen it before. What I did not notice before is that Wikipedia says there are still seven more! $\endgroup$
It seems that no one gave this one yet, although it probably hides behind many of previous answers.
The fact that in $\mathbb{C}$, multiplication by a fixed complex number corresponds to a similarity is an incredible and far-reaching connection between algebra and geometry.
Among other things, it ties holomorphic functions with conformal maps of surfaces, so that for example one can identify a Riemann surface with a surface having a Riemannian metric of curvature $-1$, $0$ or $1$; more generally it allows for the use of complex analysis to study a number of problems in the geometry of surfaces.
Benoît Kloeckner
$\begingroup$ I originally learned this as the DEFINITION of multiplication of complex numbers, so when I encountered it presented as a theorem, I found it puzzling and wondered what the difference was between the theorem and the definition. $\endgroup$
The amazing connection between $\eta$-identities and affine root systems, due to Macdonald and further elaborated upon by Kac! These identities encompass the Jacobi triple product identity, Euler's pentagonal number identity and many others. And these have connections to complex simple Lie algebras.
The connection between 'Electric-magnetic duality in string theory' and Langlands Program. See e.g. Witten-Kapustin. Not exactly a connection between two mathematics areas, but I think it nevertheless partially qualifies.
Games
Edited by C. Thi Nguyen (University of Utah)
About this topic
Summary What is a game, and what is the value of games in human life? For some, playing games is a trivial endeavor. For others, playing games can turn out to be quite valuable, and a central part of our lives. The discussion of the nature and the value of games has been conducted in several different fields, both in philosophy and next to it. In the philosophy of art, philosophers have focused largely on questions of whether games are a form of art, and if so, what their relationship is to other more familiar art forms. Some have argued that videogames are a kind of fiction, or interactive cinema. In the philosophy of sport, philosophers have focused on questions of the value and purpose of sport. There, philosophers have suggested, variously, that the purpose of sport is to develop human excellence, or offer a venue for human achievement, or to create opportunities for dramas of hope and redemption. Much of the discussion of games has occurred in the interdisciplinary field called Game Studies, whose roots lie largely in various literary-critical and anthropological approaches, often emphasizing approaches from continental philosophy.
Key works The modern discussion about games is usually taken to proceed from Johan Huizinga's Homo Ludens. There, Huizinga suggests that games are connected with theater, sport, and religious ritual, in being set apart from ordinary life, inside a magic circle of play. Roger Caillois' Man, Play, and Games offers a pluralist view of play, distinguishing between competitive play, mimetic play, luck play, and vertigo play. In analytic philosophy, the central work is Bernard Suits' The Grasshopper (Suits & Hurka 1978). Suits there claims that games are the voluntary attempt to overcome unnecessary obstacles. Suits' account ends with an argument that games are the purpose of life - since, in utopia, all we would do with our time is to play games. Thomas Hurka has offered an extension of Suits' account, whereby the value of games is to be spelled out in terms of difficult achievement (Hurka 2006). This style of account has recently been developed in great detail by Gwen Bradford (Bradford 2015). In the philosophy of sport, some have argued that there are norms of play, which arise from the distinctive aim of sport - what is called the ethos of sport. Robert Simon has argued that the ethos of sport can be derived from looking at what the rules aim at (Simon 2000). J. S. Russell has argued that the ethos of sport is the development of human excellence (Russell 2004); William Morgan has offered some crucial critical responses (Morgan 2004). In the philosophy of art, the discussion has centered around whether games are art, and, if so, what art form they might be. Grant Tavinor has developed an account of games as a form of fiction (Tavinor 2009). Dominic Lopes has developed an account of interactive computer art (Lopes 2009). The philosophical discussion of videogames has also raised some key questions in ethics, especially the interactive representation of evil acts (Luck 2009, Bartel 2012, Patridge 2013). Maria Lugones' influential account of play as shifting between worlds includes an important criticism of competitive games (Lugones 1987). Philosophers should also certainly take note of interdisciplinary work in Game Studies. Key figures in that field include Janet Murray, Espen Aarseth, Gonzalo Frasca, Markku Eskelinen, Mary Flanagan, Mia Consalvo, Jaakko Stenros, Jane McGonigal, Ian Bogost (Bogost 2007), and Miguel Sicart (Sicart 2009). Early work in that field focused on the so-called "ludology vs. narratology" wars, which concerned whether games should primarily be approached as a form of narrative, or as a unique, non-narrative artifact. A good place to start with Game Studies is Jesper Juul's well-known book, Half-Real: Video Games Between Real Rules and Fictional Worlds.
Introductions C. Thi Nguyen's recent Philosophy Compass article, "Philosophy of Games" offers a survey of recent work in the philosophy of games, surveying work in game studies, the philosophy of sport, aesthetics, and applied ethics (Nguyen 2017). Grant Tavinor's Philosophy Compass, "Videogames and Aesthetics", covers specific issues in the aesthetics of video games (Tavinor 2010). Jaakko Stenros's "In Defense of a Magic Circle: The Social and Mental Boundaries of Play" offers a critical survey of the concept of a "magic circle" - that is, the idea that games and play occupy a special separate space, separated from ordinary life. Randolph Feezell's "A Pluralist Conception of Play" offers a useful survey of the philosophy of play (Feezell 2010). Jesper Juul's Half-Real: Video Games Between Real Rules and Fictional Worlds is an excellent introduction to work in the interdisciplinary field of Game Studies.
Provably Games. J. P. Aguilera & D. W. Blue - forthcoming - Journal of Symbolic Logic:1-22.
We isolate two abstract determinacy theorems for games of length $\omega_1$ from work of Neeman and use them to conclude, from large-cardinal assumptions and an iterability hypothesis in the region of measurable Woodin cardinals, that: if the Continuum Hypothesis holds, then all games of length $\omega_1$ which are provably $\Delta_1$-definable from a universally Baire parameter are determined; all games of length $\omega_1$ with payoff constructible relative to the play are determined; and, if the Continuum Hypothesis holds, then there is a model of ${\mathsf{ZFC}}$ containing all reals in which all games of length $\omega_1$ definable from real and ordinal parameters are determined.
Games in Social and Political Philosophy
Mathematical Logic in Formal Sciences
Dialogue Games for Minimal Logic. Alexandra Pavlova - forthcoming - Logic and Logical Philosophy:1.
In this paper, we define a class of dialogue games for Johansson's minimal logic and prove that it corresponds to the validity of minimal logic. Many authors have stated similar results for intuitionistic and classical logic either with or without actually proving the correspondence. Rahman, Clerbout and Keiff [17] have already specified dialogues for minimal logic; however, they transformed it into Fitch-style natural deduction only. We propose a different specification for minimal logic with the proof of correspondence between the existence of winning strategies for the Proponent in this class of games and the sequent calculus for minimal logic.
Logics, Misc in Logic and Philosophy of Logic
Proof Theory in Logic and Philosophy of Logic
Game, Player, Ethics: A Virtue Ethics Approach to Computer Games. Miguel Angel Sicart Vila - forthcoming - International Review of Information Ethics.
Moral Character in Normative Ethics
Interactivity, Fictionality, and Incompleteness. Nathan Wildman & Richard Woodward - forthcoming - In Grant Tavinor & Jon Robson (eds.), The Aesthetics of Videogames. Routledge.
Digital Video in Aesthetics
Philosophy of Specific Arts in Aesthetics
Truth in Fiction in Aesthetics
Games and the Fluidity of Layered Agency. Luca Ferrero - 2021 - Journal of the Philosophy of Sport 48 (3):344-355.
What can the philosophy of agency learn from Nguyen's book on games? The most important lesson concerns, to use Nguyen's terms, the 'layered' structure of our agency and the 'fluidity' requ...
Agency in Philosophy of Action
Philosophy of Sport in Social and Political Philosophy
Fictional Games and Utopia: The Case of Azad. Stefano Gualeni - 2021 - Science Fiction Film and Television 14 (2):187-207.
'Fictional games' are playful activities and ludic artefacts that were conceptualised to be part of fictional worlds. These games cannot – or at least were not originally meant to – be actually played. This interdisciplinary article discusses fictional games, focusing on those appearing in works of sf. Its objective is that of exploring how fictional games can function as utopian devices. Drawing on game studies, utopian studies, and sf studies, the first half of the article introduces the notion of fictional games and provides an initial articulation of their utopian potential. The second half focuses, instead, on the analysis of one (science-)fictional game in particular: the game of Azad, described in Iain M. Banks's 1988 sf novel The Player of Games. This analysis is instrumental in clarifying the utopian qualities that are inherent in the activity of play such as its being uncertain and contingent. By presenting relationships of power through a game (and, finally, as a game), utopian fictional games such as Azad serve as a reminder that every socio-political situation – even the most dystopian ones – is ultimately indeterminate, and retains the possibility of change.
Fiction, Misc in Aesthetics
Political Realism and Utopianism in Social and Political Philosophy
Ludic Unreliability and Deceptive Game Design. Stefano Gualeni & Nele Van de Mosselaer - 2021 - Journal of the Philosophy of Games 3 (1):1-22.
Drawing from narratology and design studies, this article makes use of the notions of the 'implied designer' and 'ludic unreliability' to understand deceptive game design as a specific sub-set of transgressive game design. More specifically, in this text we present deceptive game design as the deliberate attempt to misguide players' inferences about the designers' intentions. Furthermore, we argue that deceptive design should not merely be taken as a set of design choices aimed at misleading players in their efforts to understand the game, but also as decisions devised to give rise to experiential and emotional effects that are in the interest of players. Finally, we propose to introduce a distinction between two varieties of deceptive design approaches based on whether they operate in an overt or a covert fashion in relation to player experience. Our analysis casts light on expressive possibilities that are not customarily part of the dominant paradigm of user-centered design, and can inform game designers in their pursuit of wider and more nuanced creative aspirations.
Deception in Applied Ethics
Design in Aesthetics
Video Games in Aesthetics
Existential Ludology and Peter Wessel Zapffe. Stefano Gualeni & Daniel Vella - 2021 - In Victor Navarro-Remesal & Oliver Perez-Latorre (eds.), Perspectives on the European Videogame. Amsterdam (The Netherlands): Amsterdam University Press. pp. 175-192.
A relatively common approach in game studies understands gameworlds as constituting an existential situation for the player. Taking that stance, which is rooted in the European philosophical tradition of Existentialism, in this chapter we investigate the relationships and similarities between our existence within and without gameworlds. To do so, we first provide a review of existing literature in 'existential ludology' - work in game studies which considers our engagement with gameworlds from an existential perspective. In the second part of the chapter, we then engage with some of the most notable ideas of the Norwegian philosopher Peter Wessel Zapffe. Zapffe understood human life as inherently meaningless and identified four ways in which human beings typically protect themselves from the existential panic that accompanies the awareness of that meaninglessness: isolation, anchoring, distraction, and sublimation. These four categories are used as the foundation for an examination of gameworlds as technologies for repressing existential panic.
Existentialism in Continental Philosophy
How Twitter Gamifies Communication. C. Thi Nguyen - 2021 - In Jennifer Lackey (ed.), Applied Epistemology. Oxford University Press. pp. 410-436.
Twitter makes conversation into something like a game. It scores our communication, giving us vivid and quantified feedback, via Likes, Retweets, and Follower counts. But this gamification doesn't just increase our motivation to communicate; it changes the very nature of the activity. Games are more satisfying than ordinary life precisely because game-goals are simpler, cleaner, and easier to apply. Twitter is thrilling precisely because its goals have been artificially clarified and narrowed. When we buy into Twitter's gamification, then our values shift from the complex and pluralistic values of communication, to the narrower quest for popularity and virality. Twitter's gamification bears some resemblance with the phenomena of echo chambers and moral outrage porn. In all these phenomena, we are instrumentalizing our ends for hedonistic reasons. We have shifted our aims in an activity, not because the new aims are more valuable, but in exchange for extra pleasure.
Philosophy of Technology in Philosophy of Computing and Information
Social Epistemology in Epistemology
The Canonical Pairs of Bounded Depth Frege Systems. Pavel Pudlák - 2021 - Annals of Pure and Applied Logic 172 (2):102892.
The canonical pair of a proof system P is the pair of disjoint NP sets where one set is the set of all satisfiable CNF formulas and the other is the set of CNF formulas that have P-proofs bounded by some polynomial. We give a combinatorial characterization of the canonical pairs of depth d Frege systems. Our characterization is based on certain games, introduced in this article, that are parametrized by a number k, also called the depth. We show that the canonical pair of a depth d Frege system is polynomially equivalent to the pair $(A_{d+2},B_{d+2})$ where $A_{d+2}$ (respectively, $B_{d+1}$) are depth $d+1$ games in which Player I (Player II) has a positional winning strategy. Although this characterization is stated in terms of games, we will show that these combinatorial structures can be viewed as generalizations of monotone Boolean circuits. In particular, depth 1 games are essentially monotone Boolean circuits. Thus we get a generalization of the monotone feasible interpolation for Resolution, which is a property that enables one to reduce the task of proving lower bounds on the size of refutations to lower bounds on the size of monotone Boolean circuits. However, we do not have a method yet for proving lower bounds on the size of depth d games for $d>1$.
Computational Complexity in Philosophy of Computing and Information
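Since the abstract above only paraphrases the notion, it may help to record the standard formalization of the canonical pair used in the proof-complexity literature. The encoding below follows the usual convention (going back to Razborov) and is my gloss, not quoted from Pudlák's paper:

```latex
% Canonical pair of a propositional proof system P (standard convention;
% the notation is an illustrative gloss, not quoted from the paper).
\[
  \mathrm{REF}(P) \;=\; \bigl\{ (\varphi, 1^m) \;:\; \varphi \text{ has a } P\text{-refutation of size} \le m \bigr\},
\]
\[
  \mathrm{SAT}^{*} \;=\; \bigl\{ (\varphi, 1^m) \;:\; \varphi \text{ is a satisfiable CNF} \bigr\}.
\]
% Both sets are in NP, and soundness of P makes them disjoint;
% the canonical pair of P is (REF(P), SAT*).
```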
Aesthetics Naturalised: Schlick on the Evolution of Beauty and Art. Andreas Vrahimis - 2021 - Archiv für Geschichte der Philosophie.
In his earliest philosophical work, Moritz Schlick developed a proposal for rendering aesthetics into a field of empirical science. His 1908 book Lebensweisheit developed an evolutionary account of the emergence of both scientific knowledge and aesthetic feelings from play. This constitutes the framework of Schlick's evolutionary psychological methodology for examining the origins of the aesthetic feeling of the beautiful he proposed in 1909. He defends his methodology by objecting to both experimental psychological and Darwinian reductionist accounts of aesthetics. Having countered these approaches, Schlick applies Külpe's psychological distinction between stimulus-feelings and idea-feelings to collapse the traditional philosophical opposition between the agreeable and the beautiful. Both types of feeling, Schlick argues, result from humans' adaptation to their environment. Because of this adaptation, feelings that were once only stimuli for action can come to be enjoyed for their own sake. This thesis underlies Schlick's 1908 argument that art, qua mimesis, is necessarily inferior to aesthetic feelings directed towards the environment. Part of Schlick's justification for this view is that humans are, through a long evolutionary process, better adapted to their environment than to artworks. Schlick nevertheless concedes that mimetic art can involve ways of abstracting from the objects it copies to produce idealised regularities that are not found in the original. Schlick thus concludes that art teaches its audience how to perceive the world in this abstract and idealised manner. This type of environmental aesthetics constitutes a means for reaching Schlick's utopian ecological vision of a future in which culture will become harmonised with nature.
Biological Sciences in Natural Sciences
Darwinism in Philosophy of Biology
Evolutionary Epistemology in Epistemology
Evolutionary Psychology in Philosophy of Cognitive Science
Logical Empiricism in 20th Century Philosophy
Psychology in Cognitive Sciences
Fσ Games and Reflection in L. J. P. Aguilera - 2020 - Journal of Symbolic Logic: 1-22.
Video Games, Violence, and the Ethics of Fantasy: Killing Time. Christopher Bartel - 2020 - London: Bloomsbury Academic.
Is it ever morally wrong to enjoy fantasizing about immoral things? Many video games allow players to commit numerous violent and immoral acts. But, should players worry about the morality of their virtual actions? A common argument is that games offer merely the virtual representation of violence. No one is actually harmed by committing a violent act in a game. So, it cannot be morally wrong to perform such acts. While this is an intuitive argument, it does not resolve the issue. Focusing on why individual players are motivated to entertain immoral and violent fantasies, Video Games, Violence, and the Ethics of Fantasy advances debates about the ethical criticism of art, not only by shining light on the interesting and under-examined case of virtual fantasies, but also by its novel application of a virtue ethical account. Video games are works of fiction that enable players to entertain a fantasy. So, a full understanding of the ethical criticism of video games must focus attention on why individual players are motivated to entertain immoral and violent fantasies. Video Games, Violence, and the Ethics of Fantasy engages with debates and critical discussions of games in both the popular media and recent work in philosophy, psychology, media studies, and game studies.
Aesthetics and Ethics in Aesthetics
Applied Virtue Ethics in Normative Ethics
Computer Ethics, Misc in Applied Ethics
Internet Ethics in Applied Ethics
Moral Responsibility in Meta-Ethics
Philosophy of Specific Arts, Misc in Aesthetics
Virtual Pet: Trends of Development. Daria Bylieva, Nadezhda Almazova, Victoria Lobatyuk & Anna Rubtsova - 2020 - Advances in Intelligent Systems and Computing 1114:545-554.
Information technologies are fundamentally changing modern society. Almost any human activity, including caring for a pet, is acquiring new formats related to communication in the virtual space. The authors analyzed the phenomenon of the virtual pet, which has been developing since the early 1990s, on the basis of more than 100 different virtual pet modifications. Those most popular among users, purchased more than 1 million times a year around the world, are examined in detail in this research (the video games Petz, Tamagotchi, Furby, Nintendogs, Neopets, My Talking Tom). This study thus traces the evolution of the digital pet. We analyzed how a person models it according to his needs and identified the main trends of virtual pet development. These basic directions are: increased similarity to a real pet, development of the entertainment aspect, and application of the virtual pet for practical purposes. We proposed four main factors for analyzing changes in the human-virtual pet relationship: touch, game, communication, and social interaction concerning the pet.
Communication in Social Sciences
Simulation and Reality in Philosophy of Computing and Information
Social Relationships, Misc in Social and Political Philosophy
Code is Law: Subversion and Collective Knowledge in the Ethos of Video Game Speedrunning. Michael Hemmingsen - 2020 - Sport, Ethics and Philosophy 15 (3):435-460.
Speedrunning is a kind of 'metagame' involving video games. Though it does not yet have the kind of profile of multiplayer e-sports, speedrunning is fast approaching e-sports in popularity. Aside from audience numbers, however, from the perspective of the philosophy of sport and games, speedrunning is particularly interesting. To the casual player or viewer, speedrunning appears to be a highly irreverent, even pointless, way of playing games, particularly due to the incorporation of "glitches". For many outside the speedrunning community, the use of glitches appears to be cheating. For speedrunners, however, glitches are entirely within the bounds of acceptability. Because of this, however, speedrunning frequently involves sidestepping what are typically taken to be the core challenges of the game. By examining the distinction between the use of glitches and cheating in speedrunning, we can gain a greater understanding of the unique ethos of this activity; that is, we can make sense of what fundamentally constitutes speedrunning as a metagame. I argue that by understanding the code of the game not as rules but as physics, and by examining what actions are deemed impermissible by the speedrunning community – such as hardware modification and hacking – we can see that the ethos of speedrunning has three components: constitutive skills (including dexterity, memorisation and mental fortitude); a collective, fine-grained knowledge of the game; and the desire to subvert the intentions of the programmers. Each of these components limits and structures the earlier ones: collective knowledge takes priority over constitutive skills, and subversion takes priority over both. These three components form the ethos that structures speedrunning as a metagame, expressing what speedrunners take to be its central aim.
eSports in Social and Political Philosophy
Cheaters Never Prosper? Winning by Deception in Purely Professional Games of Pure Chance. Michael Hemmingsen - 2020 - Sport, Ethics and Philosophy 15 (2):266-284.
I argue that in purely professional games of pure chance, such as slot machines, roulette, baccarat or pachinko, any instance of cheating that successfully deceives the judge can be 'part of the game'. I examine, and reject, various proposals for the 'ethos' that determines how we ought to interpret the formal rules of games of pure chance, such as being a test of skill, a matter of entertainment, a display of aesthetic beauty, an opportunity for hedonistic pleasure, and a fraternal activity. Ultimately, I argue that 'winning the benefit' is the only ethos that can apply in purely professional games of pure chance, and that if we interpret the formal rules according to this ethos, cheating that is undertaken with respect for the judge's authority, but that attempts to cause the judge of the game to 'voluntarily' relinquish the benefit of the game by deceiving them into thinking that the formal rules of the game have been followed, is impermissible but acceptable cheating, and is therefore within, rather than outside, the game. Here, I define 'games of pure chance' as games in which chance is the only determinant of winning.
Games: Agency as Art. C. Thi Nguyen - 2020 - New York: Oxford University Press.
Games occupy a unique and valuable place in our lives. Game designers do not simply create worlds; they design temporary selves. Game designers set what our motivations are in the game and what our abilities will be. Thus: games are the art form of agency. By working in the artistic medium of agency, games can offer a distinctive aesthetic value. They support aesthetic experiences of deciding and doing. And the fact that we play games shows something remarkable about us. Our agency is more fluid than we might have thought. In playing a game, we take on temporary ends; we submerge ourselves temporarily in an alternate agency. Games turn out to be a vessel for communicating different modes of agency, for writing them down and storing them. Games create an archive of agencies. And playing games is how we familiarize ourselves with different modes of agency, which helps us develop our capacity to fluidly change our own style of agency.
Aesthetics, Miscellaneous in Aesthetics
Art and Artworks in Aesthetics
Pop Culture in Aesthetics
Practical Reason in Philosophy of Action
The Arts of Action. C. Thi Nguyen - 2020 - Philosophers' Imprint 20 (14):1-27.
The theory and culture of the arts has largely focused on the arts of objects, and neglected the arts of action – the "process arts". In the process arts, artists create artifacts to engender activity in their audience, for the sake of the audience's aesthetic appreciation of their own activity. This includes appreciating their own deliberations, choices, reactions, and movements. The process arts include games, urban planning, improvised social dance, cooking, and social food rituals. In the traditional object arts, the central aesthetic properties occur in the artistic artifact itself. It is the painting that is beautiful; the novel that is dramatic. In the process arts, the aesthetic properties occur in the activity of the appreciator. It is the game player's own decisions that are elegant, the rock climber's own movement that is graceful, and the tango dancers' rapport that is beautiful. The artifact's role is to call forth and shape that activity, guiding it along aesthetic lines. I offer a theory of the process arts. Crucially, we must distinguish between the designed artifact and the prescribed focus of aesthetic appreciation. In the object arts, these are one and the same. The designed artifact is the painting, which is also the prescribed focus of appreciation. In the process arts, they are different. The designed artifact is the game, but the appreciator is prescribed to appreciate their own activity in playing the game. Next, I address the complex question of who the artist really is in a piece of process art — the designer or the active appreciator? Finally, I diagnose the lowly status of the process arts.
Aesthetic Qualities in Aesthetics
Artworks in Aesthetics
Dance in Aesthetics
The Value of Art in Aesthetics
Constitutive Rules: Games, Language, and Assertion. Indrek Reiland - 2020 - Philosophy and Phenomenological Research 100 (1):136-159.
Many philosophers think that games like chess, languages like English, and speech acts like assertion are constituted by rules. Lots of others disagree. To argue over this productively, it would first be useful to know what it would be for these things to be rule-constituted. Searle famously claimed in Speech Acts that rules constitute things in the sense that they make possible the performance of actions related to those things (Searle 1969). On this view, rules constitute games, languages, and speech acts in the sense that they make possible playing them, speaking them, and performing them. This raises the question of what it is to perform rule-constituted actions (e.g. play, speak, assert) and the question of what makes constitutive rules distinctive such that only they make possible the performance of new actions (e.g. playing). In this paper I will criticize Searle's answers to these questions. However, my main aim is to develop a better view, explain how it works in the case of each of games, language, and assertion, and illustrate its appeal by showing how it enables rule-based views of these things to respond to various objections.
Constitutive Rules in Social Ontology in Social and Political Philosophy
Norms of Assertion in Philosophy of Language
Rule-Based Theories of Meaning in Philosophy of Language
Rule-Following in Philosophy of Mind
Ethik des Computerspielens: Eine Grundlegung. Samuel Ulbricht - 2020 - Heidelberg, Germany: Springer Berlin - J.B. Metzler.
Despite the growing number of computer game players worldwide, the moral classification of actions in computer games remains an unsolved puzzle of philosophical ethics. Given how charged the topic is in everyday life (as seen in the German 'killer games' debate), it is evident that the phenomenon requires careful scholarly clarification: Can playing computer games be immoral? To answer this question, the author first examines what we are actually doing when we play computer games: What kind of action are we talking about? In a second step, he offers a moral classification that works out whether (and if so, why) some actions in computer games are morally problematic. The considerations presented here provide a fundamental insight into the normative dimension of computer game play.
Me and My Avatar: Player-Character as Fictional Proxy. Matt Carlson & Logan Taylor - 2019 - Journal of the Philosophy of Games 1.
Players of videogames describe their gameplay in the first person, e.g. "I took cover behind a barricade." Such descriptions of gameplay experiences are commonplace, but also puzzling because players are actually just pushing buttons, not engaging in the activities described by their first-person reports. According to a view defended by Robson and Meskin (2016), which we call the fictional identity view, this puzzle is solved by claiming that the player is fictionally identical with the player character. Hence, on this view, if the player-character fictionally performs an action then, fictionally, the player performs that action. However, we argue that the fictional identity view does not make sense of players' gameplay experiences and their descriptions of them. We develop an alternative account of the relationship between the player and player-character on which the player-character serves as the player's fictional proxy, and argue that this account makes better sense of the nature of videogames as interactive fictions.
Weakly Aggregative Modal Logic: Characterization and Interpolation. Jixin Liu, Yanjing Wang & Yifeng Ding - 2019 - In Patrick Blackburn, Emiliano Lorini & Meiyun Guo (eds.), Logic, Rationality, and Interaction: 7th International Workshop, LORI 2019, Chongqing, China, October 18–21, 2019, Proceedings. pp. 153-167.
Weakly Aggregative Modal Logic (WAML) is a collection of disguised polyadic modal logics with n-ary modalities whose arguments are all the same. WAML has some interesting applications on epistemic logic and logic of games, so we study some basic model-theoretical aspects of WAML in this paper. Specifically, we give a van Benthem-Rosen characterization theorem of WAML based on an intuitive notion of bisimulation and show that each basic WAML system $K_n$ lacks Craig interpolation.
Epistemic Logic in Logic and Philosophy of Logic
Modal and Intensional Logic in Logic and Philosophy of Logic
The Right Way to Play a Game. C. Thi Nguyen - 2019 - Game Studies 19 (1).
Is there a right or wrong way to play a game? Many think not. Some have argued that, when we insist that players obey the rules of a game, we give too much weight to the author's intent. Others have argued that such obedience to the rules violates the true purpose of games, which is fostering free and creative play. Both of these responses, I argue, misunderstand the nature of games and their rules. The rules do not tell us how to interpret a game; they merely tell us what the game is. And the point of the rules is not always to foster free and creative play. The point can be, instead, to communicate a sculpted form of activity. And in games, as with any form of communication, we need some shared norms to ground communicative stability. Games have what has been called a "prescriptive ontology." A game is something more than simply a piece of material. It is some material as approached in a certain specified way. These prescriptions help to fix a common object of attention. Games share this prescriptive ontology with more traditional kinds of works. Novels are more than just a set of words on a page; they are those words read in a certain order. Games are more than just some software or cardboard bits; they are those bits interacted with according to certain rules. Part of a game's essential nature is the prescriptions for how we are to play it. What's more, if we investigate the prescriptive ontology of games, we will uncover at least three distinct prescriptive categories of games. Party games prescribe that we encounter the game once; heavy strategy games prescribe that we encounter the game many times; and community evolution games prescribe that we encounter the game while embedded in an ongoing community of play.
The Interpretation of Art in Aesthetics
Games and the Art of Agency. C. Thi Nguyen - 2019 - Philosophical Review 128 (4):423-462.
Games may seem like a waste of time, where we struggle under artificial rules for arbitrary goals. The author suggests that the rules and goals of games are not arbitrary at all. They are a way of specifying particular modes of agency. This is what makes games a distinctive art form. Game designers designate goals and abilities for the player; they shape the agential skeleton which the player will inhabit during the game. Game designers work in the medium of agency. Game-playing, then, illuminates a distinctive human capacity. We can take on ends temporarily for the sake of the experience of pursuing them. Game play shows that our agency is significantly more modular and more fluid than we might have thought. It also demonstrates our capacity to take on an inverted motivational structure. Sometimes we can take on an end for the sake of the activity of pursuing that end.
Topics in Aesthetics in Aesthetics
"Shining Lights, Even in Death": What Metal Gear Can Teach Us About Morality (Master's Thesis).Ryan Wasser - 2019 - Dissertation, West Chester Universitydetails
Morality has always been a pressing issue in video game scholarship, but became more contentious after "realistic" violence in games became possible. However, few studies concern themselves with how players experience moral dilemmas in games, choosing instead to focus on the way games affect postplay behavior. In my thesis I discuss the moral choices players encounter in the Metal Gear series of games; then, I analyze and compare the responses of players with and without martial career experiences. My argument is (...) that the moral choices players encounter during gameplay affect them differently, particularly if they have life experiences related to medical trauma, law enforcement, fire fighting, or military career fields, and that the behavior of those players will be observably different from players without the same experiences. In chapter one I present my personal history with Metal Gear, before moving on to the literature review in chapter two, which focuses on scholarship about the Metal Gear series of games and video game research as a whole, particularly studies concerned with how violent content affects players. In chapters three and four, I analyze Metal Gear Solid 3 (2004) and Metal Gear Solid V (2014/2015) in order to gain insight into the moral dilemmas posed by each game. In chapter five I report the results of a survey about player responses towards the game dilemmas given by martial and non-martial groups to identify observable patterns of behavior in how they act and react towards each scenario. -/- This is a preprint version of the official paper until an update is produced. The current official version is available at the West Chester Digital Commons in the external link section below. (shrink)
Literary Interpretation in Aesthetics
Moral Motivation in Meta-Ethics
Moral Reasoning and Motivation, Misc in Meta-Ethics
Ontology and Transmedial Games. Christopher Bartel - 2018 - In Jon Robson & Grant Tavinor (eds.), The Aesthetics of Videogames. New York, NY, USA: pp. 9-23.
Some theorists claim that games are "transmedial", meaning that the same game can be played in different media. It is unclear, however, what are the limits of transmedial games. Are all games in-principle transmedial, or only some? One suggestion offered by Jesper Juul is that, if games are understood as sets of rules, then a game is transmedial if its rules can be either implemented or adapted into some new media. I argue against this view on the grounds that the rules of many games are dependent on the game's media such that they cannot be adapted to a new medium. As such, the games-as-rules view of transmediality is not restrictive enough. To add the necessary restriction, I suggest that games are transmedial, not only when they contain the same rules, but also when it requires the same set of skills to play each. I further argue that a skill-set view of transmediality is better able to account for many common intuitions about games.
Ontology in Metaphysics
Game Spirituality: How Games Tell Us More Than We Might Think. Chad Carlson - 2018 - Sport, Ethics and Philosophy 12 (1):81-93.
While we often see games as less serious, or at least less transcendental, than religion, there is reason to believe that games can evoke similarly meaningful narratives that allow us to learn a great deal about ourselves and our world. And games do so often using the same symbolic and metaphorical mechanisms that generate meaning in religious experience. In this paper, I explore some of the ways in which game myths—the myths created from and through games—generate meaning in our lives. People experience myths in games very similarly to how they might in religion. I first explain what myth means in contemporary literature and then show how the very makeup of games opens them to a mythical reality. I highlight two ways in particular. I will argue that the inefficiencies within games promote a deep engagement with the world, and this gratuitous nature provides a system for creating myths and actualizing mythical potential.
BOGOST, IAN. Play Anything: The Pleasure of Limits, the Uses of Boredom, and the Secret of Games. New York: Basic Books, 2016, xiv + 267 pp., $26.99 cloth. [REVIEW] Casey Haskins - 2018 - Journal of Aesthetics and Art Criticism 76 (1):123-126.
When Life Becomes a Game: A Moral Lesson From Søren Kierkegaard and Bernard Suits. Daniel M. Johnson - 2018 - Sport, Ethics and Philosophy 13 (3-4):419-431.
Hidden among the many fascinating things that Bernard Suits says in his classic The Grasshopper is a passing observation he makes about one of the works of Søren Kierkegaard, the Seducer's...
Why Gamers Are Not Performers. Andrew Kania - 2018 - Journal of Aesthetics and Art Criticism 76 (2):187-199.
I argue that even if video games are interactive artworks, typical video games are not works for performance, and players of video games do not perform these games in the sense in which a musician performs a musical composition (or actors a play, dancers a ballet, and so on). Even expert playings of video games for an audience fail to qualify as performances of those works. Some exemplary playings may qualify as independent "performance-works," but this tells us nothing about the ontology of video games or playings of them. The argument proceeds by clarifying the concepts of interactivity and work-performance, drawing particularly on recent work by Dominic Lopes, Berys Gaut, and David Davies.
Ludic Constructivism: Or, Individual Life and the Fate of Humankind. Avery Kolers - 2018 - Sport, Ethics and Philosophy 13 (3-4):392-405.
In The Grasshopper, Bernard Suits argues that the best life is the one whose essence is game-play. In fact, only through the concept of game-play can we understand how anything at all is worth doing. Yet this seems implausible: morality makes things worth doing independently of any game, and games are themselves subject to moral evaluation. So games must be logically posterior to morality. The current paper responds to these objections by developing the theory of Ludic Constructivism. Constructivist theories such as Kant's explain normativity in a way that is both objective and cognitivist but also mind-dependent. Roughly, constructivists ground normative structures in rational procedures. But rational agency is diverse: it is realized in different ways and to different degrees by different agents. Yet Kantian Constructivism requires a strong identity of rational procedures across rational agents. Ludic Constructivism avoids this challenge by rejecting this strong identity of agency, instead building a normative framework out of the ingredients of Suits's definition of game-play. We want to play the best games we can. In order to do so we must play games with a certain structure: they must be nested multiplayer games in which everyone who is capable of self-originating activity is engaged as a fellow player rather than a plaything. Nested games – games that are constructed out of other games – go best when each game contributes to the value of each other game in the nest. Such game nests are "reciprocating value-maximization structures". Our lives go best when we design, play, and revise the game of our Individual Life and we also embed that game within the highest-order nested game of Fate of Humankind. In this way, Ludic Constructivism delivers a normative system that expands Kant's Kingdom of Ends, and a life that meets Aristotle's conception of pleasure.
Moral Constructivism in Meta-Ethics
Gateways to Culture: Play, Games, Metaphors, and Institutions. Robert Scott Kretchmar - 2018 - Journal of Cognition and Culture 18 (1-2):47-65.
In this essay I develop a case for games as a primitive form of culture and an early arrival at our ancestors' cultural gates. I analyze the modest intellectual prerequisites for game behavior including the use of metaphor, a reliance on constitutive rules, and an ability to understand the logic of entailment. In arguing for its early arrival during the late Middle and Upper Paleolithic, I develop a case for its powerful adaptive qualities in terms of both natural and sexual selection. I accept ecological dominance coupled with an increasing sense of self as primary sources of selection pressure. I show how these two factors threatened homeostatic balances ranging from low arousal and atrophy to malaise, depression, and anomie. I suggest that an antidote or adaptation was found in culturally-enhanced forms of play — that is, formal, rule-governed games. The upshot of this analysis is a broadened discussion of cultural adaptation from one that often focuses on cooperation, social complexity, and language to other fundamental issues related to survival — namely, increased leisure time, enhanced arousal needs, and the health and physical skills required for a hunter-forager existence.
Flow and Immersion in Video Games: The Aftermath of a Conceptual Challenge. Lazaros Michailidis, Emili Balaguer-Ballester & Xun He - 2018 - Frontiers in Psychology 9.
Games and the Moral Transformation of Violence. C. Thi Nguyen - 2018 - In Jon Robson & Grant Tavinor (eds.), The Aesthetics of Videogames. Routledge. pp. 181-197.
An Epistemic Condition for Playing a Game. Lukas Schwengerer - 2018 - Sport, Ethics and Philosophy 13 (3-4):293-306.
In The Grasshopper, Suits proposes that 'playing a game' can be captured as an attempt to achieve a specific state of affairs (prelusory goal), using only means permitted by rules (lusory means). These rules prohibit more efficient means, and are accepted because they make the activity possible (lusory attitude). I argue these conditions are not jointly sufficient. The starting point for the argument is the idea that goals, means and attitudes can pick out their content in different ways. They can involve a direct reference ('crossing this specific finish line'), or a description that picks out something ('crossing a line on the track after running 100 m'). I provide cases in which one's attitudes, accepted goals or accepted means pick out their content by a description such that the person does not play a game, even if Suits's conditions are satisfied. I show that this demands an epistemic condition for playing a game that also applies to commitment-based accounts. Finally, I discuss what such an epistemic condition could be. I argue that the condition does not require personal knowledge of all goals and means, but merely enough epistemic access that the goal and permissible means can guide one's behavior safely enough. This might be satisfied by social extensions, such as access to tools (e.g. a rulebook) or other people (e.g. referees).
Pre-Game Cheating and Playing the Game. Alex Wolf-Root - 2018 - Sport, Ethics and Philosophy 13 (3-4):334-347.
There are well-known problems for formalist accounts of game-play with regards to cheating. Such accounts seem to be committed to cheaters being unable to win–or even play–the game, yet it seems that there are instances of cheaters winning games. In this paper, I expand the discussion of such problems by introducing cases of pre-game cheating, and see how a formalist–specifically a Suitsian–account can accommodate such problems. Specifically, I look at two (fictional) examples where the alleged game-players cheat prior to a game-instance in such a way as to cast doubt on whether the alleged game-players are truly playing the game. To escape the worries brought about by these examples of pre-game cheating, I will appeal to the concept of nested games. This concept will give us the needed tools to explain how the alleged players are cheating and how the alleged players are players. On the whole, this discussion should help illuminate some important issues with regards to cheating and rules on a Suitsian account of game-play, and help give support for formalist accounts more generally.
Video Games and Ethics. Monique Wonderly - 2018 - In Joseph C. Pitt & Ashley Shew (eds.), Spaces for the Future: A Companion to Philosophy of Technology. New York, USA: Routledge. pp. 29-41.
Historically, video games featuring content perceived as excessively violent have drawn moral criticism from an indignant (and sometimes, morally outraged) public. Defenders of violent video games have insisted that such criticisms are unwarranted, as committing acts of virtual violence against computer-controlled characters – no matter how heinous or cruel those actions would be if performed in real life – harms no actual people. In this paper, I present and critically analyze key aspects of this debate. I argue that while many ethical objections to playing violent video games seem to miss their marks, there is sufficient reason to take modest steps in order to address the concerns that theorists have raised.
Bernard Suits on Capacities: Games, Perfectionism, and Utopia. Christopher C. Yorke - 2018 - Journal of the Philosophy of Sport 45 (2):177-188.
An essential and yet often neglected motivation of Bernard Suits' elevation of gameplay to the ideal of human existence is his account of capacities along perfectionist lines and the function of games in eliciting them. In his work Suits treats the expression of these capacities as implicitly good and the purest expression of the human telos. Although it is a possible interpretation to take Suits' utopian vision to mean that gameplay in his future utopia must consist of the logically inevitable replaying of activities we conduct in the present for instrumental reasons, because gameplay for Suits is identical with the expression of sets of capacities specifically elicited by game rules, it is much more likely that he intends utopian gameplay to be an endless series of carefully crafted opportunities for the elicitation of special capacities, and thus embody his ideal of existence. This article therefore provides a new lens for understanding both...
Ten Things Video Games Can Teach Us, by Jordan Erica Webber and Daniel Griliopoulos. [REVIEW] Joshua D. Crabill - 2017 - Teaching Philosophy 40 (4):486-490.
A Kantian View of Suits' Utopia: 'A Kingdom of Autotelically-Motivated Game Players'. Francisco Javier Lopez Frias - 2017 - Journal of the Philosophy of Sport 44 (1):138-151.
In this paper, I engage the debate on Suits' theory of games by providing a Kantian view of Utopia. I argue that although the Kantian aspects of Suits' approach are often overlooked in comparison to its Socratic-Platonic aspects, Kant's ideas play a fundamental role in Suits' proposal. In particular, Kant's concept of 'regulative idea' is the basis of Suits' Utopia. I regard Utopia as Suits' regulative idea on game playing. In doing so, I take Utopia to play a double role in Suits' theory of games. First, it highlights the primary condition of possibility of game-playing, namely, the lusory attitude. Second, it provides a normative criterion that serves as a critical principle to evaluate instances of game playing and as a counterfactual assumption that makes game playing possible. I provide further support for my Kantian interpretation of Suits' Utopia by bringing to light the anthropological assumptions upon which Utopia is built. In doing so, I argue that both Suits' theory of games, in general, and his Utopia, in particular, lay out the conditions of possibility of game playing, not an analysis of the life most worth living.
On the Relationship Between Philosophy and Game-Playing. Yuanfan Huang & Emily Ryall - 2017 - In Wendy Russell, Emily Ryall & Malcolm MacLean (eds.), The Philosophy of Play as Life. London: Routledge. pp. 80-93.
This chapter focuses on the relation between 'philosophy' and 'games' and argues most of philosophy is a form of game-playing. Two approaches are considered: Wittgenstein's notion of family resemblance and Suits' analytic definition of a game. Both approaches support the assertion that the relationship is a close, if not categorical, one but it is the lusory attitude that is the ultimate determinant.
Games for Civic Renewal. Joshua Miller, Sarah Shugars & Daniel Levine - 2017 - The Good Society 26 (2).
Game Theory and Political Philosophy in Philosophy of Action
Normative and Descriptive Game Theory in Philosophy of Action
Philosophy of Games. C. Thi Nguyen - 2017 - Philosophy Compass 12 (8):e12426.
What is a game? What are we doing when we play a game? What is the value of playing games? Several different philosophical subdisciplines have attempted to answer these questions using very distinctive frameworks. Some have approached games as something like a text, deploying theoretical frameworks from the study of narrative, fiction, and rhetoric to interrogate games for their representational content. Others have approached games as artworks and asked questions about the authorship of games, about the ontology of the work and its performance. Yet others, from the philosophy of sport, have focused on normative issues of fairness, rule application, and competition. The primary purpose of this article is to provide an overview of several different philosophical approaches to games and, hopefully, demonstrate the relevance and value of the different approaches to each other. Early academic attempts to cope with games tried to treat games as a subtype of narrative and to interpret games exactly as one might interpret a static, linear narrative. A faction of game studies, self-described as "ludologists," argued that games were a substantially novel form and could not be treated with traditional tools for narrative analysis. In traditional narrative, an audience is told and interprets the story, whereas in a game, the player enacts and creates the story. Since that early debate, theorists have attempted to offer more nuanced accounts of how games might achieve similar ends to more traditional texts. For example, games might be seen as a novel type of fiction, which uses interactive techniques to achieve immersion in a fictional world. Alternately, games might be seen as a new way to represent causal systems, and so a new way to criticize social and political entities. Work from contemporary analytic philosophy of art has, on the other hand, asked whether games could be artworks and, if so, what kind. Much of this debate has concerned the precise nature of the artwork, and the relationship between the artist and the audience. Some have claimed that the audience is a cocreator of the artwork, and so games are a uniquely unfinished and cooperative art form. Others have claimed that, instead, the audience does not help create the artwork; rather, interacting with the artwork is how an audience member appreciates the artist's finished production. Other streams of work have focused less on the game as a text or work, and more on game play as a kind of activity. One common view is that game play occurs in a "magic circle." Inside the magic circle, players take on new roles, follow different rules, and actions have different meanings. Actions inside the magic circle do not have their usual consequences for the rest of life. Enemies of the magic circle view have claimed that the view ignores the deep integration of game life with ordinary life, and point to gambling, gold farming, and the status effects of sports. Philosophers of sport, on the other hand, have approached games with an entirely different framework. This has led to investigations about the normative nature of games—what guides the applications of rules and how those rules might be applied, interpreted, or even changed. Furthermore, they have investigated games as social practices and as forms of life.
Philosophy of Technology, Misc in Philosophy of Computing and Information
Competition as Cooperation. C. Thi Nguyen - 2017 - Journal of the Philosophy of Sport 44 (1):123-137.
Games have a complex, and seemingly paradoxical structure: they are both competitive and cooperative, and the competitive element is required for the cooperative element to work out. They are mechanisms for transforming competition into cooperation. Several contemporary philosophers of sport have located the primary mechanism of conversion in the mental attitudes of the players. I argue that these views cannot capture the phenomenological complexity of game-play, nor the difficulty and moral complexity of achieving cooperation through game-play. In this paper, I present a different account of the relationship between competition and cooperation. My view is a distributed view of the conversion: success depends on a large number of features. First, the players must achieve the right motivational state: playing for the sake of the struggle, rather than to win. Second, successful transformation depends on a large number of extra-mental features, including good game design, and social and institutional features.
Applied Ethics, Misc in Applied Ethics
The Aesthetics of Rock Climbing. C. Thi Nguyen - 2017 - The Philosophers' Magazine 78:37-43.
Video Games and Imaginative Identification. Stephanie Patridge - 2017 - Journal of Aesthetics and Art Criticism 75 (2):181-184.
Still Self-Involved: A Reply to Patridge. Jon Robson & Aaron Meskin - 2017 - Journal of Aesthetics and Art Criticism 75 (2):184-187.
Video Games and Virtual Reality. Robert Seddon - 2017 - In Anthony F. Beavers (ed.), Macmillan Interdisciplinary Handbooks: Philosophy: Technology. Macmillan Reference USA. pp. 191-216.
Virtual Reality in Philosophy of Computing and Information
The Problem of Evil in Virtual Worlds. Brendan Shea - 2017 - In Mark Silcox (ed.), Experience Machines: The Philosophy of Virtual Worlds. Lanham, MD: Rowman & Littlefield. pp. 137-155.
In its original form, Nozick's experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one's death. This intuitive objection finds some support in recent research regarding the psychological effects of phenomena such as video games or social media use. After a brief discussion of these problems, I will consider a variation of the experience machine in which many of these deficits are remedied. In particular, I'll explore the consequences of creating a virtual world populated with strongly intelligent AIs with whom users could interact, and that could be engineered to survive the user's death. The presence of these agents would allow for the cultivation of morally significant relationships, and the world's long-term persistence would help ground possibilities for a meaningful, purposeful life in a way that Nozick's original experience machine could not. While the creation of such a world is obviously beyond the scope of current technology, it represents a natural extension of the existing virtual worlds provided by current video games, and it provides a plausible "ideal case" toward which future virtual worlds will move. While this improved experience machine would seem to represent progress over Nozick's original, I will argue that it raises a number of new problems stemming from the fact that the world was created to provide a maximally satisfying and meaningful life for the intended user. This, in turn, raises problems analogous in some ways to the problem(s) of evil faced by theists. In particular, I will suggest that it is precisely those features that would make a world most attractive to potential users—the fact that the AIs are genuinely moral agents whose well-being the user can significantly impact—that render its creation morally problematic, since they require that the AIs inhabiting the world be subject to unnecessary suffering. I will survey the main lines of response to the traditional problem of evil, and will argue that they are irrelevant to this modified case. I will close by considering what constraints on the future creation of virtual worlds, if any, might serve to allay the concerns identified in the previous discussion. I will argue that, insofar as the creation of such worlds would allow us to meet morally valuable purposes that could not be easily met otherwise, we would be unwise to prohibit it altogether. However, if our processes of creation are to be justified, they must take account of the interests of the moral agents that would come to exist as the result of our world creation.
Homo Ludens Revisited. Mark Silcox - 2017 - Southwest Philosophy Review 33 (1):1-14.
Fourier transform with periodicity at the harmonic frequency
Let's suppose I have a signal $F(t)$ that is periodic, with two periodicities $P_1$ and $P_2$, with $P_1 > P_2$. Suppose that I know the values of the two periodicities.
Using the Fast Fourier transform I can show the two values as peaks in a power spectrum. Now, let's suppose the second periodicity $P_2$ (the faster one) has exactly the same period as the first harmonic of the fundamental, i.e. $P_1 = 2\times P_2$. This means that I will not be able to distinguish it by using the power spectrum, at least not by looking at the frequency of the peak.
My question is: is there a way to separate the contributions in such a case? For example, is it possible to predict the power of the first harmonic, so that the difference between the predicted power and the observed power of the harmonic peak is significant enough (i.e., greater than $3\sigma$) to say that the first harmonic also "contains" the contribution from an independent periodicity?
fourier-transform signal-processing
Py-ser
$\begingroup$ Would Signal Processing be a better home for this question? $\endgroup$ – Qmechanic♦ Mar 20 '14 at 8:21
$\begingroup$ Simply adding the two equal frequency signals gives a single signal with a different amplitude and phase. The issue at hand plays a role in measuring harmonic distortion in audio amplifiers. Signal processing or Electronics would indeed be a better home. $\endgroup$ – Urgje Mar 20 '14 at 8:52
is there a way to separate the contributions in such a case?
If the second period is exactly half the first (so its frequency coincides exactly with the first harmonic), no. If there is a tiny variation, you might be able to pick up fluctuations as the phase of the second signal beats against the harmonics (but not the fundamental) of the other, but that strikes me as very difficult.
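To make the point concrete, here is a minimal numpy sketch (my own illustration, not from the original answer): two physically different decompositions at the harmonic frequency produce indistinguishable spectra, because each FFT bin stores only the complex sum of whatever contributes at that frequency.

```python
import numpy as np

fs, n = 1000.0, 4096                 # sample rate (Hz), number of samples
t = np.arange(n) / fs
f1 = 10.0                            # fundamental, period P1
f2 = 2 * f1                          # second component, period P2 = P1/2

# Case A: a single oscillator with an intrinsic first harmonic.
a = np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)

# Case B: a weaker intrinsic harmonic plus an independent periodicity at
# f2 whose amplitude and phase happen to make up the difference.
b = (np.sin(2 * np.pi * f1 * t)
     + 0.1 * np.sin(2 * np.pi * f2 * t)
     + 0.2 * np.sin(2 * np.pi * f2 * t))

# The spectra agree to floating-point precision: the bin at f2 cannot
# reveal how its complex amplitude was assembled.
print(np.allclose(np.abs(np.fft.rfft(a)), np.abs(np.fft.rfft(b))))  # True
```

Only prior knowledge of the expected harmonic amplitude, or a detectable frequency offset, breaks this degeneracy, which is exactly the caveat discussed next.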
For example, is it possible to predict the power of the first harmonic, so that the difference between the predicted power and the observed power of the harmonic peak gives a result significant enough (i.e., greater than 3σ) to say that the first harmonic also "contains" the contribution from a periodicity?
Not without knowing more than you've said. Even if we limit our set of functions to "ordinary"1 musical instruments, the amplitude of harmonics generally decays as the harmonic number gets higher, but that's just a generalization. Many instruments have strong first and/or second harmonics. Even an amateur singer could be trained to produce tones with large amounts of first and second harmonics, and less fundamental. I don't know about things like motors, but I have had personal experience where harmonics induced into electronics from 60 Hz power were much stronger than the fundamental. If you are dealing with a limited input set, you could do some experiments and find out for yourself. If not, then the answer is no, not to my knowledge.
As a side-note, it sometimes happens that a note's fundamental frequency is missing and no one notices. This is usually due to signal processing (as in telephones) and is rare in real instruments, but it does happen. See the Missing Fundamental for more information.
1 I am not defining this term, but let's just say I mean pitched instruments with "normal" harmonics. Not, for example, percussion.
Bjorn Roche
confusion in discrete transform to solve kronig penney matrix equation in fourier space
Resolution in a Fourier transform spectroscopy setup
Continuous Fourier transform vs. Discrete Fourier transform
Why does the resonant frequency disappear for a ball in a potential well being jiggled by multiple frequencies?
Fourier transform and momentum space
Communications on Pure and Applied Analysis
2020, Volume 19, Issue 3: 1747-1793. DOI: 10.3934/cpaa.2020072
Bending-torsion moments in thin multi-structures in the context of nonlinear elasticity
Rita Ferreira 1 and Elvira Zappale 2
1. King Abdullah University of Science and Technology, 4700 KAUST, CEMSE Division, Thuwal 23955-6900, Saudi Arabia
2. Università degli Studi di Salerno, Dipartimento di Ingegneria Industriale, Via Giovanni Paolo II, 132 Fisciano (SA), Italy
Abstract
Here, we address a dimension-reduction problem in the context of nonlinear elasticity where the applied external surface forces induce bending-torsion moments. The underlying body is a multi-structure in $\mathbb{R}^3$ consisting of a thin tube-shaped domain placed upon a thin plate-shaped domain. The problem involves two small parameters, the radius of the cross-section of the tube-shaped domain and the thickness of the plate-shaped domain. We characterize the different limit models, including the limit junction condition, in the membrane-string regime according to the ratio between these two parameters as they converge to zero.
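As a rough illustration of how such regimes are usually organized (the notation here is mine, chosen for exposition; the paper's precise scalings may differ), one can denote by $r_\varepsilon$ the radius of the tube's cross-section and by $h_\varepsilon$ the plate's thickness, and distinguish the limit models by the value of

```latex
% Illustrative regime parameter for the tube-on-plate multi-structure
% (notation chosen for exposition; not quoted from the paper).
\[
  \ell \;:=\; \lim_{\varepsilon \to 0} \frac{r_\varepsilon}{h_\varepsilon}
  \;\in\; \{0\} \,\cup\, (0,\infty) \,\cup\, \{\infty\},
\]
% with a different membrane-string limit model, and a different
% junction condition, arising in each of the three cases.
```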
Keywords: Multi-structures, dimension reduction, nonlinear elasticity, bending-torsion moments, Γ-convergence.
Mathematics Subject Classification: Primary: 49J45, 74B20, 74K30.
Figure 1. Ω_ε: the reference configuration.
A Review of Developments and Applications in Item Analysis
Tim Moses
This chapter summarizes ETS contributions concerning the development and application of item analysis procedures. The focus is on dichotomously scored items, which allows for a simplified presentation that is consistent with the focus of the original developments and which has straightforward application to polytomously scored items.
This chapter summarizes contributions ETS researchers have made concerning the applications of, refinements to, and developments in item analysis procedures. The focus is on dichotomously scored items, which allows for a simplified presentation that is consistent with the focus of the developments and which has straightforward applications to polytomously scored items. Item analysis procedures refer to a set of statistical measures used by testing experts to review and revise items, to estimate the characteristics of potential test forms, and to make judgments about the quality of items and assembled test forms. These procedures and statistical measures have been alternatively characterized as conventional item analysis (Lord 1961, 1965a, b), traditional item analysis (Wainer 1989), analyses associated with classical test theory (Embretson and Reise 2000; Hambleton 1989; Tucker 1987; Yen and Fitzpatrick 2006), and simply item analysis (Gulliksen 1950; Livingston and Dorans 2004). This chapter summarizes key concepts of item analysis described in the sources cited. The first section describes item difficulty and discrimination indices. Subsequent sections review discussions about the relationships of item scores and test scores, visual displays of item analysis, and the additional roles item analysis methods have played in various psychometric contexts. The key concepts described in each section are summarized in Table 2.1.
Table 2.1 Summary of key item analysis concepts
1 Item Analysis Indices
In their discussions of item analysis, ETS researchers Lord and Novick (1968, p. 327) and, two decades later, Wainer (1989, p. 2) regarded items as the building blocks of a test form being assembled. The assembly of a high-quality test form depends on assuring that the individual building blocks are sound. Numerical indices can be used to summarize, evaluate, and compare a set of items, usually with respect to their difficulties and discriminations. Item difficulty and discrimination indices can also be used to check for potential flaws that may warrant item revision prior to item use in test form assembly. The most well-known and utilized difficulty and discrimination indices of item analysis were developed in the early twentieth century (W. W. Cook 1932; Guilford 1936; Horst 1933; Lentz et al. 1932; Long and Sandiford 1935; Pearson 1909; Symonds 1929; Thurstone 1925). Accounts by ETS scientists Tucker (1987, p. ii) and Livingston and Dorans (2004) have described how historical item analysis indices have been applied and adapted at ETS from the mid-1940s to the present day.
1.1 Item Difficulty Indices
In their descriptions of item analyses, Gulliksen (1950) and Tucker (1987) listed two historical indices of item difficulty that have been the focus of several applications and adaptations at ETS. These item difficulty indices are defined using the following notation:
i is a subscript indexing the i = 1 to I items on Test Y,
j is a subscript indexing the j = 1 to N examinees taking Test Y,
\( x_{ij} \) indicates a score of 0 or 1 on dichotomously scored Item i from examinee j (all N examinees have scores on all I items).
The most well-known item difficulty index is the average item score, or, for dichotomously scored items, the proportion of correct responses, the "p-value" or "P+" (Gulliksen 1950; Hambleton 1989; Livingston and Dorans 2004; Lord and Novick 1968; Symonds 1929; Thurstone 1925; Tucker 1987; Wainer 1989):
$$ {\overline{x}}_i=\frac{1}{N}\sum_{j=1}^N{x}_{ij}. $$
Estimates of the quantity defined in Eq. 2.1 can be obtained with several alternative formulas. A more complex formula that is the basis of developments described in Sect. 2.2.1 can be obtained based on additional notation, where
k is a subscript indexing the k = 0 to I possible scores of Test Y (\( y_k \)),
\( {\widehat{p}}_k \) is the observed proportion of examinees obtaining test score \( y_k \),
\( {\overline{x}}_{ik} \) is the average score on Item i for examinees obtaining test score \( y_k \).
With the preceding notation, the average item score as defined in Eq. 2.1 can be obtained as
$$ {\overline{x}}_i=\sum_k{\hat{p}}_k{\overline{x}}_{ik}. $$
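To make Eqs. 2.1 and 2.2 concrete, the following minimal sketch (in Python with NumPy; the simulated responses are illustrative and not from any ETS test) computes the average item score directly and then recovers it from the conditional averages:

```python
import numpy as np

# Simulated 0/1 item responses: rows are N examinees, columns are I items.
rng = np.random.default_rng(0)
x = (rng.random((500, 10)) < 0.7).astype(int)
N, I = x.shape

# Eq. 2.1: the average item score ("p-value") for each item.
p_direct = x.mean(axis=0)

# Eq. 2.2: the same quantity from conditional averages at each total score y_k.
y = x.sum(axis=1)
p_via_conditionals = np.zeros(I)
for k in range(I + 1):
    at_k = (y == k)
    if at_k.any():
        p_hat_k = at_k.mean()             # observed proportion at score y_k
        xbar_ik = x[at_k].mean(axis=0)    # average item scores at score y_k
        p_via_conditionals += p_hat_k * xbar_ik

assert np.allclose(p_direct, p_via_conditionals)
```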
Alternative item difficulty indices that use a transformation based on the inverse of the cumulative distribution function (CDF) of the normal distribution for the \( {\overline{x}}_i \) in Eq. 2.1 have been proposed by ETS scientists (Gulliksen 1950; Horst 1933) and others (Symonds 1929; Thurstone 1925). The transformation used extensively at ETS is the delta index developed by Brolyer (Brigham 1932; Gulliksen 1950):
$$ {\widehat{\varDelta}}_i=13-4{\varPhi}^{-1}\left({\overline{x}}_i\right), $$
where \( {\varPhi}^{-1}(p) \) represents the inverse of the standard normal cumulative distribution corresponding to the pth percentile. ETS scientists Gulliksen (1950, p. 368), Fan (1952, p. 1), Holland and Thayer (1985, p. 1), and Wainer (1989, p. 7) have described deltas as having features that differ from those of average item scores:
The delta provides an increasing expression of an item's difficulty (i.e., is negatively associated with the average item score).
The increments of the delta index are less compressed for very easy or very difficult items.
The sets of deltas obtained for a test's items from two different examinee groups are more likely to be linearly related than the corresponding sets of average item scores.
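A minimal sketch of the delta transformation in Eq. 2.2 (Python with SciPy; the sample p-values are hypothetical) illustrates the features listed above:

```python
import numpy as np
from scipy.stats import norm

def delta(p_value):
    """Brolyer's delta index of Eq. 2.2: 13 - 4 * Phi^{-1}(p-value)."""
    return 13.0 - 4.0 * norm.ppf(p_value)

# Deltas increase with difficulty and stretch out the very easy/hard extremes:
print(delta(np.array([0.99, 0.90, 0.50, 0.10, 0.01])))
# -> approximately [ 3.69  7.87 13.   18.13 22.31]
```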
Variations of the item difficulty indices in Eqs. 2.1 and 2.2 have been adapted and used in item analyses at ETS to address examinee group influences on item difficulty indices. These variations have been described both as actual item difficulty parameters (Gulliksen 1950, pp. 368–371) and as adjustments to existing item difficulty estimates (Tucker 1987, p. iii). One adjustment is the use of a linear function to transform the mean and standard deviation of a set of \( {\widehat{\varDelta}}_i \) values from one examinee group to this set's mean and standard deviation from the examinee group of interest (Gulliksen 1950; Thurstone 1925, 1947; Tucker 1987):
$$ {\widehat{e}}_2\left({\widehat{\varDelta}}_{i,1}\right)={\overline{\varDelta}}_{.,2}+\frac{{\widehat{\sigma}}_{.,2}\left(\varDelta \right)}{{\widehat{\sigma}}_{.,1}\left(\varDelta \right)}\left({\widehat{\varDelta}}_{i,1}-{\overline{\varDelta}}_{.,1}\right). $$
Equation 2.3 shows that the transformation of Group 1's item deltas to the scale of Group 2's deltas, \( {\widehat{e}}_2\left({\varDelta}_{i,1}\right) \), is obtained from the averages, \( {\overline{\varDelta}}_{.,1} \) and \( {\overline{\varDelta}}_{.,2} \), and standard deviations, \( {\widehat{\sigma}}_{.,1}\left(\varDelta \right) \) and \( {\widehat{\sigma}}_{.,2}\left(\varDelta \right) \), of the groups' deltas. The "mean sigma" adjustment in Eq. 2.3 has been applied exclusively to deltas (i.e., "delta equating"; Gulliksen 1950; Tucker 1987, p. ii) because deltas obtained from two examinee groups on the same set of items are more likely to be linearly related than the corresponding sets of average item scores. Another adjustment uses Eq. 2.1 to estimate the average item scores for an examinee group that did not respond to those items but has available scores and \( {\widehat{p}}_k \) estimates on a total test (e.g., Group 2). Using Group 2's \( {\widehat{p}}_k \) estimates and the conditional average item scores from Group 1, which actually did respond to the items and also has scores on the same test as Group 2 (Livingston and Dorans 2004; Tucker 1987), the estimated average item score for Item i in Group 2 is
$$ {\overline{x}}_{i,2}=\sum_k{\hat{p}}_{k,2}{\overline{x}}_{ik,1}. $$
The Group 2 adjusted or reference average item scores produced with Eq. 2.4 can be subsequently used with Eq. 2.2 to obtain delta estimates for Group 2.
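Both adjustments translate directly into code. The sketch below (Python with NumPy; the group labels and input arrays are hypothetical) implements the delta equating of Eq. 2.3 and the reference-group average item score of Eq. 2.4:

```python
import numpy as np

def mean_sigma(deltas_1, deltas_2):
    """Eq. 2.3: transform Group 1's deltas to the scale of Group 2's deltas
    using the two groups' delta means and standard deviations."""
    m1, s1 = deltas_1.mean(), deltas_1.std(ddof=0)
    m2, s2 = deltas_2.mean(), deltas_2.std(ddof=0)
    return m2 + (s2 / s1) * (deltas_1 - m1)

def reference_p(p_k_group2, xbar_ik_group1):
    """Eq. 2.4: Group 2's estimated average item score, combining Group 2's
    total-score distribution with Group 1's conditional average item scores."""
    return np.sum(p_k_group2 * xbar_ik_group1)
```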
Other measures have been considered as item difficulty indices in item analyses at ETS but have not been used as extensively as those in Eqs. 2.1, 2.2, 2.3, and 2.4. The motivation for considering the additional measures was to expand the focus of Eqs. 2.1, 2.2, and 2.3 beyond item difficulty to address the measurement heterogeneity that would presumably be reflected in relatively low correlations with other items, test scores, or assumed underlying traits (Gulliksen 1950, p. 369; Tucker 1948, 1987, p. iii). Different ways to incorporate items' biserial correlations (described in Sect. 2.1.2) have been considered, including the estimation of item–test regressions to identify the test score that predicts an average item score of 0.50 in an item (Gulliksen 1950). Other proposals to address items' measurement heterogeneity were attempts to incorporate heterogeneity indices into difficulty indices, such as by conducting the delta equating of Eq. 2.3 after dividing the items' deltas by the items' biserial correlations (Tucker 1948) and creating alternative item difficulty indices from the parameter estimates of three-parameter item characteristic curves (Tucker 1981). These additional measures did not replace delta equating in historical ETS practice, partly because of the computational and numerical difficulties in estimating biserial correlations (described later and in Tucker 1987, p. iii), accuracy loss due to computational difficulties in estimating item characteristic curves (Tucker 1981), and interpretability challenges (Tucker 1987, p. vi). Variations of the delta statistic in Eq. 2.2 have been proposed based on logistic cumulative functions rather than normal ogives (Holland and Thayer 1985). The potential benefits of logistic cumulative functions include a well-defined standard error estimate, odds ratio interpretations, and smoother and less biased estimation. These benefits have not been considered substantial enough to warrant a change to wide use of logistic cumulative functions, because the difference between the values of the logistic cumulative function and the normal ogive cumulative function is small (Haley, cited in Birnbaum 1968, p. 399). In other ETS research by Olson, Scheuneman, and Grima (1989), proposals were made to study items' difficulties after exploratory and confirmatory approaches are used to categorize items into sets based on their content, context, and/or task demands.
1.2 Item Discrimination Indices
Indices of item discrimination summarize an item's relationship with a trait of interest. In item analysis, the total test score is almost always used as an approximation of the trait of interest. On the basis of the goals of item analysis to evaluate items, items that function well might be distinguished from those with flaws based on whether the item has a positive versus a low or negative association with the total score. One historical index of the item–test relationship applied in item analyses at ETS is the product moment correlation (Pearson 1895; see also Holland 2008; Traub 1997):
$$ \widehat{r}\left({x}_i,y\right)=\frac{\widehat{\sigma}\left({x}_i,y\right)}{\widehat{\sigma}\left({x}_i\right)\widehat{\sigma}(y)}, $$
where \( \widehat{\sigma}\left({x}_i,y\right) \), \( \widehat{\sigma}\left({x}_i\right) \), and \( \widehat{\sigma}(y) \) denote the estimated covariance and standard deviations of the item scores and test scores. For the dichotomously scored items of interest in this chapter, Eq. 2.5 is referred to as a point biserial correlation, which may be computed as
$$ {\hat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,y\right)=\frac{\frac{1}{N}{\sum}_k{N}_k{\overline{x}}_{ik}{y}_k-{\overline{x}}_i\overline{y}}{\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}\hat{\sigma}(y)}, $$
where N and \( N_k \) denote the sample sizes for the total examinee group and for the subgroup of examinees obtaining total score \( y_k \), and \( {\overline{x}}_i \) and \( \overline{y} \) are the means of Item i and the test for the total examinee group. As described in Sect. 2.2.1, the point biserial correlation is a useful item discrimination index due to its direct relationship with respect to test score characteristics.
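Equation 2.6 can be sketched as a direct translation (Python with NumPy; population moments with ddof=0, matching the formulas above):

```python
import numpy as np

def point_biserial(x_i, y):
    """Eq. 2.6: point biserial correlation of 0/1 item scores x_i with
    total scores y."""
    p = x_i.mean()
    cov = np.mean(x_i * y) - p * y.mean()  # (1/N) sum_k N_k xbar_ik y_k - xbar_i * ybar
    return cov / (np.sqrt(p * (1.0 - p)) * y.std(ddof=0))
```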
In item analysis applications, ETS researcher Swineford (1936) described how the point biserial correlation can be a "considerably lowered" (p. 472) measure of item discrimination when the item has an extremely high or low difficulty value. The biserial correlation (Pearson 1909) addresses the lowered point biserial correlation based on the assumptions that (a) the observed scores of Item i reflect an artificial dichotomization of a continuous and normally distributed trait (z), (b) y is normally distributed, and (c) the regression of y on z is linear. The biserial correlation can be estimated in terms of the point biserial correlation and is itself an estimate of the product moment correlation of z and y:
$$ {\widehat{r}}_{\mathrm{biserial}}\left({x}_i,y\right)={\widehat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,y\right)\frac{\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}}{\varphi \left({\widehat{q}}_i\right)}\approx {\widehat{r}}_{zy}, $$
where \( \varphi \left({\widehat{q}}_i\right) \) is the density of the standard normal distribution at \( {\widehat{q}}_i \) and where \( {\widehat{q}}_i \) is the assumed and estimated point that dichotomizes z into \( x_i \) (Lord and Novick 1968). Arguments have been made for favoring the biserial correlation estimate over the point biserial correlation as a discrimination index because the biserial correlation is not restricted in range due to Item i's dichotomization and because the biserial correlation is considered to be more invariant with respect to examinee group differences (Lord and Novick 1968, p. 343; Swineford 1936).
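A sketch of Eq. 2.7 (Python with SciPy) rescales the point biserial by the normal density at the estimated dichotomization point; the normality assumptions are those stated above:

```python
import numpy as np
from scipy.stats import norm

def biserial(x_i, y):
    """Eq. 2.7: biserial correlation, assuming the 0/1 item dichotomizes a
    normal latent trait z at a point q_i such that P(z > q_i) = xbar_i."""
    p = x_i.mean()
    q_i = norm.ppf(1.0 - p)    # estimated dichotomization point
    r_pb = (np.mean(x_i * y) - p * y.mean()) / (np.sqrt(p * (1.0 - p)) * y.std(ddof=0))
    return r_pb * np.sqrt(p * (1.0 - p)) / norm.pdf(q_i)
```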
Despite its apparent advantages over the point biserial correlation (described earlier), ETS researchers and others have noted several drawbacks to the biserial correlation. Some of the potential drawbacks pertain to the computational complexities that the \( \varphi \left({\widehat{q}}_i\right) \) in Eq. 2.7 presented for item analyses conducted prior to modern computers (DuBois 1942; Tucker 1987). Theoretical and applied results revealed the additional problem that estimated biserial correlations could exceed 1 (and be lower than −1, for that matter) when the total test scores are not normally distributed (i.e., highly skewed or bimodal) and could also have high standard errors when the population value is very high (Lord and Novick 1968; Tate 1955a, b; Tucker 1987).
Various attempts have been made to address the difficulties of computing the biserial correlation. Prior to modern computers, these attempts usually involved different uses of punch card equipment (DuBois 1942; Tucker 1987). ETS researcher Turnbull (1946) proposed the use of percentile categorizations of the total test scores and least squares regression estimates of the item scores on the categorized total test scores to approximate Eq. 2.7 and also avoid its computational challenges. In other ETS work, lookup tables were constructed using the average item scores of the examinee groups falling below the 27th percentile or above the 73rd percentile on the total test and invoking bivariate normality assumptions (Fan 1952). Attempts to normalize the total test scores resulted in partially improved biserial correlation estimates but did not resolve additional estimation problems due to the discreteness of the test scores (Tucker 1987, pp. ii–iii, v). With the use of modern computers, Lord (1961) used simulations to evaluate estimation alternatives to Eq. 2.7, such as those proposed by Brogden (1949) and Clemens (1958). Other correlations based on maximum likelihood, ad hoc, and two-step (i.e., combined maximum likelihood and ad hoc) estimation methods have also been proposed and shown to have accuracies similar to each other in simulation studies (Olsson, Drasgow, and Dorans 1982).
The biserial correlation estimate eventually developed and utilized at ETS is from Lewis, Thayer, and Livingston (n.d.; see also Livingston and Dorans 2004). Unlike the biserial estimate in Eq. 2.7, the Lewis et al. method can be used with dichotomously or polytomously scored items, produces estimates that cannot exceed 1, and does not rely on bivariate normality assumptions. This correlation has been referred to as an r-polyreg correlation, an r-polyserial estimated by regression correlation (Livingston and Dorans 2004, p. 14), and an r-biserial correlation for dichotomously scored items. The correlation is based on the assumption that the item scores are determined by the examinee's position on an underlying latent continuous variable z. The distribution of z for candidates with a given criterion score y is assumed to be normal with mean \( \beta_i y \) and variance 1, implying the following probit regression model:
$$ P\left({x}_i\le 1\mid y\right)=P\left(z\le {\alpha}_i\mid y\right)=\varPhi \left({\alpha}_i-{\beta}_iy\right), $$
where \( \alpha_i \) is the value of z corresponding to \( x_i = 1 \), \( \varPhi \) is the standard normal cumulative distribution function, and \( \alpha_i \) and \( \beta_i \) are intercept and slope parameters. Using the maximum likelihood estimate of \( \beta_i \), the r-polyreg correlation can be computed as
$$ {\widehat{r}}_{\mathrm{polyreg}}\left({x}_i,y\right)=\frac{\sqrt{{\widehat{\beta}}_i^2{\widehat{\sigma}}_y^2}}{\sqrt{{\widehat{\beta}}_i^2{\widehat{\sigma}}_y^2+1}}, $$
where \( {\widehat{\sigma}}_y \) is the standard deviation of scores on criterion variable y and is estimated in the same group of examinees for which the polyserial correlation is to be estimated. In Olsson et al.'s (1982) terminology, the \( {\widehat{r}}_{\mathrm{polyreg}}\left({x}_i,y\right) \) correlation might be described as a two-step estimator that uses a maximum likelihood estimate of β i and the traditional estimate of the standard deviation of y.
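Because Eq. 2.8 is an ordinary probit regression, the two-step estimator of Eq. 2.9 can be sketched with a standard fitting routine. The sketch below uses Python's statsmodels package, an assumed dependency; the Lewis et al. implementation details may differ:

```python
import numpy as np
import statsmodels.api as sm

def r_polyreg(x_i, y):
    """Eqs. 2.8-2.9: probit-regress the 0/1 item scores on the criterion y,
    then convert the maximum likelihood slope into the r-polyreg correlation."""
    fit = sm.Probit(x_i, sm.add_constant(y)).fit(disp=0)
    beta_hat = fit.params[1]                    # ML slope estimate
    b2s2 = beta_hat ** 2 * np.var(y, ddof=0)    # beta_i^2 * sigma_y^2
    return np.sqrt(b2s2) / np.sqrt(b2s2 + 1.0)
```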
Other measures of item discrimination have been considered at ETS but have been less often used than those in Eqs. 2.5, 2.6, 2.7 and 2.9. In addition to describing relationships between total test scores and items' correct/incorrect responses, ETS researcher Myers (1959) proposed the use of biserial correlations to describe relationships between total test scores and distracter responses and between total test scores and not-reached responses. Product moment correlations are also sometimes used to describe and evaluate an item's relationships with other items (i.e., phi correlations; Lord and Novick 1968). Alternatives to phi correlations have been developed to address the effects of both items' dichotomizations (i.e., tetrachoric correlations; Lord and Novick 1968; Pearson 1909). Tetrachoric correlations have been used less extensively than phi correlations for item analysis at ETS, possibly due to their assumption of bivariate normality and their lack of invariance advantages (Lord and Novick 1968, pp. 347–349). Like phi correlations, tetrachoric correlations may also be infrequently used as item analysis measures because they describe the relationship of only two test items rather than an item and the total test.
2 Item and Test Score Relationships
Discussions of the relationships of item and test score characteristics typically arise in response to a perceived need to expand the focus of item indices. For example, in Sect. 2.1.2, item difficulty indices have been noted as failing to account for items' measurement heterogeneity (see also Gulliksen 1950, p. 369). Early summaries and lists of item indices (W. W. Cook 1932; Guilford 1936; Lentz et al. 1932; Long and Sandiford 1935; Pearson 1909; Richardson 1936; Symonds 1929), and many of the refinements and developments of these item indices from ETS, can be described with little coverage of their implications for test score characteristics. Even when test score implications have been covered in historical discussions, this coverage has usually been limited to experiments about how item difficulties relate to one or two characteristics of test scores (Lentz et al. 1932; Richardson 1936) or to "arbitrary indices" (Gulliksen 1950, p. 363) and "arbitrarily defined" laws and propositions (Symonds 1929, p. 482). In reviewing the sources cited earlier, Gulliksen (1950) commented that "the striking characteristic of nearly all the methods described is that no theory is presented showing the relationship between the validity or reliability of the total test and the method of item analysis suggested" (p. 363).
Some ETS contributions to item analysis are based on describing the relationships of item characteristics to test score characteristics. The focus on relationships of items and test score characteristics was a stated priority of Gulliksen's (1950) review of item analysis: "In developing and investigating procedures of item analysis, it would seem appropriate, first, to establish the relationship between certain item parameters and the parameters of the total test" (p. 364). Lord and Novick (1968) described similar priorities in their discussion of item analysis and indices: "In mental test theory, the basic requirement of an item parameter is that it have a definite (preferably a clear and simple) relationship to some interesting total-test-score parameter" (p. 328). The focus of this section's discussion is summarizing how the relationships of item indices and test form characteristics were described and studied by ETS researchers such as Green Jr. (1951), Gulliksen (1950), Livingston and Dorans (2004), Lord and Novick (1968), Sorum (1958), Swineford (1959), Tucker (1987), Turnbull (1946), and Wainer (1989).
2.1 Relating Item Indices to Test Score Characteristics
A test with scores computed as the sum of I dichotomously scored items has four characteristics that directly relate to average item scores and point biserial correlations of the items (Gulliksen 1950; Lord and Novick 1968). These characteristics include Test Y's mean (Gulliksen 1950, p. 367, Eq. 5; Lord and Novick 1968, p. 328, Eq. 15.2.3),
$$ \overline{y}=\sum_i{\overline{x}}_i, $$
Test Y's variance (Gulliksen 1950, p. 377, Eq. 19; Lord and Novick 1968, p. 330, Eqs. 15.3.5 and 15.3.6),
$$ {\widehat{\sigma}}^2(y)=\sum_i{\widehat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,y\right)\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}\widehat{\sigma}(y)=\sum_i\widehat{\sigma}\left({x}_i,y\right), $$
Test Y's alpha or KR-20 reliability (Cronbach 1951; Gulliksen 1950, pp. 378–379, Eq. 21; Kuder and Richardson 1937; Lord and Novick 1968, p. 331, Eq. 15.3.8),
$$ \widehat{\mathrm{rel}}(y)=\left(\frac{I}{I-1}\right)\left\{1-\frac{\sum_i{\overline{x}}_i\left(1-{\overline{x}}_i\right)}{{\left[\sum_i{\widehat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,y\right)\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}\right]}^2}\right\}, $$
and Test Y's validity as indicated by Y's correlation with an external criterion, W (Gulliksen 1950, pp. 381–382, Eq. 24; Lord and Novick 1968, p. 332, Eq. 15.4.2),
$$ {\widehat{r}}_{wy}=\frac{\sum_i{\widehat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,w\right)\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}}{\sum_i{\widehat{r}}_{\mathrm{point}\ \mathrm{biserial}}\left({x}_i,y\right)\sqrt{{\overline{x}}_i\left(1-{\overline{x}}_i\right)}}. $$
Equations 2.10–2.13 have several implications for the characteristics of an assembled test. The mean of an assembled test can be increased or reduced by including easier or more difficult items (Eq. 2.10). The variance and reliability of an assembled test can be increased or reduced by including items with higher or lower item–test correlations (Eqs. 2.11 and 2.12, assuming fixed item variances). The validity of an assembled test can be increased or reduced by including items with lower or higher item–test correlations (Eq. 2.13).
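The identities in Eqs. 2.10–2.12 can be checked numerically, as in the following sketch (Python with NumPy; the simulated item responses are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
x = (rng.random((1000, 20)) < rng.uniform(0.3, 0.9, size=20)).astype(int)
y = x.sum(axis=1)
N, I = x.shape

p = x.mean(axis=0)                   # Eq. 2.1 p-values
item_sd = np.sqrt(p * (1.0 - p))     # 0/1 item standard deviations
r_pb = np.array([(np.mean(x[:, i] * y) - p[i] * y.mean())
                 / (item_sd[i] * y.std(ddof=0)) for i in range(I)])

assert np.isclose(y.mean(), p.sum())                                      # Eq. 2.10
assert np.isclose(y.var(ddof=0), np.sum(r_pb * item_sd * y.std(ddof=0)))  # Eq. 2.11
kr20 = (I / (I - 1)) * (1.0 - np.sum(p * (1.0 - p))
                        / np.sum(r_pb * item_sd) ** 2)                    # Eq. 2.12
```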
The test form assembly implications of Eqs. 2.10, 2.11, 2.12 and 2.13 have been the focus of additional research at ETS. Empirical evaluations of the predictions of test score variance and reliability from items' variances and correlations with test scores suggest that items' correlations with test scores have stronger influences than items' variances on test score variance and reliability (Swineford 1959). Variations of Eq. 2.12 have been proposed that use an approximated linear relationship to predict test reliability from items' biserial correlations with test scores (Fan, cited in Swineford 1959). The roles of item difficulty and discrimination have been described in further detail for differentiating examinees of average ability (Lord 1950) and for classifying examinees of different abilities (Sorum 1958). Finally, the correlation of a test and an external criterion shown in Eq. 2.13 has been used to develop methods of item selection and test form assembly based on maximizing test validity (Green 1951; Gulliksen 1950; Horst 1936).
2.2 Conditional Average Item Scores
In item analyses, the most detailed descriptions of relationships of items and test scores take the form of \( {\overline{x}}_{ik} \), the average item score conditional on the kth score of total test Y (i.e., the discussion immediately following Eq. 2.1). ETS researchers have described these conditional average item scores as response curves (Livingston and Dorans 2004, p. 1), functions (Wainer 1989, pp. 19–20), item–test regressions (Lord 1965b, p. 373), and approximations to item characteristic curves (Tucker 1987, p. ii). Conditional average item scores tend to be regarded as one of the most fundamental and useful outputs of item analysis, because the \( {\overline{x}}_{ik} \) serve as the basis for calculating item difficulty indices such as the overall average item score (the variation of Eq. 2.1), item difficulties estimated for alternative examinee groups (Eq. 2.4), and item discrimination indices such as the point biserial correlation (Eq. 2.6). Because the \( 1-{\overline{x}}_{ik} \) scores are also related to the difficulty and discrimination indices, the percentages of examinees choosing different incorrect (i.e., distracter) options or omitting the item making up the \( 1-{\overline{x}}_{ik} \) scores can provide even more information about the item. Item reviews based on conditional average item scores and conditional proportions of examinees choosing distracters and omitting the item involve relatively detailed presentations of individual items rather than tabled listings of all items' difficulty and discrimination indices for an entire test. The greater detail conveyed in conditional average item scores has prompted consideration of the best approaches to estimation and display of results.
The simplest and most direct approach to estimating and presenting \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \) is based on the raw, unaltered conditional averages at each score of the total test. This approach has been considered in very early item analyses (Thurstone 1925) and also in more current psychometric investigations by ETS researchers Dorans and Holland (1993), Dorans and Kulick (1986), and Moses et al. (2010). Practical applications usually reveal that raw conditional average item scores are erratic and difficult to interpret without reference to measures of sampling instabilities (Livingston and Dorans 2004, p. 12).
Altered versions of \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \) have been considered and implemented in operational and research contexts at ETS. Operational applications favored grouping total test scores into five or six percentile categories, with equal or nearly equal numbers of examinees, and reporting conditional average item scores and percentages of examinees choosing incorrect options across these categories (Tucker 1987; Turnbull 1946; Wainer 1989). Other, less practical alterations of the \( {\overline{x}}_{ik} \) were considered in research contexts based on very large samples (N > 100,000), where, rather than categorizing the \( y_k \) scores, the \( {\overline{x}}_{ik} \) values were only presented at total test scores with more than 50 examinees (Lord 1965b). Questions remained about how to present \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \) at the uncategorized scores of the total test while also controlling for sampling variability (Wainer 1989, pp. 12–13).
Other research about item analysis has considered alterations of \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \) (Livingston and Dorans 2004; Lord 1965a, b; Ramsay 1991). Most of these alterations involved the application of models and smoothing methods to reveal trends and eliminate irregularities due to sampling fluctuations in \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \). Relatively strong mathematical models such as normal ogive and logistic functions have been found to be undesirable in theoretical discussions (i.e., the average slope of all test items' conditional average item scores does not reflect the normal ogive model; Lord 1965a) and in empirical investigations (Lord 1965b). Eventually,
the developers of the ETS system chose a more flexible approach—one that allows the estimated response curve to take the shape implied by the data. Nonmonotonic curves, such as those observed with distracters, can be easily fit by this approach. (Livingston and Dorans 2004, p. 2)
This approach utilizes a special version of kernel smoothing (Ramsay 1991) to replace each \( {\overline{x}}_{ik} \) or \( 1-{\overline{x}}_{ik} \) value with a weighted average of all k = 0 to I values:
$$ KS\left({\overline{x}}_{ik}\right)={\left(\sum_{l=0}^I{w}_{kl}\right)}^{-1}\sum_{l=0}^I{w}_{kl}{\overline{x}}_{il}. $$
The \( w_{kl} \) values of Eq. 2.14 are Gaussian weights used in the averaging,
$$ {w}_{kl}=\exp \left[\frac{-1}{2h}{\left(\frac{{y}_l-{y}_k}{\widehat{\sigma}(y)}\right)}^2\right]{n}_l, $$
where exp denotes exponentiation, \( n_l \) is the sample size at test score \( y_l \), and h is a kernel smoothing bandwidth parameter determining the extent of smoothing (usually set at \( 1.1N^{-0.2} \); Ramsay 1991). The rationale of the kernel smoothing procedure is to smooth out sampling irregularities by averaging adjacent \( {\overline{x}}_{ik} \) values, but also to track the general trends in \( {\overline{x}}_{ik} \) by giving the largest weights to the \( {\overline{x}}_{ik} \) values at y scores closest to \( y_k \) and at y scores with relatively large conditional sample sizes, \( n_l \). As indicated in the preceding Livingston and Dorans (2004) quote, the kernel smoothing in Eqs. 2.14 and 2.15 is also applied to the conditional percentages of examinees omitting and choosing each distracter that contribute to \( 1-{\overline{x}}_{ik} \). Standard errors and confidence bands of the raw and kernel-smoothed versions of \( {\overline{x}}_{ik} \) values have been described and evaluated in Lewis and Livingston (2004) and Moses et al. (2010).
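A sketch of the smoothing in Eqs. 2.14 and 2.15 follows (Python with NumPy). The weight formula follows Eq. 2.15 as printed, and the inputs (conditional averages, score points, and conditional sample sizes) are assumed to be precomputed:

```python
import numpy as np

def kernel_smooth(xbar_ik, y_scores, n_l, N, h=None):
    """Eqs. 2.14-2.15: replace each conditional average item score with a
    Gaussian-weighted average of all of them, weighting nearby and
    well-populated score points most heavily."""
    if h is None:
        h = 1.1 * N ** (-0.2)        # Ramsay's (1991) default bandwidth
    mean_y = np.average(y_scores, weights=n_l)
    sigma_y = np.sqrt(np.average((y_scores - mean_y) ** 2, weights=n_l))
    smoothed = np.empty_like(xbar_ik, dtype=float)
    for k, y_k in enumerate(y_scores):
        w = np.exp((-1.0 / (2.0 * h)) * ((y_scores - y_k) / sigma_y) ** 2) * n_l
        smoothed[k] = np.sum(w * xbar_ik) / np.sum(w)
    return smoothed
```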
3 Visual Displays of Item Analysis Results
Presentations of item analysis results have reflected increasingly refined integrations of indices and conditional response information. In this section, the figures and discussions from the previously cited investigations are reviewed to trace the progression of item analysis displays from pre-ETS origins to current ETS practice.
The original item analysis example is Thurstone's (1925) scaling study for items of the Binet–Simon test, an early version of the Stanford–Binet test (Becker 2003; Binet and Simon 1905). The Binet–Simon and Stanford–Binet intelligence tests represent some of the earliest adaptive tests, where examiners use information they have about an examinee's maturity level (i.e., mental age) to determine where to begin testing and then administer only those items that are of appropriate difficulty for that examinee. The use of multiple possible starting points, and subsets of items, results in limited test administration time and maximized information obtained from each item but also presents challenges in determining how items taken by different examinees translate into a coherent scale of score points and of mental age (Becker 2003).
Thurstone (1925) addressed questions about the Binet–Simon test scales by developing and applying the item analysis methods described in this chapter to Burt's (1921) study sample of 2764 examinees' Binet–Simon test and item scores. Some steps of these analyses involved creating graphs of each of the test's 65 items' proportions correct, \( {\overline{x}}_{ik} \), as a function of examinees' chronological ages, y. Then each item's "at par" (p. 444) age, \( y_k \), is found such that 50% of examinees answered the item correctly, \( {\overline{x}}_{ik}=0.5 \). Results of these steps for a subsample of the items were presented and analyzed in terms of plotted \( {\overline{x}}_{ik} \) values (reprinted in Fig. 2.1).
Thurstone's (1925) Figure 5, which plots proportions of correct response (vertical axis) to selected items from the Binet–Simon test among children in successive age groups (horizontal axis)
Thurstone's (1925) analyses included additional steps for mapping all 65 items' at par ages to an item difficulty scale for 3.5-year-old examinees:
First the proportions correct of the items taken by 3-year-old, 4-year-old, …, 14-year-old examinees were converted into indices similar to the delta index shown in Eq. 2.2. That is, Thurstone's deltas were computed as \( {\widehat{\varDelta}}_{ik}=0-(1){\varPhi}^{-1}\left({\overline{x}}_{ik}\right) \) (i.e., Eq. 2.2 with an intercept of 0 and a slope of 1), where the i subscript references the item and the k subscript references the age group responding to the item.
For the sets of common items administered to two adjacent age groups (e.g., items administered to 8-year-old examinees and to 7-year-old examinees), the two sets of average item scores, \( {\overline{x}}_{i7} \) and \( {\overline{x}}_{i8} \), were converted into deltas, \( {\widehat{\varDelta}}_{i7} \) and \( {\widehat{\varDelta}}_{i8} \).
The means and standard deviations of the two sets of deltas from the common items administered to two adjacent age groups (e.g., 7- and 8-year-old examinees) were used with Eq. 2.3 to transform the difficulties of items administered to older examinees to the difficulty scale of items administered to the younger examinees,
\( {\widehat{e}}_7\left({\widehat{\varDelta}}_{i8}\right)={\overline{\varDelta}}_{.,7}+\frac{{\widehat{\sigma}}_{.,7}\left(\varDelta \right)}{{\widehat{\sigma}}_{.,8}\left(\varDelta \right)}\left({\widehat{\varDelta}}_{i8}-{\overline{\varDelta}}_{.,8}\right). \)
Steps 1–3 were repeated for the two sets of items administered to adjacent age groups from ages 3 to 14 years, with the purpose of developing scale transformations for the item difficulties observed for each age group to the difficulty scale of 3.5-year-old examinees.
The transformations obtained in Steps 1–4 for scaling the item difficulties at each age group to the difficulty scale of 3.5-year-old examinees were applied to items' \( {\widehat{\varDelta}}_{ik} \) and \( {\overline{x}}_{ik} \) estimates nearest to the items' at par ages. For example, with items at an at par age of 7.9, two scale transformations would be averaged, one for converting the item difficulties of 7-year-old examinees to the difficulty scale of 3.5-year-old examinees and another for converting the item difficulties of 8-year-old examinees to the difficulty scale of 3.5-year-old examinees. For items with different at par ages, the scale transformations corresponding to those age groups would be averaged and used to convert to the difficulty scale of 3.5-year-old examinees.
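The chaining in Steps 1–4 amounts to composing the mean-sigma transformation of Eq. 2.3 across adjacent age groups. A minimal sketch (Python with NumPy; the per-link delta arrays are assumed to come from the common items of each adjacent pair of age groups):

```python
import numpy as np

def chain_to_youngest(links):
    """Each element of `links` is a pair (deltas_younger, deltas_older) computed
    on the items common to one pair of adjacent age groups, ordered from the
    youngest pair upward. Returns (slope, intercept) pairs mapping each older
    group's delta scale onto the youngest group's scale."""
    slope, intercept = 1.0, 0.0                       # identity for the base group
    composed = [(slope, intercept)]
    for d_young, d_old in links:
        s = d_young.std(ddof=0) / d_old.std(ddof=0)   # Eq. 2.3 slope for this link
        c = d_young.mean() - s * d_old.mean()         # Eq. 2.3 intercept for this link
        slope, intercept = slope * s, slope * c + intercept
        composed.append((slope, intercept))
    return composed
```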
Thurstone (1925) used Steps 1–5 to map all 65 of the Binet–Simon test items to a scale and to interpret items' difficulties for 3.5-year-old examinees (Fig. 2.2). Items 1–7 are located to the left of the horizontal value of 0 in Fig. 2.2, indicating that these items are relatively easy (i.e., have \( {\overline{x}}_{i3.5} \) values greater than 0.5 for the average 3.5-year-old examinee). Items to the right of the horizontal value of 0 in Fig. 2.2 are relatively difficult (i.e., have \( {\overline{x}}_{i3.5} \) values less than 0.5 for the average 3.5-year-old examinee). The items in Fig. 2.2 at horizontal values far above 0 (i.e., greater than the mean item difficulty value of 0 for 3.5-year-old examinees by a given number of standard deviation units) are so difficult that they would not actually be administered to 3.5-year-old examinees. For example, Item 44 was actually administered to examinees 7 years old and older, but this item corresponds to a horizontal value of 5 in Fig. 2.2, implying that its proportion correct is estimated as 0.5 for 3.5-year-old examinees who are 5 standard deviation units more intelligent than the average 3.5-year-old examinee. The presentation in Fig. 2.2 provided empirical evidence that allowed Thurstone (1925) to describe the limitations of assembled forms of Binet–Simon items for measuring the intelligence of examinees at different ability levels and ages: "…the questions are unduly bunched at certain ranges and rather scarce at other ranges" (p. 448). The methods Thurstone (1925) developed, and displayed in Figs. 2.1 and 2.2, were adapted and applied in item analysis procedures used at ETS (Gulliksen 1950, p. 368; Tucker 1987, p. ii).
Thurstone's (1925) Figure 6, which represents Binet–Simon test items' average difficulty on an absolute scale
Turnbull's (1946) presentation of item analysis results for an item from a 1946 College Entrance Examination Board test features an integration of tabular and graphical results, includes difficulty and discrimination indices, and also shows the actual multiple-choice item being analyzed (Fig. 2.3). The graph and table in Fig. 2.3 convey the same information, illustrating the categorization of the total test score into six categories with similar numbers of examinees (\( n_k \) = 81 or 82). Similar to Thurstone's conditional average item scores (Fig. 2.1), Turnbull's graphical presentation is based on a horizontal axis variable with few categories. The small number of categories limits sampling variability fluctuations in the conditional average item scores, but these categories are labeled in ways that conceal the actual total test scores corresponding to the conditional average item scores. In addition to presenting conditional average item scores, Turnbull's presentation reports conditional percentages of examinees choosing the item's four distracters. Wainer (1989, p. 10) pointed out that the item's correct option is not directly indicated but must be inferred to be the option with conditional scores that monotonically increase with the criterion categories. The item's overall average score (percentage choosing the right response) and biserial correlation, as well as initials of the staff who graphed and checked the results, are also included.
Turnbull's (1946) Figure 1, which reports a multiple-choice item's normalized graph (right) and table (left) for all of its response options for six groupings of the total test score
A successor of Turnbull's (1946) item analysis is the ETS version shown in Fig. 2.4 for a 1981 item from the PSAT/NMSQT® test (Wainer 1989). The presentation in Fig. 2.4 is completely tabular, with the top table showing conditional sample sizes of examinees choosing the correct option, the distracters, and omitting the item, at five categories of the total test scores (Tucker 1987). The lower table in Fig. 2.4 shows additional overall statistics such as sample sizes and PSAT/NMSQT scores for the group of examinees choosing each option and the group omitting the item, overall average PSAT/NMSQT score for examinees reaching the item (\( M_{\mathrm{TOTAL}} \)), observed deltas (\( \varDelta_O \)), deltas equated to a common scale using Eq. 2.3 (i.e., "equated deltas," \( \varDelta_E \)), percentage of examinees responding to the item (\( P_{\mathrm{TOTAL}} \)), percentage of examinees responding correctly to the item (\( P_{+} \)), and the biserial correlation (\( r_{\mathrm{bis}} \)). The lower table also includes an asterisk with the number of examinees choosing Option C to indicate that Option C is the correct option. Wainer used Turnbull's item presentation (Fig. 2.3) as a basis for critiquing the presentation of Fig. 2.4, suggesting that Fig. 2.4 could be improved by replacing the tabular presentation with a graphical one and also by including the actual item next to the item analysis results.
Wainer's (1989) Exhibit 1, which illustrates a tabular display of classical item indices for a PSAT/NMSQT test's multiple-choice item's five responses and omitted responses from 1981
The most recent versions of item analyses produced at ETS are presented in Livingston and Dorans (2004) and reprinted in Figs. 2.5–2.7. These analysis presentations include graphical presentations of conditional percentages choosing the item's correct option, distracters, omits, and not-reached responses at individual uncategorized criterion scores. The dashed vertical lines represent percentiles of the score distribution where the user can choose which percentiles to show (in this case, the 20th, 40th, 60th, 80th, and 90th percentiles). The figures' presentations also incorporate numerical tables to present overall statistics for the item options and criterion scores as well as observed item difficulty indices, item difficulty indices equated using Eqs. 2.3 and 2.4 (labeled as Ref. in the figures), r-biserial correlations (\( {\widehat{r}}_{\mathrm{polyreg}}\left({x}_i,y\right) \); Eq. 2.9), and percentages of examinees reaching the item. Livingston and Dorans provided instructive discussion of how the item analysis presentations in Figs. 2.5–2.7 can reveal the typical characteristics of relatively easy items (Fig. 2.5), items too difficult for the intended examinee population (Fig. 2.6), and items exhibiting other problems (Fig. 2.7).
Livingston and Dorans's (2004) Figure 1, which demonstrates classical item analysis results currently used at ETS, for a relatively easy item
Livingston and Dorans's (2004) Figure 5, which demonstrates classical item analysis results currently used at ETS, for a relatively difficult item
Livingston and Dorans's (2004) Figure 7, which demonstrates classical item analysis results currently used at ETS, for a problematic item
The results of the easy item shown in Fig. 2.5 are distinguished from those of the more difficult items in Figs. 2.6 and 2.7 in that the percentages of examinees choosing the correct option in Fig. 2.5 are 50% or greater for all examinees, and the percentages monotonically increase with the total test score. The items described in Figs. 2.6 and 2.7 exhibit percentages of examinees choosing the correct option that do not obviously rise for most criterion scores (Fig. 2.6) or do not rise more clearly than an intended incorrect option (Fig. 2.7). Livingston and Dorans (2004) interpreted Fig. 2.6 as indicative of an item that is too difficult for the examinees, where examinees do not clearly choose the correct option, Option E, at a higher rate than distracter C, except for the highest total test scores (i.e., the best performing examinees). Figure 2.7 is interpreted as indicative of an item that functions differently from the skill measured by the test (Livingston and Dorans 2004), where the probability of answering the item correctly is low for examinees at all score levels, where it is impossible to identify the correct answer (D) from the examinee response data, and where the most popular response for most examinees is to omit the item. Figures 2.6 and 2.7 are printed with statistical flags that indicate their problematic results, where the "r" flags indicate r-biserial correlations that are very low and even negative and the "D" flags indicate that high-performing examinees obtaining high percentiles of the criterion scores are more likely to choose one or more incorrect options rather than the correct option.
4 Roles of Item Analysis in Psychometric Contexts
4.1 Differential Item Functioning, Item Response Theory, and Conditions of Administration
The methods of item analysis described in the previous sections have been used for purposes other than informing item reviews and test form assembly with dichotomously scored multiple-choice items. In this section, ETS researchers' applications of item analysis to psychometric contexts such as differential item functioning (DIF), item response theory (IRT), and evaluations of item order and context effects are summarized. The applications of item analysis in these areas have produced results that are useful supplements to those produced by the alternative psychometric methods.
4.2 Subgroup Comparisons in Differential Item Functioning
Item analysis methods have been applied to compare an item's difficulty for different examinee subgroups. These DIF investigations focus on "unexpected" performance differences for examinee subgroups that are matched in terms of their overall ability or their performance on the total test (Dorans and Holland 1993, p. 37). One DIF procedure developed at ETS is based on evaluating whether the differences between two subgroups' conditional average item scores differ from 0 (i.e., standardization; Dorans and Kulick 1986):
$$ {\overline{x}}_{ik,1}-{\overline{x}}_{ik,2}\ne 0,k=0,\dots, I. $$
Another statistical procedure applied to DIF investigations is based on evaluating whether the odds ratios in subgroups for an item i differ from 1 (i.e., the Mantel–Haenszel statistic; Holland and Thayer 1988; Mantel and Haenszel 1959):
$$ \frac{{\overline{x}}_{ik,1}/\left(1-{\overline{x}}_{ik,1}\right)}{{\overline{x}}_{ik,2}/\left(1-{\overline{x}}_{ik,2}\right)}\ne 1,k=0,\dots, I. $$
Most DIF research and investigations focus on averages of Eq. 2.16 with respect to one "standardization" subgroup's total score distribution (Dorans and Holland 1993, pp. 48–49) or averages of Eq. 2.17 with respect to the combined subgroups' test score distributions (Holland and Thayer 1988, p. 134). Summary indices created from Eqs. 2.16 and 2.17 can be interpreted as an item's average difficulty difference for the two matched or standardized subgroups, expressed either in terms of the item's original scale (like Eq. 2.1) or in terms of the delta scale (like Eq. 2.2; Dorans and Holland 1993).
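Both summary indices can be sketched directly from the conditional quantities (Python with NumPy; the per-score-level counts and proportions are assumed as inputs, and empty score levels are not handled in this sketch):

```python
import numpy as np

def standardization_dif(xbar_ik_1, xbar_ik_2, p_k_std):
    """Average of the Eq. 2.16 conditional differences, weighted by a
    standardization subgroup's total-score distribution (Dorans and Kulick 1986)."""
    return np.sum(p_k_std * (xbar_ik_1 - xbar_ik_2))

def mantel_haenszel_odds_ratio(right_1, total_1, right_2, total_2):
    """Mantel-Haenszel common odds ratio, pooling the Eq. 2.17 conditional
    odds ratios over the score levels k (Holland and Thayer 1988)."""
    t_k = total_1 + total_2
    num = np.sum(right_1 * (total_2 - right_2) / t_k)
    den = np.sum(right_2 * (total_1 - right_1) / t_k)
    return num / den
```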
DIF investigations based on averages of Eqs. 2.16 and 2.17 have also been supplemented with more detailed evaluations, such as the subgroups' average item score differences at each of the total test scores indicated in Eq. 2.16. For example, Dorans and Holland (1993) described how the conditional average item score differences in Eq. 2.16 can reveal more detailed aspects of an item's differential functioning, especially when supplemented with conditional comparisons of matched subgroups' percentages choosing the item's distracters or of omitting the item. In ETS practice, conditional evaluations are implemented as comparisons of subgroups' conditional \( {\overline{x}}_{ik} \) and \( 1-{\overline{x}}_{ik} \) values after these values have been estimated with kernel smoothing (Eqs. 2.14 and 2.15). Recent research has shown that evaluations of differences in subgroups' conditional \( {\overline{x}}_{ik} \) values can be biased when estimated with kernel smoothing and that more accurate subgroup comparisons of the conditional \( {\overline{x}}_{ik} \) values can be obtained when estimated with logistic regression or loglinear models (Moses et al. 2010).
4.3 Comparisons and Uses of Item Analysis and Item Response Theory
Comparisons of item analysis and IRT with respect to methods, assumptions, and results have been an interest of early and contemporary psychometrics (Bock 1997; Embretson and Reise 2000; Hambleton 1989; Lord 1980; Lord and Novick 1968). These comparisons have also motivated considerations for updating and replacing item analysis procedures at ETS. In early years at ETS, potential IRT applications to item analysis were dismissed due to the computational complexities of IRT model estimation (Livingston and Dorans 2004) and also because of the estimation inaccuracies resulting from historical attempts to address the computational complexities (Tucker 1981). Some differences in the approaches' purposes initially slowed the adaptation of IRT to item analysis, as IRT methods were regarded as less oriented to the item analysis goals of item review and revision (Tucker 1987, p. iv). IRT models have also been interpreted to be less flexible in terms of reflecting the shapes of item response curves implied by actual data (Haberman 2009, p. 15; Livingston and Dorans 2004, p. 2).
This section presents a review of ETS contributions describing how IRT compares with item analysis. The contributions are reviewed with respect to the approaches' similarities, the approaches' invariance assumptions, and demonstrations of how item analysis can be used to evaluate IRT model fit. To make the discussions more concrete, the reviews are presented in terms of the following two-parameter normal ogive IRT model:
$$ \mathrm{prob}\left({x}_i=1|\theta, {a}_i,{b}_i\right)=\underset{-\infty }{\overset{a_i\left(\theta -{b}_i\right)}{\int }}\frac{1}{\sqrt{2\pi }}\exp \left(\frac{-{t}^2}{2}\right) dt $$
where the probability of a correct response to dichotomously scored Item i is modeled as a function of an examinee's latent ability, θ, Item i's difficulty, \( b_i \), and discrimination, \( a_i \) (Lord 1980). Alternative IRT models are reviewed by ETS researchers Lord (1980), Yen and Fitzpatrick (2006), and others (Embretson and Reise 2000; Hambleton 1989).
4.3.1 Similarities of Item Response Theory and Item Analysis
Item analysis and IRT appear to have several conceptual similarities. Both approaches can be described as predominantly focused on items and on the implications of items' statistics for assembling test forms with desirable measurement properties (Embretson and Reise 2000; Gulliksen 1950; Wainer 1989; Yen and Fitzpatrick 2006). The approaches have similar historical origins, as the Thurstone (1925) item scaling study that influenced item analysis (Gulliksen 1950; Tucker 1987) has also been described as an antecedent of IRT methods (Bock 1997, pp. 21–23; Thissen and Orlando 2001, pp. 79–83). The kernel smoothing methods used to depict conditional average item scores in item analysis (Eqs. 2.14 and 2.15) were originally developed as an IRT method that is nonparametric with respect to the shapes of its item response functions (Ramsay 1991, 2000).
In Lord and Novick (1968) and Lord (1980), the item difficulty and discrimination parameters of IRT models and item analysis are systematically related, and one can be approximated by a transformation of the other. The following assumptions are made to show the mathematical relationships (though these assumptions are not requirements of IRT models):
The two-parameter normal ogive model in Eq. 2.18 is correct (i.e., no guessing).
The regression of x i on θ is linear with error variances that are normally distributed and homoscedastic.
Variable θ follows a standard normal distribution.
The reliability of total score y is high.
Variable y is linearly related to θ.
With the preceding assumptions, the item discrimination parameter of the IRT model in Eq. 2.18 can be approximated from the item's biserial correlation as
$$ {a}_i\approx \frac{r_{\mathrm{biserial}}\left({x}_i,y\right)}{\sqrt{1-{r}_{\mathrm{biserial}}{\left({x}_i,y\right)}^2}}. $$
With the preceding assumptions, the item difficulty parameter of the IRT model in Eq. 2.18 can be approximated as
$$ {b}_i\approx \frac{l{\varDelta}_i}{r_{\mathrm{biserial}}\left({x}_i,y\right)}, $$
where \( l\varDelta_i \) is a linear transformation of the delta (Eq. 2.2). Although IRT does not require the assumptions listed earlier, the relationships in Eqs. 2.19 and 2.20 are used in some IRT estimation software to provide initial estimates in an iterative procedure to estimate \( a_i \) and \( b_i \) (Zimowski et al. 2003).
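Under the listed assumptions, Eqs. 2.19 and 2.20 translate directly into start-value computations (Python with NumPy; `l_delta`, the linearly rescaled delta, is a hypothetical input):

```python
import numpy as np

def normal_ogive_start_values(r_bis, l_delta):
    """Eqs. 2.19-2.20: approximate two-parameter normal-ogive item parameters
    from a biserial correlation and a linearly transformed delta."""
    a_i = r_bis / np.sqrt(1.0 - r_bis ** 2)   # discrimination (Eq. 2.19)
    b_i = l_delta / r_bis                     # difficulty (Eq. 2.20)
    return a_i, b_i
```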
4.3.2 Comparisons and Contrasts in Assumptions of Invariance
One frequently described contrast of item analysis and IRT approaches is with respect to their apparent invariance properties (Embretson and Reise 2000; Hambleton 1989; Yen and Fitzpatrick 2006). A simplified statement of the question of interest is, When a set of items is administered to two not necessarily equal groups of examinees and then item difficulty parameters are estimated in the examinee groups using item analysis and IRT approaches, which approach's parameter estimates are more invariant to examinee group differences? ETS scientists Linda L. Cook, Daniel Eignor, and Hessy Taft (1988) compared the group sensitivities of item analysis deltas and IRT difficulty estimates after estimation and equating using achievement test data, sets of similar examinee groups, and other sets of dissimilar examinee groups. L. L. Cook et al.'s results indicate that equated deltas and IRT models' equated difficulty parameters are similar with respect to their stabilities and their potential for group dependence problems. Both approaches produced inaccurate estimates with very dissimilar examinee groups, results which are consistent with those of equating studies reviewed by ETS scientists L. L. Cook and Petersen (1987) and equating studies conducted by ETS scientists Lawrence and Dorans (1990), Livingston, Dorans, and Nancy Wright (1990), and Schmitt, Cook, Dorans, and Eignor (1990). The empirical results showing that difficulty estimates from item analysis and IRT can exhibit similar levels of group dependence tend to be underemphasized in psychometric discussions, which gives the impression that estimated IRT parameters are more invariant than item analysis indices (Embretson and Reise 2000, pp. 24–25; Hambleton 1989, p. 147; Yen and Fitzpatrick 2006, p. 111).
4.3.3 Uses of Item Analysis Fit Evaluations of Item Response Theory Models
Some ETS researchers have suggested the use of item analysis to evaluate IRT model fit (Livingston and Dorans 2004; Wainer 1989). The average item scores conditioned on the observed total test score, \( {\overline{x}}_{ik} \), of interest in item analysis have been used as a benchmark for considering whether the normal ogive or logistic functions assumed in IRT models can be observed in empirical test data (Lord 1965a, b, 1970). One recent application by ETS scientist Sinharay (2006) utilized \( {\overline{x}}_{ik} \) to describe and evaluate the fit of IRT models by considering how well the IRT models' posterior predictions of \( {\overline{x}}_{ik} \) fit the \( {\overline{x}}_{ik} \) values obtained from the raw data. Another recent investigation compared IRT models' \( {\overline{x}}_{ik} \) values to those obtained from loglinear models of test score distributions (Moses 2016).
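For readers who want to reproduce such benchmark comparisons, a minimal R sketch of the empirical conditional average item scores is given below; the function and variable names are ours, and the simulated data are purely illustrative.

```r
## Hedged sketch: empirical conditional average item scores, i.e., the mean
## score on each item among examinees grouped by observed total test score.
cond_avg_item_scores <- function(responses) {  # examinees x items, scored 0/1
  total <- rowSums(responses)
  apply(responses, 2, function(item) tapply(item, total, mean))
}
set.seed(42)                                   # illustrative simulated data
resp <- matrix(rbinom(200 * 5, 1, 0.6), nrow = 200)
head(cond_avg_item_scores(resp))               # rows: total scores; cols: items
```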
4.4 Item Context and Order Effects
A basic assumption of some item analyses is that items' statistical measures will be consistent if those items are administered in different contexts, locations, or positions (Lord and Novick 1968, p. 327). Although this assumption is necessary for supporting items' administration in adaptive contexts (Wainer 1989), examples in large-scale testing indicate that it is not always tenable (Leary and Dorans 1985; Zwick 1991). Empirical investigations of order and context effects have often focused on changes in IRT estimates across administrations (e.g., Kingston and Dorans 1984). Other evaluations by ETS researchers Dorans and Lawrence (1990) and Moses et al. (2007) have focused on the implications of changes in item statistics for the total test score distributions of randomly equivalent examinee groups. These investigations have a basis in Gulliksen's (1950) attention to how item difficulty affects the distribution of the total test score (Eqs. 2.10 and 2.11). That is, the Dorans and Lawrence (1990) study focused on the changes in total test score means and variances that resulted from changes in the positions of items and intact sections of items. The Moses et al. (2007) study focused on changes in entire test score distributions that resulted from changes in the positions of items and from changes in the positions of intact sets of items that followed written passages.
4.5 Analyses of Alternate Item Types and Scores
At ETS, considerable discussion has been devoted to adapting and applying item analysis approaches to items that are not dichotomously scored. Indices of item difficulty and discrimination can be extended, modified, or generalized to account for examinees' assumed guessing tendencies and omissions (Gulliksen 1950; Lord and Novick 1968; Myers 1959). Average item scores (Eq. 2.1), point biserial correlations (Eq. 2.5), r-polyreg correlations (Eq. 2.9), and conditional average item scores have been adapted and applied in the analysis of polytomously scored items. Investigations of DIF based on comparing subgroups' average item scores conditioned on total test scores as in Eq. 2.16 have been considered for polytomously scored items by ETS researchers, including Dorans and Schmitt (1993), Moses et al. (2013), and Zwick et al. (1997). At the time of this writing, there is great interest in developing more innovative items that utilize computer delivery and are more interactive in how they engage examinees. With appropriate applications and possible additional refinements, the item analysis methods described in this chapter should have relevance for reviews of innovative item types and for attending to these items' potential adaptive administration contexts, IRT models, and the test forms that might be assembled from them.
Alternative expressions to the average item score computations shown in Eq. 2.1 are available in other sources. Expressions involving summations with respect to examinees are shown in Gulliksen (1950) and Lord and Novick (1968). More elaborate versions of Eq. 2.1 that address polytomously scored items and tests composed of both dichotomously and polytomously scored items have also been developed (J. Carlson, personal communication, November 6, 2013).
In addition to the item analysis issues illustrated in Fig. 2.4 and in Wainer (1989), this particular item was the focus of additional research and discussion, which can be found in Wainer (1983).
Becker, K. A. (2003). History of the Stanford–Binet intelligence scales: Content and psychometrics (Stanford–Binet intelligence scales, 5th Ed. Assessment Service Bulletin no. 1). Itasca: Riverside.
Binet, A., & Simon, T. (1905). Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux [New methods for the diagnosis of levels of intellectual abnormality]. L'Année Psychologique, 11, 191–244. https://doi.org/10.3406/psy.1904.3675.
Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee's ability. In F. M. Lord & M. R. Novick (Eds.), Statistical theories of mental test scores (pp. 374–472). Reading: Addison-Wesley.
Bock, R. D. (1997). A brief history of item response theory. Educational Measurement: Issues and Practice, 16(4), 21–33. https://doi.org/10.1111/j.1745-3992.1997.tb00605.x.
Brigham, C. C. (1932). A study of error. New York: College Entrance Examination Board.
Brogden, H. E. (1949). A new coefficient: Application to biserial correlation and to estimation of selective efficiency. Psychometrika, 14, 169–182. https://doi.org/10.1007/BF02289151.
Burt, C. (1921). Mental and scholastic tests. London: King.
Clemens, W. V. (1958). An index of item-criterion relationship. Educational and Psychological Measurement, 18, 167–172. https://doi.org/10.1177/001316445801800118.
Cook, W. W. (1932). The measurement of general spelling ability involving controlled comparisons between techniques. Iowa City: University of Iowa Studies in Education.
Cook, L. L., & Petersen, N. S. (1987). Problems related to the use of conventional and item response theory equating methods in less than optimal circumstances. Applied Psychological Measurement, 11, 225–244. https://doi.org/10.1177/014662168701100302.
Cook, L. L., Eignor, D. R., & Taft, H. L. (1988). A comparative study of the effects of recency of instruction on the stability of IRT and conventional item parameter estimates. Journal of Educational Measurement, 25, 31–45. https://doi.org/10.1111/j.1745-3984.1988.tb00289.x.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334. https://doi.org/10.1007/BF02310555.
Dorans, N. J., & Holland, P. W. (1993). DIF detection and description: Mantel–Haenszel and standardization. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 35–66). Hillsdale: Erlbaum.
Dorans, N. J., & Kulick, E. (1986). Demonstrating the utility of the standardization approach to assessing unexpected differential item performance on the scholastic aptitude test. Journal of Educational Measurement, 23, 355–368. https://doi.org/10.1111/j.1745-3984.1986.tb00255.x.
Dorans, N. J., & Lawrence, I. M. (1990). Checking the statistical equivalence of nearly identical test editions. Applied Measurement in Education, 3, 245–254. https://doi.org/10.1207/s15324818ame0303_3.
DuBois, P. H. (1942). A note on the computation of biserial r in item validation. Psychometrika, 7, 143–146. https://doi.org/10.1007/BF02288074.
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Hillsdale: Erlbaum.
Fan, C.-T. (1952). Note on construction of an item analysis table for the high-low-27-per-cent group method (Research Bulletin no. RB-52-13). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1952.tb00227.x
Green, B. F., Jr. (1951). A note on item selection for maximum validity (Research Bulletin no. RB-51-17). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1951.tb00217.x
Guilford, J. P. (1936). Psychometric methods. New York: McGraw-Hill.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley. https://doi.org/10.1037/13240-000.
Haberman, S. J. (2009). Use of generalized residuals to examine goodness of fit of item response models (Research Report No. RR-09-15). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.2009.tb02172.x
Hambleton, R. K. (1989). Principles and selected applications of item response theory. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 147–200). Washington, DC: American Council on Education.
Holland, P. W. (2008, March). The first four generations of test theory. Paper presented at the ATP Innovations in Testing Conference, Dallas, TX.
Holland, P. W., & Thayer, D. T. (1985). An alternative definition of the ETS delta scale of item difficulty (Research Report No. RR-85-43). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2330-8516.1985.tb00128.x
Holland, P. W., & Thayer, D. T. (1988). Differential item performance and the Mantel–Haenszel procedure. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 129–145). Hillsdale: Erlbaum.
Horst, P. (1933). The difficulty of a multiple choice test item. Journal of Educational Psychology, 24, 229–232. https://doi.org/10.1037/h0073588.
Horst, P. (1936). Item selection by means of a maximizing function. Psychometrika, 1, 229–244. https://doi.org/10.1007/BF02287875.
Kingston, N. M., & Dorans, N. J. (1984). Item location effects and their implications for IRT equating and adaptive testing. Applied Psychological Measurement, 8, 147–154. https://doi.org/10.1177/014662168400800202.
Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. Psychometrika, 2, 151–160. https://doi.org/10.1007/BF02288391.
Lawrence, I. M., & Dorans, N. J. (1990). Effect on equating results of matching samples on an anchor test. Applied Measurement in Education, 3, 19–36. https://doi.org/10.1207/s15324818ame0301_3.
Leary, L. F., & Dorans, N. J. (1985). Implications for altering the context in which test items appear: A historical perspective on an immediate concern. Review of Educational Research, 55, 387–413. https://doi.org/10.3102/00346543055003387.
Lentz, T. F., Hirshstein, B., & Finch, J. H. (1932). Evaluation of methods of evaluating test items. Journal of Educational Psychology, 23, 344–350. https://doi.org/10.1037/h0073805.
Lewis, C., & Livingston, S. A. (2004). Confidence bands for a response probability function estimated by weighted moving average smoothing. Unpublished manuscript.
Lewis, C., Thayer, D., & Livingston, S. A. (n.d.). A regression-based polyserial correlation coefficient. Unpublished manuscript.
Livingston, S. A., & Dorans, N. J. (2004). A graphical approach to item analysis (Research Report No. RR-04-10). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.2004.tb01937.x
Livingston, S. A., Dorans, N. J., & Wright, N. K. (1990). What combination of sampling and equating methods works best? Applied Measurement in Education, 3, 73–95. https://doi.org/10.1207/s15324818ame0301_6.
Long, J. A., & Sandiford, P. (1935). The validation of test items. Bulletin of the Department of Educational Research, Ontario College of Education, 3, 1–126.
Lord, F. M. (1950). Properties of test scores expressed as functions of the item parameters (Research Bulletin no. RB-50-56). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1950.tb00919.x
Lord, F. M. (1961). Biserial estimates of correlation (Research Bulletin no. RB-61-05). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1961.tb00105.x
Lord, F. M. (1965a). A note on the normal ogive or logistic curve in item analysis. Psychometrika, 30, 371–372. https://doi.org/10.1007/BF02289500
Lord, F. M. (1965b). An empirical study of item-test regression. Psychometrika, 30, 373–376. https://doi.org/10.1007/BF02289501
Lord, F. M. (1970). Item characteristic curves estimated without knowledge of their mathematical form—a confrontation of Birnbaum's logistic model. Psychometrika, 35, 43–50. https://doi.org/10.1007/BF02290592
Mantel, N., & Haenszel, W. M. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719–748.
Moses, T. (2016). Estimating observed score distributions with loglinear models. In W. J. van der Linden & R. K. Hambleton (Eds.), Handbook of item response theory (2nd ed., pp. 71–85). Boca Raton: CRC Press.
Moses, T., Yang, W., & Wilson, C. (2007). Using kernel equating to check the statistical equivalence of nearly identical test editions. Journal of Educational Measurement, 44, 157–178. https://doi.org/10.1111/j.1745-3984.2007.00032.x.
Moses, T., Miao, J., & Dorans, N. J. (2010). A comparison of strategies for estimating conditional DIF. Journal of Educational and Behavioral Statistics, 6, 726–743. https://doi.org/10.3102/1076998610379135.
Moses, T., Liu, J., Tan, A., Deng, W., & Dorans, N. J. (2013). Constructed response DIF evaluations for mixed format tests (Research Report No. RR-13-33) Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.2013.tb02340.x
Myers, C. T. (1959). An evaluation of the "not-reached" response as a pseudo-distracter (Research Memorandum No. RM-59-06). Princeton: Educational Testing Service.
Olson, J. F., Scheuneman, J., & Grima, A. (1989). Statistical approaches to the study of item difficulty (Research Report No. RR-89-21). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2330-8516.1989.tb00136.x
Olsson, U., Drasgow, F., & Dorans, N. J. (1982). The polyserial correlation coefficient. Psychometrika, 47, 337–347. https://doi.org/10.1007/BF02294164.
Pearson, K. (1895). Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material. Philosophical Transactions of the Royal Society, 186, 343–414. https://doi.org/10.1098/rsta.1895.0010.
Pearson, K. (1909). On a new method for determining the correlation between a measured character a, and a character B. Biometrika, 7, 96–105. https://doi.org/10.1093/biomet/7.1-2.96.
Ramsay, J. O. (1991). Kernel smoothing approaches to nonparametric item characteristic curve estimation. Psychometrika, 56, 611–630. https://doi.org/10.1007/BF02294494.
Ramsay, J. O. (2000). TESTGRAF: A program for the graphical analysis of multiple-choice test and questionnaire data [Computer software and manual]. Retrieved from http://www.psych.mcgill.ca/faculty/ramsay/ramsay.html
Richardson, M. W. (1936). Notes on the rationale of item analysis. Psychometrika, 1, 69–76. https://doi.org/10.1007/BF02287926.
Schmitt, A. P., Cook, L. L., Dorans, N. J., & Eignor, D. R. (1990). Sensitivity of equating results to different sampling strategies. Applied Measurement in Education, 3, 53–71. https://doi.org/10.1207/s15324818ame0301_5.
Sinharay, S. (2006). Bayesian item fit analysis for unidimensional item response theory models. British Journal of Mathematical and Statistical Psychology, 59, 429–449. https://doi.org/10.1348/000711005X66888.
Sorum, M. (1958). Optimum item difficulty for a multiple-choice test (Research memorandum no. RM-58-06). Princeton: Educational Testing Service.
Swineford, F. (1936). Biserial r versus Pearson r as measures of test-item validity. Journal of Educational Psychology, 27, 471–472. https://doi.org/10.1037/h0052118.
Swineford, F. (1959, February). Some relations between test scores and item statistics. Journal of Educational Psychology, 50(1), 26–30. https://doi.org/10.1037/h0046332.
Symonds, P. M. (1929). Choice of items for a test on the basis of difficulty. Journal of Educational Psychology, 20, 481–493. https://doi.org/10.1037/h0075650.
Tate, R. F. (1955a). Applications of correlation models for biserial data. Journal of the American Statistical Association, 50, 1078–1095. https://doi.org/10.1080/01621459.1955.10501293.
Tate, R. F. (1955b). The theory of correlation between two continuous variables when one is dichotomized. Biometrika, 42, 205–216. https://doi.org/10.1093/biomet/42.1-2.205.
Thissen, D., & Orlando, M. (2001). Item response theory for items scored in two categories. In D. Thissen & H. Wainer (Eds.), Test scoring (pp. 73–140). Mahwah: Erlbaum.
Thurstone, L. L. (1925). A method of scaling psychological and educational tests. Journal of Educational Psychology, 16, 433–451. https://doi.org/10.1037/h0073357.
Thurstone, L. L. (1947). The calibration of test items. American Psychologist, 3, 103–104. https://doi.org/10.1037/h0057821.
Traub, R. E. (1997). Classical test theory in historical perspective. Educational Measurement: Issues and Practice, 16(4), 8–14. https://doi.org/10.1111/j.1745-3992.1997.tb00603.x.
Tucker, L. R. (1948). A method for scaling ability test items taking item unreliability into account. American Psychologist, 3, 309–310.
Tucker, L. R. (1981). A simulation–Monte Carlo study of item difficulty measures delta and D.6 (Research Report No. RR-81-06). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1981.tb01239.x
Tucker, L. R. (1987). Developments in classical item analysis methods (Research Report No. RR-87-46). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2330-8516.1987.tb00250.x
Turnbull, W. W. (1946). A normalized graphic method of item analysis. Journal of Educational Psychology, 37, 129–141. https://doi.org/10.1037/h0053589.
Wainer, H. (1983). Pyramid power: Searching for an error in test scoring with 830,000 helpers. American Statistician, 37, 87–91. https://doi.org/10.1080/00031305.1983.10483095.
Wainer, H. (1989, Summer). The future of item analysis. Journal of Educational Measurement, 26, 191–208.
Yen, W. M., & Fitzpatrick, A. R. (2006). Item response theory. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 111–153). Westport: American Council on Education and Praeger.
Zimowski, M. F., Muraki, E., Mislevy, R. J., & Bock, R. D. (2003). BILOG-MG [computer software]. Lincolnwood: Scientific Software International.
Zwick, R. (1991). Effects of item order and context on estimation of NAEP Reading proficiency. Educational Measurement: Issues and Practice, 10, 10–16. https://doi.org/10.1111/j.1745-3992.1991.tb00198.x.
Zwick, R., Thayer, D. T., & Mazzeo, J. (1997). Describing and categorizing DIF in polytomous items (Research Report No. RR-97-05). Princeton: Educational Testing Service. http://dx.doi.org/10.1002/j.2333-8504.1997.tb01726.x
This manuscript was significantly improved from earlier versions thanks to reviews and suggestions from Jim Carlson, Neil Dorans, Skip Livingston and Matthias von Davier, and editorial work from Kim Fryer.
College Board, New York, NY, USA
Tim Moses
Correspondence to Tim Moses.
Moses, T. (2017). A Review of Developments and Applications in Item Analysis. In: Bennett, R., von Davier, M. (eds) Advancing Human Assessment. Methodology of Educational Measurement and Assessment. Springer, Cham. https://doi.org/10.1007/978-3-319-58689-2_2
Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers
Rob Eisinga (ORCID: orcid.org/0000-0002-8349-5226), Tom Heskes, Ben Pelzer & Manfred Te Grotenhuis
The Friedman rank sum test is a widely-used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation.
We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs poorly relative to exact computation overall, others, particularly the normal, perform well, except in the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets.
We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software and the implementation in R is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip.
The Friedman [1] rank sum test is a widely-used nonparametric method for the analysis of several related samples in computational biology and other fields. It is used, for example, to compare the performance results of a set of (expression-based) classifiers over multiple datasets, covering case problems, benchmark functions, or performance indicators [2,3,4]. Some recent examples of the numerous applications of the Friedman test in bioinformatics include [5,6,7,8,9,10,11,12,13,14,15,16,17]. The test is supported by many statistical software packages and it is routinely discussed in textbooks on nonparametric statistics [18,19,20,21,22,23].
The Friedman test procedure is an analysis of variance by ranks – i.e., of observed rank scores or of rank scores obtained by ordering ordinal or numerical outcomes – that is used when one is not willing to make strong distributional assumptions. A common approach is to present the test as a method for identifying treatment effects of k different treatments in a so-called randomized complete block design. This design uses n sets, called blocks, of k homogeneous units matched on some relevant characteristic, for example patients matched on SNP genotype. The k treatments are assigned randomly to the k units within each block, with each treatment condition being administered once within a block. The Friedman test is also conducted if the samples concern a repeated measures design. In such a design each experimental unit constitutes a block that serves in all treatment conditions. Examples are provided by experiments in which k different treatments (classifiers) are compared on a single experimental unit (dataset), or in which k units (e.g., genes, products, candidates) are ranked in order by each of n 'judges' (algorithms, panelists). In all these settings the objective is to determine whether the k populations from which the observations were made are identically distributed.
Applied to classifier comparison, the null hypothesis for the Friedman test is that the performance results of the k classifiers over n datasets are samples that have been drawn from the same population or, equivalently, from different populations with the same distribution [18]. To examine this hypothesis, the test determines whether the rank sums of the k classifiers included in the comparison are significantly different. After applying the omnibus Friedman test and observing that the rank sums are different, the next step is to compare all classifiers against each other or against a baseline classifier (e.g., newly proposed method or best performing algorithm). In doing so, a series of comparisons of rank sums (i.e., rank sum difference tests) is performed, adjusting the significance level using a Bonferroni correction or more powerful approaches to control the familywise Type-I error rate [3, 4].
Preferably one should use the exact null distribution of the rank sum difference statistic in these subsequent analyses. Only if the decision on the null hypothesis is based on the exact distribution is the probability of committing a Type-I error known exactly. However, the exact distribution and the associated true tail probabilities are not yet adequately known. To be sure, tables of exact critical values at standard significance levels (e.g., [18, 21, 22]) and of exact p-values [24] are available for small values of k and n, for which complete enumeration is possible. Also, the lower order moments of the exact sampling distribution have been documented in detail [25], and Stuart [26] proved the conjecture of Whitfield [24] that, on the null hypothesis, the difference between rank sum values is asymptotically normally distributed as n tends to infinity. Further, in a recent study Koziol [27] used symbolic computation for finding the distribution of absolute values of differences in rank sums. Apart from these important contributions there is, to the best of our knowledge, no publication available in the probability theory, rank statistics or other literature that addresses the exact distribution of the rank sum difference statistic.
As the null distribution in the general case is unknown and exact computation seemingly intractable, it is generally recommended to apply a large-sample approximation method to test the significance of the pairwise difference in rank sums. It is well known, however, that calculating probabilities using an asymptotic variant of an exact test may lead to inaccurate p-values when the sample size n is small, as in most applications of the Friedman test, and thereby to a false acceptance or false rejection of the null hypothesis. Furthermore, there are several large-sample tests available with different limiting distributions, and these tests may vary substantially in their results. Consequently, there is little agreement in the nonparametric literature over which approximate method is most appropriate to employ for the comparison of Friedman rank sums [22]. This statement refers both to approximate tests of significance for the comparison of all \( \binom{k}{2} = k(k-1)/2 \) pairs of treatments and to tests for the comparison of k − 1 treatments with a single control. Obviously, the utility of the asymptotic tests depends on their accuracy in approximating the exact sampling distribution of the discrete rank sum difference statistic.
The purpose of this note is twofold. The foremost aim is to provide an expression for calculating the exact probability mass function of the pairwise differences in Friedman rank sums. This expression may be employed to quickly calculate the exact p-value and associated statistics such as the critical difference value. The calculation does not require a complicated algorithm and as it is easily incorporated into a computer program, there is no longer need to resort to asymptotic p-values. We illustrate the exact method in the context of two recently published analyses of the performance of classifiers and data projection methods. The second aim is to examine under what circumstances the exact distribution and the associated exact statistics offer an improvement over traditional approximate methods for Friedman rank sum comparison.
It is important to note at the outset that this article is not concerned with the Friedman test itself. The Friedman test is an over-all test that evaluates the joint distribution of rank sums to examine equality in treatment distributions. Computation of the exact joint distribution under the null is discussed by van de Wiel [28], and an efficient algorithm for computing the exact permutation distribution of the Friedman test statistic is available in StatXact [29]. The present paper offers an over-all exact test for pairwise comparison of Friedman rank sums. The reason is essentially that researchers are usually not only interested in knowing whether any difference exists among treatments, but also in discovering which treatments are different from each other, and the Friedman test is not designed for this purpose. Although the type of test dealt with here is not the same as the Friedman test, we will briefly discuss the latter as the procedures have important elements in common, such as the global null hypothesis. Also, we assume in our discussion that the available data are such that it is appropriate to perform simultaneous rank sum tests. Hence, we ignore empirical issues such as insufficient power (too few datasets), selection bias (nonrandom selection of datasets), and like complications that, as noted by Boulesteix et al. ([30]; see also [31]), tend to invalidate statistical inference in comparative benchmarking studies of machine learning methods solving real-world problems. In ANOVA, the term 'treatment' is used as a common term for the grouping variable for which a response is measured. To accommodate the wide variety of applications of the Friedman test, the more general term 'group' is used instead of 'treatment' in the remainder of this paper. The term subject is used hereafter to include both objects and individuals.
Friedman data
To perform the Friedman test the observed data are arranged in the form of a complete two-way layout, as in Table 1A, where the k rows represent the groups (classifiers) and the n columns represent the blocks (datasets).
Table 1 Two-way layout for Friedman test
The data consist of n blocks with k observations within each block. Observations in different blocks are assumed to be independent. This assumption does not apply to the k observations within a block. The test procedure remains valid despite within-block dependencies [32]. The Friedman test statistic is defined on ranked data, so unless the original raw data are integer-valued rank scores, the raw data are rank-transformed. The rank entries in Table 1B are obtained by first ordering the raw data \( \{x_{ij};\ i = 1, \dots, n,\ j = 1, \dots, k\} \) in Table 1A column-wise from least to greatest, within each of the n blocks separately and independently, and then assigning the integers 1,…,k as the rank scores of the k observations within a block. The row sum of the ranks for any group j is the rank sum, defined as \( R_j = \sum_{i=1}^{n} r_{ij} \).
The general null hypothesis of the Friedman test is that all the k blocked samples, each of size n, come from identical but unspecified population distributions. To specify this null hypothesis in more detail, let \( X_{ij} \) denote a random variable with unknown cumulative distribution function \( F_{ij} \), and let \( x_{ij} \) denote the realization of \( X_{ij} \).
The null hypothesis can be defined in two ways, depending on whether blocks are fixed or random [33]. If blocks are fixed, then all the k × n measurement values are independent. If k groups are randomly assigned to hold k unrelated \( X_{ij} \) within each block, as in a randomized complete block design, then the null hypothesis that the k groups have identical distributions may be formulated as
\( H_0: F_{i1}(x) = \dots = F_{ik}(x) = F_i(x) \) for each i = 1, …, n,
where \( F_i(x) \) is the distribution of the observations in the ith block [28, 33]. The same hypothesis, but more specific, is obtained if the usual additive model is assumed to have generated the \( x_{ij} \) in the two-way layout [23]. The additive model decomposes the total effect on the measurement value into an overall effect μ, a block effect \( \beta_i \), and a group effect \( \tau_j \). If the distribution function is denoted \( F_{ij}(x) = F(x - \mu - \beta_i - \tau_j) \), the null hypothesis of no differences among the k groups may be stated as
$$ {H}_0:\kern0.5em {\tau}_1=\dots ={\tau}_k, $$
and the general alternative hypothesis as
\( H_1: \tau_{j_1} \ne \tau_{j_2} \) for at least one \( (j_1, j_2) \) pair.
Note that this representation also asserts that the underlying distribution functions \( F_{i1}(x), \dots, F_{ik}(x) \) within block i are the same, i.e., that \( F_{i1}(x) = \dots = F_{ik}(x) = F_i(x) \), for each fixed i = 1, …, n.
If blocks are random, measurements from the same random block will be positively correlated. For example, if a single subject forms a block and k observations are made on the subject, possibly in randomized order, the within-block observations are dependent. Such dependency occurs in a repeated measures design where n subjects are observed and each subject is tested under k conditions. Denote the joint distribution function of observations within block i by \( F_i(x_1, \dots, x_k) \). Then the null hypothesis of no differences among the k groups is the hypothesis of exchangeability of the random variables \( X_{i1}, \dots, X_{ik} \) [28, 34], formulated as
\( H_0: F_i(x_1, \dots, x_k) = F_i(x_{\sigma(1)}, \dots, x_{\sigma(k)}) \) for i = 1, …, n,
where σ(1), …, σ(k) denotes any permutation of 1, …, k. The model underlying this hypothesis is that the random variables \( X_{ij} \) have an exchangeable distribution. This is a suitable model for repeated measures, where it is not appropriate to assume independence within a block [32, 33]. We also note that this formulation of the null hypothesis and the one for fixed blocks are consistent against the same alternative, namely the negation of \( H_0 \). For a detailed discussion of this matter, see [35].
Whether blocks are fixed or random, if the null hypothesis is true, then all the permutations of 1, …, k are equally likely. There are \( k! \) possible ways to assign k rank scores to the k groups within each block, and all these intra-block permutations are equiprobable under \( H_0 \). As the same permutation argument applies to each of the n independent blocks, there are \( (k!)^n \) equally likely rank configurations of the rank scores \( r_{ij} \) in the two-way layout [23]. Each of these permutations has a probability of \( (k!)^{-n} \) of being realized. This feature is used to evaluate the null distribution of the rank sums \( R_j \), by enumerating all the permutations of the two-way layout of ranks.
Friedman test statistic
Under the Friedman null hypothesis, the expected row sum of ranks for each group equals n(k + 1)/2. The Friedman test statistic
$$ X_r^2 = \frac{12}{nk(k+1)} \sum_{j=1}^{k} {\left\{ R_j - n(k+1)/2 \right\}}^2 $$
sums the squared deviations of the observed rank sums for each group, \( R_j \), from the common expected value for each group, n(k + 1)/2, under the assumption that the k group distributions are identical. For small values of k and n, the exact distribution of \( X_r^2 \) has been tabled, for example, by Friedman [1]. An algorithm for computing the exact joint distribution of the Friedman rank sums under the null is discussed in [28]. For the special case of two paired samples, see [36].
Calculating the test statistic using the null distribution of the \( (k!)^n \) possible permutations is time-consuming if k is large. However, Friedman [1] showed that as n tends to infinity, \( X_r^2 \) converges in distribution to \( \chi^2_{k-1} \), a chi-squared random variable with k − 1 degrees of freedom. This result is used in the asymptotic Friedman test. The Friedman test rejects \( H_0 \) at a pre-specified significance level α when the test statistic \( X_r^2 \) exceeds the 100(1 − α)th percentile of its limiting chi-squared distribution with k − 1 degrees of freedom [1]. The test statistic needs to be adjusted if there are tied ranks within blocks [22, 23]. Also, various modifications of the Friedman test have been proposed, for example the F distribution as an alternative to the chi-squared distribution [37], as well as generalizations, such as the Skillings-Mack [38] test statistic for use in the presence of missing data. These and various other adjustments and nonparametric competitors to the Friedman test (e.g., Kruskal-Wallis, Quade, Friedman aligned ranks test) are not discussed here (see [4, 22, 23]).
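As a minimal illustration of the statistic, the following R sketch computes \( X_r^2 \) and its asymptotic p-value from a raw n × k data matrix; it ignores ties, whereas base R's stats::friedman.test() applies the tie-corrected version.

```r
## Minimal sketch of the Friedman statistic for an n x k matrix of raw data
## (rows = blocks, columns = groups); ties are ignored here.
friedman_stat <- function(x) {
  n <- nrow(x); k <- ncol(x)
  R <- colSums(t(apply(x, 1, rank)))       # within-block ranks -> rank sums
  X2r <- 12 / (n * k * (k + 1)) * sum((R - n * (k + 1) / 2)^2)
  c(X2r = X2r, p_asymptotic = pchisq(X2r, df = k - 1, lower.tail = FALSE))
}
```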
Pairwise comparison tests and approximate critical difference
Frequently, researchers are not only interested in testing the global hypothesis of the equality of groups but also, or even more so, in inference on the equality of pairs of groups. Further, even if one is mainly interested in \( H_0 \) and the hypothesis is rejected, a follow-up analysis may be conducted to determine possible reasons for the rejection. Such analysis may disclose group differences, but it might also reveal that none of the pairs is significantly different, despite a globally significant test result.
To address these issues it is expedient to test hypotheses of equality for pairs of groups using simultaneous comparison tests. These multiple comparison procedures may involve, in 1 × N (or many-one) comparisons, testing k − 1 hypotheses of equality of all non-control groups against the study control or, in N × N (all-pairs) comparisons, considering k(k − 1)/2 hypotheses of equality between all pairs of groups. For both types of comparisons, large-sample approximate tests have been designed. They are derived for the situation where n, the number of blocks (i.e., 'sample size'), is large.
Table 2 displays the critical difference (CD) approximate tests for 1 × N and N × N comparisons of Friedman rank sums, as recommended in highly cited monographs and papers and popular textbooks on nonparametric statistics. The critical difference is the minimum required difference in rank sums for a pair of groups to differ at the pre-specified alpha level of significance. Note that in many publications the CD statistic is calculated using the difference in rank sum averages, i.e., \( R_j/n \), rather than rank sums. The results are identical, since each group has n observations, if the test statistic formulas are modified appropriately.
Table 2 Recommended critical difference (CD) approximate tests for 1 × N and N × N comparisons of Friedman rank sums
When the null hypothesis of equidistribution of ranks in n independent rankings is true, and the condition of a large sample size is met, the differences in rank sums are approximately normally distributed [26]. Let \( d = R_i - R_j \), with i ≠ j, be the rank sum difference for a pair of groups i and j. The support of the rank sum difference d is the closed interval [−n(k − 1), n(k − 1)]. Under the null hypothesis, the expected value E(d) = 0 and the variance Var(d) = nk(k + 1)/6 [18, 23, 25]. As the distribution of d is symmetric around E(d) = 0, the skewness is zero, as are all odd-order moments. The kurtosis coefficient, derived by Whitfield [24] as
$$ \mathrm{Kurt}(d)=3-\frac{3}{5 n}-\frac{12}{5 n k}-\frac{6}{5 n k\left( k+1\right)}, $$
is less than 3 (i.e., negative excess kurtosis), implying that the discrete rank sum difference distribution has thinner tails than the normal. Notice, however, that the kurtosis tends to 3 with increasing n, thus a normal approximation is reasonable. This implies that d has an asymptotic N(0, Var(d)) distribution and that the normal deviate \( d/\sqrt{\mathrm{Var}(d)} \) is asymptotically N(0, 1).
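As a small illustration of this normal deviate, the following R one-liner gives the unadjusted two-sided large-sample p-value for a single observed rank sum difference; a Bonferroni-type correction for c comparisons would multiply the result by c (capped at 1).

```r
## Large-sample two-sided p-value for one pairwise rank sum difference d,
## using Var(d) = n k (k + 1) / 6 under the overall null hypothesis.
normal_p <- function(d, k, n) 2 * pnorm(-abs(d) / sqrt(n * k * (k + 1) / 6))
normal_p(d = 30, k = 5, n = 10)  # illustrative values
```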
As can be seen in Table 2, the normal approximate test is recommended by various authors when all groups are to be compared against each other pairwise. It is also discussed by Demšar [2] as a test statistic to be employed when all groups are compared with a single control. Note that the normal test procedures control the familywise Type-I error rate by dividing the overall level of significance α by the number of comparisons performed (i.e., \( c_1 \) in 1 × N and \( c_2 \) in N × N comparisons). There are more powerful competitors to this Bonferroni-type correction available, such as the Holm, Hochberg, and Hommel procedures. These methods to control the overall false positive error rate are not elaborated in this paper. For a tutorial in the realm of classifier comparison, see Derrac et al. [4].
In addition to the ordinary normal approximation, simultaneous tests have been proposed that exploit the covariance structure of the distribution of the values of differences in rank sums. Whereas the n rankings are mutually independent under H 0, the rank sums and the rank sum differences are dependent and correlated as well. The correlation among the rank sum differences depends on the rank sums involved. Specifically, as reported by Miller [25], when the null hypothesis is true
$$ \mathrm{Cor}\left( R_i - R_j,\ R_i - R_l \right) = \tfrac{1}{2}, \kern2.25em i \ne j \ne l $$
$$ \mathrm{Cor}\left( R_i - R_j,\ R_l - R_m \right) = 0, \kern2.25em i \ne j \ne l \ne m. $$
Hence the correlation is zero for pairs of rank sum differences with no group in common, and 0.5 for pairs of differences with one group in common to both differences. The number of correlated pairs decreases as k increases. For a study involving k groups, the proportion of correlated pairs equals 4/(k + 1) [43]. Hence when k = 7, for example, 50% of the pairs are correlated, but when k = 79 only 5% are correlated.
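This correlation structure is easy to verify numerically. The following R sketch simulates rankings under the null and checks that pairs of differences sharing one group correlate at about 0.5 while disjoint pairs are essentially uncorrelated.

```r
## Monte Carlo check of the rank sum difference correlations (k = 5, n = 10):
## pairs sharing one group should correlate ~0.5, disjoint pairs ~0.
set.seed(1)
R <- replicate(10000, colSums(t(replicate(10, sample(5)))))
d1 <- R[1, ] - R[2, ]; d2 <- R[1, ] - R[3, ]; d3 <- R[4, ] - R[5, ]
c(shared = cor(d1, d2), disjoint = cor(d1, d3))
```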
As noted in various studies (e.g., [23, 25, 39]), for 1 × N comparisons this correlation structure implies that, when \( H_0 \) is true and n tends to infinity, the distribution of the differences between the k − 1 group rank sums and the control rank sum coincides with an asymptotic (k − 1)-variate normal distribution with zero means. The critical difference value can therefore be approximated by the test statistic labeled \( CD_M \) in Table 2, where the constant \( m_{\alpha,\, df = k-1,\, \rho = \frac{1}{2}} \) is the upper αth percentile point for the distribution of the maximum value of (k − 1) equally correlated N(0,1) random variables with common correlation \( \rho = \frac{1}{2} \). The procedure has an asymptotic familywise error rate equal to α [23, 25].
For N × N comparisons, it means that the covariance of the rank sum differences equals the covariance of the differences between k independent random variables with zero means and variances nk(k + 1)/12. Thus, the asymptotic distribution of \( \max\left\{ \left| R_i - R_j \right| \right\} / \sqrt{nk(k+1)/12} \) coincides with the distribution of the range \( (Q_{k,\infty}) \) of k independent N(0, 1) random variables. The associated test statistic is \( CD_Q \), where the constant \( q_{\alpha,\, df = k,\infty} \) is the upper αth percentile point of the Studentized range (q) distribution with (k, ∞) degrees of freedom [23, 25, 39]. Again, as the test considers the absolute difference of all k groups simultaneously, the asymptotic familywise error rate equals α [23, 25].
The Friedman test statistic itself gives rise to the simultaneous test mentioned in the bottom row of Table 2. The null hypothesis is accepted if the difference in rank sums fails to exceed the critical value \( CD_{\chi^2} \). This asymptotic chi-squared approximation is recommended in some popular textbooks, although Miller [25] has argued that the probability statement is not the sharpest of tests.
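Because Table 2 itself is not reproduced here, the following R sketch gives the three critical differences in their commonly cited forms for N × N comparisons of rank sums (for rank sum averages, divide by n); the exact constants should be checked against Table 2 before use.

```r
## Hedged sketch: approximate critical differences for N x N comparisons of
## rank sums at familywise level alpha, in their commonly cited forms.
cd_approx <- function(k, n, alpha = 0.05) {
  c2 <- k * (k - 1) / 2                                  # number of pairs
  c(CD_normal = qnorm(1 - alpha / (2 * c2)) * sqrt(n * k * (k + 1) / 6),
    CD_Q      = qtukey(1 - alpha, nmeans = k, df = Inf) *
                sqrt(n * k * (k + 1) / 12),
    CD_chisq  = sqrt(qchisq(1 - alpha, df = k - 1) * n * k * (k + 1) / 6))
}
cd_approx(k = 5, n = 10)
```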
Statistical power and alternative tests
Note that the CD test statistics presented in Table 2 do not require information about the within-block ranks as determined in the experiment. Rather, the simultaneous rank tests all assume that within each block each observation is equally likely to have any available rank. When this is true, the quantity (k + 1)(k − 1)/12 is the variance of the within-block rankings and nk(k + 1)/6 the variance of the difference between any two rank sums [25]. Hence the null distribution of d in the population has zero mean and known standard deviation. This is precisely why the normal approximate tests use the z-score as test statistic. However, it is important to emphasize in this context that the square root of nk(k + 1)/6 is the standard deviation of d when the overall null hypothesis is true, but not when it is false. It holds, similar to p-values, only under a particular model, i.e., \( H_0 \); a model that may or may not be true. If the null hypothesis is false, the quantity nk(k + 1)/6 is typically an over-estimate of the variance, and this causes simultaneous tests, approximate and exact, to lose power.
There are pairwise comparison tests for Friedman rank sums available that are computed on the observed rank scores rather than the rank sums. These tests, such as the Rosenthal-Ferguson test [44] and the popular Conover test [45, 46], use the t-score as test statistic. The pairwise t-tests are often more powerful than the simultaneous tests discussed above; however, there are also drawbacks. In brief, the Rosenthal-Ferguson test uses the observed variances and covariance of the rank scores of each individual pair of groups to obtain a standard error of d for the test of significance of the pairwise rank sum difference. This standard error is valid whether the null hypothesis of no pairwise difference is true or not. However, next to the formal constraint of the test that n should be larger than k + 1, the variance of d may be estimated poorly, as there are typically few degrees of freedom available for (co-)variance estimation in small-sample Friedman test applications. Moreover, the observed (co-)variances are different for each pair of groups. Consequently, it does not follow from the significance of the difference between a given rank sum A and another rank sum B that a third rank sum C, which differs from A more than B does, is also significantly different from A. This is an unpleasant feature of the test.
The Conover test estimates the standard deviation of d by computing a pooled standard error from the (co-)variances of the observed rank scores of all groups, thus increasing statistical power. The method is similar to Fisher's protected Least Significant Difference (LSD) test, applied to rank scores. In this methodology, no adjustment for multiple testing is made to the p-values to preserve the familywise error rate at the nominal level of significance. Rather, the test is protected in the sense that no pairwise comparisons are performed unless the overall test statistic is significant. As in the Fisher protected LSD procedure, the Conover test has the property of incorporating the observed F-value of the overall test into the inferential decision process. However, in contrast to the Fisher protected LSD, which uses the observed F-value only in a 0–1 ('go/no go') manner, the Conover test uses the F-value in a smooth manner when computing the LSD. That is, it has the unusual characteristic that the larger the overall test statistic, the smaller the least significant difference threshold is for declaring a rank sum difference to be significant. The Duncan-Waller test [47] has this same characteristic, but this test advocates a Bayesian approach to multiple comparisons with Bayes LSD. As the comparison tests in the second stage are conditional on the result of the first stage, the nominal alpha level used in the pairwise Conover test has no real probabilistic meaning in the frequentist sense. As noted by Conover and Iman ([48]: 2), "Since the α level of the second-stage test is usually not known, it is no longer a hypothesis test in the usual sense but rather merely a convenient yardstick for separating some treatments from others."
Exact distribution and fast p-value calculation
We present an exact test for simultaneous pairwise comparison of Friedman rank sums. The exact null distribution is determined using the probability generating function method. Generating functions provide an elegant way to obtain probability or frequency distributions of distribution-free test statistics [27, 28]. Application of the generating function method gives rise to the following theorem, the proof of which is in Additional file 1.
Theorem 1 For n mutually independent integer-valued rankings, each with equally likely rank scores ranging from 1 to k, the exact probability to obtain pairwise difference d for any two rank sums equals
$$ P(D = d; k, n) = {\left\{ k(k-1) \right\}}^{-n}\, W(D = d; k, n), $$
where
$$ W(D = d; k, n) = {\left\{ k(k-1) \right\}}^{n} \sum_{h=0}^{n} \binom{n}{h} \frac{1}{k^h (1-k)^n} \sum_{i=0}^{h} \sum_{j=0}^{h} (-1)^{j-i} \binom{h}{i} \binom{h}{j} \binom{k(j-i) - d + h - 1}{k(j-i) - d - h} $$
is the number of distinct ways a rank sum difference of d can arise, with d having support on [−n(k − 1), n(k − 1)].
Additional file 1 also offers a closed-form expression for the exact p-value of d [49–51]. The p-value is defined as the probability of obtaining a result at least as extreme as the one observed, given that the null hypothesis is true. It is obtained as the sum of the probabilities of all possible d, for the same k and n, that are as likely or less likely than the observed value of d under the null. The exact p-value is denoted P(D ≥ d; k, n), and it is computed using the expression
$$ P(D \ge d; k, n) = \sum_{h=0}^{n} \binom{n}{h} \frac{1}{k^h (1-k)^n} \sum_{i=0}^{h} \sum_{j=0}^{h} (-1)^{j-i} \binom{h}{i} \binom{h}{j} \binom{k(j-i) - d + h}{k(j-i) - d - h}, \qquad d = -n(k-1), \dots, n(k-1). $$
Calculating the exact p-value with this triple summation expression provides a speed-up of orders of magnitude over complete enumeration of all possible outcomes and their probabilities by a brute-force permutation approach. For larger values of n, however, exact calculation is somewhat time-consuming and to extend the practical range for performing exact tests, it is desirable to compute the p-value more efficiently.
Also, because in practice multiple comparison tests are concerned with absolute differences, it is expedient to compute the cumulative probability of the absolute value of differences in rank sums. As the number of mass points of the symmetric distribution of d is an integer of the form 2n(k − 1) + 1, the distribution has an odd number of probabilities. This implies that, as the probability mass function of d is symmetric around zero, the probability mass to the left of d = 0 may be folded over, resulting in a folded distribution of non-negative d. Consequently, the one-sided p-value of non-negative d in the range d = 1, …, n(k − 1) may be obtained as the sum of the two one-sided p-values of the symmetric distribution with support d = [−n(k − 1), n(k − 1)]. As doubling the one-sided p-value leads to a p-value for d = 0 that exceeds unity, the p-value for d = 0 (only) is computed as P(D ≥ 0; k, n) = P(D = 0) + P(D ≥ 1), and this is exactly equal to 1.
To accelerate computation, we transform the double summation over the indices i and j in the expression for P(D ≥ d; k, n) to a summation over a single index, s say, using Theorem 2. The proof is given in Additional file 2.
Theorem 2 For nonnegative integers d and k
$$ \sum_{i=0}^{h} \sum_{j=0}^{h} (-1)^{j-i} \binom{h}{i} \binom{h}{j} \binom{k(j-i) - d + h}{k(j-i) - d - h} = \sum_{s=0}^{h} (-1)^{s} \binom{2h}{h+s} \binom{ks - d + h}{ks - d - h}. $$
This reduction to a single-sum function implies that the p-value can alternatively be calculated from the much simpler expression
$$ P(D \ge |d|; k, n) = \begin{cases} \displaystyle 2 \sum_{h=0}^{n} \binom{n}{h} \frac{1}{k^h (1-k)^n} \sum_{s=0}^{h} (-1)^{s} \binom{2h}{h+s} \binom{ks - d + h}{ks - d - h}, & d = 1, \dots, n(k-1), \\[1ex] 1, & d = 0, \end{cases} $$
and, as we will show, even for larger values of n in a computationally fast manner.
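A direct fixed-precision transcription of this single-sum expression into R is sketched below. Note that R's choose(a, b) returns 0 whenever the lower index b is negative, which matches the convention the formula requires; as discussed next, double precision is adequate only for moderate n, and the paper's own implementation (pexactfrsd, Additional file 3) uses arbitrary-precision arithmetic instead.

```r
## Minimal fixed-precision sketch of the single-sum exact p-value formula;
## for larger n, arbitrary-precision arithmetic (e.g., Rmpfr) is needed.
pexact_sketch <- function(d, k, n) {
  if (d == 0) return(1)
  terms <- vapply(0:n, function(h) {
    s <- 0:h
    inner <- sum((-1)^s * choose(2 * h, h + s) *
                 choose(k * s - d + h, k * s - d - h))
    choose(n, h) * inner / (k^h * (1 - k)^n)
  }, numeric(1))
  2 * sum(terms)
}
```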
Although the two expressions for the exact p-value are mathematically correct, straightforward computation may produce calculation errors. Even for moderate values of n (20 or so), the binomial coefficient that has d in its indices may become extremely large, and storing these numbers for subsequent multiplication creates numerical overflow due to the precision limitation of fixed-precision arithmetic. One way to address this failure is to use a recurrence relation that satisfies the generating function [53, 54]. However, the recursions we examined were all computationally expensive to run, except for small values of n and/or k. A faster way to compute the exact p-value correctly is to use arbitrary-precision arithmetic computation to deal with numbers that can be of arbitrarily large size, limited only by the available computer memory.
The calculation of the p-value of the absolute rank sum difference d given k and n is implemented in R [55]. The R code, which requires the package Rmpfr [56] for high precision arithmetic to be installed, is in Additional file 3. The script labeled pexactfrsd computes the exact p-value P(D ≥ |d|), and additionally affords the possibility to compute the probability P(D = |d|), and the (cumulative) number of compositions of d (i.e., W(D = |d|) and W(D ≥ |d|)). The R code and potential future updates are also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip.
To illustrate the derivations, Additional file 4 offers a small-sized numerical example (k = 3, n = 2), and Additional file 5 tabulates the number of compositions of d for combinations of k = n = 2,…,6, for inclusion in the OEIS [52]. As can be seen in Additional file 5, for small values of n the unfolded, symmetric distribution of d is bimodal, with modes at + 1 and − 1 [24]. This feature rapidly disappears as n increases, specifically, for k > 2 at n ≥ 6.
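The small k = 3, n = 2 case provides a convenient sanity check for the sketch above: by our own direct enumeration of the 36 equally likely rank configurations, the exact p-values are 1, 26/36, 18/36, 10/36, and 2/36 for d = 0, …, 4, and the formula reproduces them.

```r
## Sanity check against direct enumeration for k = 3, n = 2:
## expected 1.0000, 0.7222, 0.5000, 0.2778, 0.0556 for d = 0, ..., 4.
round(sapply(0:4, pexact_sketch, k = 3, n = 2), 4)
```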
Hereafter, unless otherwise stated, we will consider the value of rank sum difference d to be either zero or positive, ranging from 0 to n(k − 1), and thus drop the absolute value symbol around d.
Incomplete rankings
Because the n rankings {1, 2, …, k} are mutually independent, we may divide them into two (or more) equal or unequal sized parts, labeled \( (D_1; k, n_1) \) and \( (D_2; k, n_2) \), with \( \sum_{t=1}^{2} D_t = D \), and \( D_t \) denoting the differences in rank sums of the two parts. The exact p-value can be obtained using
$$ P(D \ge d; k, n) = P(D \ge d; k, n_1, n_2) = \sum_{i=-n_1(k-1)}^{n_1(k-1)} P(D_1 = i; k, n_1) \times P\left( D_2 \ge (d - i); k, n_2 \right), $$
where – as indicated by the summation's lower bound – calculation is performed using the p-value expression that allows for negative d. A unique and useful property of the exact method, not shared by the approximate methods discussed, is that it is easy to calculate p-value probabilities for designs with unequal block sizes k, e.g., designs in which \( n_1 \) has ranks \( \{1, 2, \dots, k_1\} \) and \( n_2 \) has ranks \( \{1, 2, \dots, k_2\} \), with \( k_1 \ne k_2 \). A general expression for calculating the exact p-value in incomplete designs with j unequal sized parts is
$$ P(D \ge d; k_1, n_1, k_2, n_2, \dots, k_j, n_j) = \sum_{i_1=-n_1(k_1-1)}^{n_1(k_1-1)} \sum_{i_2=-n_2(k_2-1)}^{n_2(k_2-1)} \cdots \sum_{i_{j-1}=-n_{j-1}(k_{j-1}-1)}^{n_{j-1}(k_{j-1}-1)} P(D_1 = i_1; k_1, n_1) \times P(D_2 = i_2; k_2, n_2) \times \cdots \times P(D_{j-1} = i_{j-1}; k_{j-1}, n_{j-1}) \times P\left( D_j \ge (d - i_1 - i_2 - \cdots - i_{j-1}); k_j, n_j \right), $$
where \( \sum_{t=1}^{j} D_t = D \). An example in which n is subdivided into three parts, each with a unique value of k \( (k_1, k_2, k_3) \), is
$$ P(D \ge d; k_1, n_1, k_2, n_2, k_3, n_3) = \sum_{i=-n_1(k_1-1)}^{n_1(k_1-1)} \sum_{j=-n_2(k_2-1)}^{n_2(k_2-1)} P(D_1 = i; k_1, n_1) \times P(D_2 = j; k_2, n_2) \times P\left( D_3 \ge (d - i - j); k_3, n_3 \right). $$
Although the summations slow down calculation, this unique feature of exact p-value computation enables one to conduct valid simultaneous significance tests whenever some within-block ranks are missing by design. Such tests would be hard to accomplish using one of the large-sample approximation methods. An empirical example is given in the Application section.
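A compact way to code the two-part convolution might look as follows; this is our sketch, with pmass(i, k, n) and pupper(i, k, n) as hypothetical wrappers returning P(D = i; k, n) and P(D ≥ i; k, n) for signed i (the published script affords both quantities):

```r
# Exact p-value for an incomplete design split into two parts with
# possibly unequal block sizes k1 and k2 (see the first equation above).
p_incomplete <- function(d, k1, n1, k2, n2, pmass, pupper) {
  i <- seq(-n1 * (k1 - 1), n1 * (k1 - 1))   # support of part 1
  terms <- vapply(i, function(v) pmass(v, k1, n1) * pupper(d - v, k2, n2),
                  numeric(1))
  sum(terms)
}
```

The j-part generalization simply nests this convolution one level deeper per additional part.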
Exact and mid p-values
As pairwise differences with support on d = [−n(k − 1), n(k − 1)] are symmetrically distributed around zero under $H_0$, doubling the one-sided p-value is the most natural and popular choice for an ordinary exact test. A test using the exact p-value guarantees that the probability of committing a Type-I error does not exceed the nominal level of significance. However, as the Type-I error rate is always below the nominal level, a significance test with the exact p-value is a conservative approach to testing, especially if the test involves a highly discrete distribution [57]. The mid p-value, commonly defined as half the probability of an observed statistic plus the probability of more extreme values, i.e.,
$$ P_{\mathrm{mid}}(D \ge d; k, n) = \frac{1}{2} P(D = d) + P(D > d), $$
ameliorates this problem. The mid p-value is always closer to the nominal level than the exact p-value, at the expense of occasionally exceeding the nominal size.
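In code, the mid p-value follows directly from the exact tail and mass probabilities; a sketch using the hypothetical pmass/pupper wrappers from the previous section:

```r
# P(D > d) = P(D >= d) - P(D = d), so the mid p-value is:
p_mid <- function(d, k, n, pmass, pupper) {
  0.5 * pmass(d, k, n) + (pupper(d, k, n) - pmass(d, k, n))
}
```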
Tied rankings
The mid p-value may also be used to handle tied rankings. When ties occur within blocks, the midrank (i.e., the average of the tied ranks) is commonly assigned to each tied value. If, as a result of tied ranks, the observed rank sum difference is an integer value d plus 0.5, the p-value may be obtained as the average of the exact p-values of the adjacent integers d and d + 1, i.e., $\frac{1}{2}[P(D \ge d) + P(D \ge d + 1)]$, which is equivalent to the mid p-value. Note, however, that the resulting probability is not exactly valid: exact p-values represent exact frequency probabilities of certain events, whereas mid p-values have no such frequency interpretation. It may be argued, however, that this interpretational disadvantage is of little practical concern and that using mid p-values is an almost exact frequency approach. For a discussion of other treatments of ties in rank tests, see [21].
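A sketch of this tie adjustment, again with the hypothetical pupper wrapper:

```r
# For an observed half-integer difference d + 0.5, average the exact
# p-values at the adjacent integers; this equals the mid p-value.
p_ties <- function(d, k, n, pupper) {
  0.5 * (pupper(d, k, n) + pupper(d + 1, k, n))
}
```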
Time performance
The R program computes the exact p-value P(D ≥ d; k, n) quickly. It takes about half a second, for example, to calculate the exact p-value for the rather demanding problem d = k = n = 100 on an HP desktop computer using the interpreted R language, running under Windows 7 with an Intel Core i7 processor at 2.9 GHz. To examine the effects of d, k and n on the algorithm's runtime, we measured the time it takes to calculate the exact p-value of d = 1 and d = n(k − 1) − 1, for n = 2, …, 100, and k = 10 and k = 100. The two support values next to the endpoints of the distribution were taken because the p-values of the lower and upper support boundaries can be trivially obtained as 1 and $2\{k(k-1)\}^{-n}$, respectively. The computation time (in seconds) is shown in Fig. 1.
Computational time. Time (in seconds) for calculating the exact p-value of d = 1 and d = n(k − 1) − 1, for n = 2, …, 100 and k = 10 (black line) and k = 100 (red line)
The figure indicates that running time is no limitation when it comes to calculating the exact p-value, even for larger problems. Computation time is moderately affected by the magnitude of the computed p-value: the smaller the p-value, the faster the computation. For rank sum difference d = 1 running time increases polynomially (of maximum order 3) with increasing n, and for d = n(k − 1) − 1 it increases virtually linearly. Also, for d = 1, the minor runtime difference between k = 10 and k = 100 increases slowly with increasing n. For d = n(k − 1) − 1 the time to do the calculation is essentially the same for k = 10 as for k = 100. In sum, these timing results show that the exact method admits an algorithm that is fast for all k and n values typically encountered in empirical studies testing differences in Friedman rank sums, such as those comparing classifiers. This quality makes the algorithm for exact calculation appealing compared to alternative asymptotic approximations. Indeed, the algorithm is (considerably) faster than the one used here for evaluating the multivariate normal-approximate critical difference (CD$_M$).
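Readers wishing to reproduce such timings on their own hardware could use something along these lines (pexactfrsd as in the earlier hypothetical usage sketch; results are machine-dependent):

```r
# Time the demanding case d = k = n = 100 reported above.
system.time(pexactfrsd(d = 100, k = 100, n = 100))
```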
Exact distribution examples
We present some examples to illustrate the frequency probability distribution of rank sum difference d. The left panel of Fig. 2a displays the mass point probabilities P(D = d; k, n) for k = 5 and n = 5, over the entire support interval d = [0, 20]. The right panel shows the exact p-values P(D ≥ d; k, n) for k = n = 5, i.e., the tail-probability at and beyond the value of d. The steps in the (cumulative) probability distributions are due to the discreteness of d, implying that events are concentrated at a few mass points. To adjust the p-values for discreteness, one might opt to obtain mid p-values. The mid p-value is less than the exact p-value by half the mass point probability of the observed result, and it behaves more like the p-value for a test statistic with a continuous distribution.
Distribution of exact mass point probabilities and exact p-values. (a) Exact mass point probabilities, and exact p-values, for k = n = 5. (b) Exact p-values, and log10-transformed exact (blue line) and normal approximate p-values (red line), for k = n = 10. (c) Histogram of simulated p-values under the overall null hypothesis with expected null frequency superimposed, and cumulative distribution function of the simulated 1 − p-values with diagonal line overlay, for k = 50, n = 5.
The jumps at the steps decrease with increasing k and/or n. To exemplify this point, the left panel of Fig. 2b displays the less discrete p-value distribution for k = n = 10. The powerful benefit of exact calculation is shown in the right panel of the same figure. The graph displays the log10-transformed p-values obtained by exact calculation, with the log10-transformed normal-approximate p-values superimposed. As can be seen, the continuous normal approximation is imperfect for estimating probabilities in the long right tail, where d values are large and p-values are small. Note that the error increases as the p-values decline. Compared to exact calculation, the cumulative normal is overly conservative in that it tends to over-predict the true p-value and thus understate the evidence against the null.
For continuous test statistics, p-values are known to be uniformly distributed over the interval [0,1] when the null hypothesis is true [58]. Also, uniformly distributed p-values, with a mean of 0.5 and a variance of 1/12 ≈ 0.0833, produce a linear cumulative distribution function corresponding to the true overall null hypothesis, implying that points in the cumulative p-value plot exhibit a straight line. We generated n = 5 Monte Carlo permutations of k = 50 integers from 1 to k inclusive, and calculated the rank sums and the exact p-value of the rank sum differences. For this particular set of permutations, the mean of the $\binom{k}{2} = 1{,}225$ p-values was 0.512 and the variance 0.0824. The left panel of Fig. 2c confirms the intuitive notion that the discrete p-values are approximately uniformly distributed under $H_0$. The right panel plots the 1 − p-value against the number of p-values (i.e., number of hypothesis tests), expressed in terms of proportions. As can be seen, the ensemble of p-values in the cumulative plot is close to the diagonal line, as is to be expected when null hypotheses are all true.
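The simulation just described can be sketched in a few lines (pupper is again the hypothetical exact-tail wrapper; the commented lines indicate the checks reported above):

```r
# Null simulation: n random within-block rankings of k groups.
set.seed(1)
k <- 50; n <- 5
ranks <- replicate(n, sample(1:k))   # k x n matrix, one permutation per block
rsums <- rowSums(ranks)              # the k Friedman rank sums
dvals <- as.vector(dist(rsums))      # all choose(k, 2) = 1,225 |differences|
# pvals <- vapply(dvals, pupper, numeric(1), k = k, n = n)
# c(mean(pvals), var(pvals))         # close to 0.5 and 1/12 under H0
```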
Exact versus approximate critical differences
Table 3 presents the unadjusted and the Bonferroni-adjusted exact and approximate critical differences for 1 × N and N × N comparisons of Friedman rank sums, for n = k = 5, 10, 25, 50, 100, at the familywise error rate of α = .05. The values for CD$_M$ were obtained using the R package mvtnorm [59], and the other approximate values using standard distributions available in the R stats package [55].
Table 3 Exact (CD) and approximate critical values of differences in rank sums, at the familywise error rate of α=.05
The first point to note from Table 3 is that, at the .05 level, the unadjusted normal-approximate critical differences (CD$_N$) are identical to the exact CD for almost all k and n. In the event one chooses not to control the familywise error rate, the underestimation by CD$_N$ amounts to 1 at most, at least for the values of k and n considered here.
The close correspondence of normal-approximate and exact CD deteriorates once the p-value threshold for significance is corrected for multiple testing. In 1 × N comparisons, the agreement is quite satisfactory as long as k is small relative to n, but the normal method overestimates the exact critical value if k is larger than n. The same goes for N × N comparisons, but worse. As can be seen, the normal approximation generally improves as n gets larger, for constant value of k, supporting large-sample normal theory. However, the normal method overestimates the exact critical value considerably if k is larger than n. The disparity is most pronounced if k is large and n is small. For example, for k = 25 and n = 5, the exact CD is 83, whereas the (rounded) normal approximate critical difference value equals 88. The normal approximation produces larger than exact p-values at the tails and larger than exact critical difference values.
The second point to note is that the ordinary normal method – while understating the evidence against the null hypothesis – is, by and large, the most accurate approximate test of the asymptotic variants studied here. The CD$_M$ for k − 1 comparisons with a control tends to underestimate the exact CD, even if n is large, which may lead one to incorrectly reject the null hypothesis. The same goes, but somewhat less so, for all-pairs comparisons with CD$_Q$. The Studentized range critical value is seen to be too liberal in the sense that it underestimates the critical difference value, even for larger values of n, and especially if n outnumbers k. The asymptotic procedure that draws on the chi-squared distribution is seen to perform inadequately overall. As the inferences are suspect, this test statistic is not advocated as a criterion for judging whether differences in Friedman rank sums are significant.
Hence, in general, the normal approximation is overly conservative if n is smaller than k and the other approximations are too liberal if n is larger than k, and this holds even for relatively large values of n. For many parameter settings the biases are considerable. In any case, they are large enough to imply that if the observed rank sum difference is near to the critical value, the choice between exact and approximate methods can mean the difference between pairs of groups being considered significantly different or not. It is equally important to note that the above results apply to a familywise error rate of α=.05. The disparity between exact and asymptotic critical values increases, if the error rate is set to a lower value (e.g., .01). This issue is well visualized in the right panel of the earlier discussed Fig. 2b.
Type-I error and mid p-values
The critical difference values denoted CD in Table 3 were obtained by setting the bound on Type-I error at 5%. For the asymptotic approximate methods, with a continuous reference distribution, the maximum probability of rejecting the null when it is in fact true is equal to α=.05. An exact test, however, keeps the actual probability of a Type-I error below 5%, as there are only certain p-values possible when working with discrete data. Table 4 reports the actual probability of a Type-I error (i.e., exact p-value) and the mid p-value, for the unadjusted exact CD values presented in Table 3 (column 4).
Table 4 Exact and mid p-values for unadjusted exact CD values
Note that, whereas the alpha level was set at 5%, the actual probability of a Type-I error for the smallest n = k = 5 is a little above 3%. For larger values of k and n the ordinary exact test appears only slightly more conservative than the nominal level. Note further that the mid p-value minimizes the discrepancy between the exact p-value and the significance level. The mid p-value occasionally exceeds the nominal level, and still tends to somewhat underrate the nominal level in other instances, although necessarily less so than the exact p-value. As can be seen, the difference between exact and mid p-value diminishes as k and/or n increases and the discreteness of the sample distribution diminishes.
We emphasize in this context that the inferential conservativeness associated with exact p-values is introduced by testing at a pre-specified alpha level of significance. In practice, it might be preferable to report observed levels of significance rather than testing at a particular cut-off value.
Normal error and continuity correction
Because the discrete rank sum difference distribution is approximated by a continuous distribution, a correction for continuity is advocated by some (e.g., [24]), to bring the asymptotic probabilities into closer agreement with the exact discrete probabilities. We restrict the discussion to the normal approximation and calculate the percentage relative error of the normal p-values to the true p-values using
$$ R(d) = 100 \left\{ \frac{P_{\mathrm{normal}}(d - c) - P_{\mathrm{exact}}(d)}{P_{\mathrm{exact}}(d)} \right\}, $$
where c is equal to 0.5 or 0 for the normal method with or without continuity correction, respectively. Figure 3 displays the percentage relative error R(d) versus exact p-values, for n = k = 10,100.
Error normal approximation. Percentage relative error R(d) of normal approximation with (red line) and without (black line) continuity correction versus exact p-value, for n = k = 10,100
The graphics indicate that the relative error decreases with increasing n, both for k = 10 and k = 100. They also show that, for k = 10 and n = 10,100, the normal approximation without continuity correction underestimates the true p-value if the exact probabilities are large. However, small true p-values are overestimated by the normal and this overestimation increases as the probabilities become smaller. Continuity correction brings large normal p-values into closer correspondence with the exact p-values, but for small p-values (i.e., significant results) it may worsen agreement and increase overestimation by the normal. For k = 100, the rank sum difference distribution is less discrete and therefore correction for continuity has little effect. This suggests that the neglect of the continuity correction is not a serious matter, and may, indeed, occasionally be an advantage.
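For reference, a sketch of how R(d) might be computed; we use the doubled one-sided normal tail with the null variance nk(k + 1)/6 of a rank sum difference (the standard large-sample result for Friedman rank sums), and p_exact denotes the exact tail probability from the hypothetical wrapper used earlier:

```r
# Percentage relative error of the normal approximation; set c = 0.5 for
# the continuity-corrected variant and c = 0 for the uncorrected one.
rel_error <- function(d, k, n, p_exact, c = 0) {
  p_norm <- 2 * pnorm(d - c, mean = 0, sd = sqrt(n * k * (k + 1) / 6),
                      lower.tail = FALSE)
  100 * (p_norm - p_exact) / p_exact
}
```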
Finally, as indicated, the large-sample approximations are derived for the situation where n is large. Frequently, however, the number of groups may be quite large whereas the number of replications per group is limited [60]. Such a 'large k, small n' situation is fairly common in agricultural screening trials [61], for example, and it also occurs quite often in comparisons of classifiers using ranked data. Published examples in bioinformatics include classifier studies with dimensions k = 9 and n = 3 [62], k = 10 and n = 6 [63], and k = 13 and n = 4 [64]. A similar issue arises in the identification of k genes by ranking using n different algorithms, for example, k = 13 and n = 5 as in [65], and k = 88 and n = 12 as in [66]. Such 'large k, small n' data are common in gene-expression profiling studies [67, 68]. Particularly for these data conditions, the choice of an appropriate test statistic is vitally important to the validity of research inferences.
Application
We present two data examples to illustrate the potential non-equivalence of exact and approximate inference, and the benefit of exact calculation. Recall that we assume that the data are such that it is appropriate to perform the Friedman test. We pass no judgement on this, as that would require expertise in the substantive fields and detailed 'in-house' knowledge of selection and measurement procedures. For a proper statistical framework for comparison studies see Boulesteix et al. [30]. This review study also shows that real-world applications comparing classifiers are often underpowered. That is, in small-sample settings the differences between the performances of pairs of algorithms are sometimes so variable that one is unable to draw statistically meaningful conclusions.
To illustrate the benefit of exact calculation, Friedman rank data on the comparison of qPCR curve analysis methods were obtained from Ruijter et al. [69]. The aim of the comparison of the k = 11 methods was to test their performance in terms of the following (n = 4) indicators: bias, linearity, precision, and resolution in transcriptional biomarker identification. The null hypothesis is that there is no preferred ranking of the method results per gene for the performance parameters analyzed. The rank scores were obtained by averaging results across a large set of 69 genes in a biomarker data file.
Table 5 displays the Friedman rank sums of the methods and, in the upper top triangle, the absolute values of the differences in rank sums. We obtained the Bonferroni-adjusted normal-approximate p-value, Bonferroni-adjusted exact p-value, and Studentized range approximate p-value for the 55 rank sum differences. The results are presented in the upper bottom, lower bottom, and lower top triangles of the table, respectively.
Table 5 Friedman rank data for k = 11 methods and n = 4 performance indicators (Ruijter et al. [69])
Straightforward comparison shows that the approximations are conservative estimates of the true probabilities. That is, the smallest exact p-values are considerably smaller than both the normal and the Studentized range approximate p-values. According to the normal approximate test there is, at a familywise error rate of .05, no evidence that the methods perform differently, except for Cy0 and FPF-PCR, the pair of methods with the largest difference in rank sums. When applying the Studentized range distribution we find a rank sum difference of d = 31 or larger to be significant. The true p-values are smaller however, and exact calculation provides evidence that the critical difference value at α=.05 is d = 30, implying that four pairs of methods perform significantly different. This example illustrates the practical implication of using exact p-values in the sense that exact calculation uncovers more significantly different pairs of methods than the asymptotic approximations, and may thus lead to different conclusions.
We were reminded by the reviewers of this paper that the Friedman test assumes that the n blocks are independent, so that the measurement in one block has no influence on the measurements in any other block. This leads to questioning the appropriateness of the Friedman test in this application. We do not wish to make any firm judgement about this, other than making the observation that the rank scores presented in the source paper ([69]: Table 2) are strongly related. The same goes for the results of a similar analysis of much the same data by other researchers ([64]: Table 1).
The second illustration concerns exact calculation in incomplete designs. Zagar et al. [70] investigated the utility of k = 12 data transformation approaches and their predictive accuracy in a systematic evaluation on n = 10 cell differentiation datasets from different species (mouse, rat, and human) retrieved from the Gene Expression Omnibus. To compare the predictive accuracy performance of the k = 12 methods on the n = 10 datasets, they used the Friedman test. Table 6 presents the Friedman ranks obtained by ranking the raw scores presented in Table 1 of Zagar et al. [70].
Table 6 Friedman rank data for k = 12 methods and n = 10 cell differentiation datasets (Zagar et al. [70])
Note that the ranks of Pathrecon and PCA-Markers for dataset GDS2688 are missing. Zagar et al. [70] therefore decided to exclude all ranks within GDS2688 from the computation of the rank sums and restricted their analysis to n = 9 datasets. The rank sums excluding GDS2688 are displayed in the right-most column of Table 6.
Instead of deleting GDS2688, the missing data for Pathrecon and PCA-Markers could be dealt with by substitution, for example by imputing the mean of the observed raw scores, followed by re-ranking the 12 methods according to their scores on GDS2688. However, as noted by the authors, the score of PCA-Markers for GDS2688 is not given because "stem cell differentiation markers are not relevant for the process studied in this dataset" ([70]: 2549). Hence the rank score is missing by design, and thus imputation is inappropriate at least for the PCA-Markers method.
An alternative procedure is to divide the n = 10 independent rankings into two parts, one consisting of k = 12 methods and n = 9 datasets and the other of k = 10 methods and n = 1 dataset. The computation of exact p-values in such an incomplete design is readily accomplished, since the probabilities are easily obtained by the method outlined above. These p-values afford the possibility to conduct valid significance tests using all available rank data.
The bottom part of Table 6 presents the exact p-values obtained for the comparison of the MCE-euclid-FC and the PLS-AREA-time methods. Additional file 6 has the R code to reproduce the results. The next-to-last row displays the exact p-values for the difference in rank sums d = 73 − 36 = 37, if the ranks for GDS2688 are not included in the sums. The bottom row shows the exact p-values for the rank sum difference d = (73 + 10) − (36 + 1) = 46 if the two rank sums include the available ranks of the methods for GDS2688. Note that, for this particular comparison at least, the latter p-values, whether adjusted or not, are considerably smaller than the p-values obtained after listwise deletion of the missing rank data.
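Using the p_incomplete() sketch from the Incomplete rankings section, the second of these p-values could in principle be computed as follows (pmass/pupper wrappers as before; this mirrors, but is not identical to, the code in Additional file 6):

```r
# Nine complete blocks with k = 12 plus one block with k = 10,
# observed rank sum difference d = 46.
p_incomplete(d = 46, k1 = 12, n1 = 9, k2 = 10, n2 = 1, pmass, pupper)
```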
The p-value probabilities pertaining to difference of sums of all available rank data can also be estimated using permutation testing and most likely also with methodology such as Laplace approximation or the saddlepoint method. However, these stochastic and deterministic approximations tend to become rather complicated and more cumbersome to work with than the exact computation method described here.
Conclusions
We provide a combinatorial exact expression for obtaining the probability distribution of the discrete rank sum difference statistic for pairwise comparison of Friedman rank sums. The exact null distribution contributes to the improvement of tests of significance in the comparison of Friedman rank sums, and constitutes a framework for validating theoretical approximations to the true distribution. The numerical evaluations show that, in multiple comparison testing, determining the exact critical difference and the true p-value offers a considerable improvement over large-sample approximations in obtaining significance thresholds and achieved levels of significance. The empirical applications discussed exemplify the benefit, in practice, of using exact rather than asymptotic p-values.
Of the large-sample approximation methods considered in this study, the simple normal approximation corresponds most closely to the exact results, both for many-one and all-pairs comparisons. However, the difference between exact and normal approximate p-values can be large for significant events further in the tail of the distribution. Such events occur, in particular, whenever the number of groups k is large and the number of blocks n is small. In a multiple testing context with 'large k and small n', application of the normal approximation increases the probability of a Type-II error, hence false acceptance of the null hypothesis of 'no difference'. The exact p-values also greatly improve the ability to detect significant differences if the observed rank sum differences are close to the approximate critical value. In such a situation, the choice between exact and approximate methods can mean the difference between pairs (classifiers) being considered significantly different or not. Further, we typically prefer tests that are as accurate as possible while still being fast to compute. As the exact p-values can be computed swiftly by the method outlined in this note, there is no longer any need to resort to occasionally flawed approximations.
Finally, the rank sum and rank product statistics are widely used in molecular profiling to identify differentially expressed molecules (i.e., genes, transcripts, proteins, metabolites) [67, 68, 71]. Molecule selection by ranking is important because only a limited number of candidate molecules can usually be followed up in the biological downstream analysis for subsequent study. The non-parametric statistic discussed here is potentially an additional new tool in the toolbox of methods for making justified, reproducible decisions about which molecules to consider as significantly differentially expressed.
Abbreviations
CD: Critical difference
LSD: Least significant difference
References
Friedman M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J Am Stat Assoc. 1937;32:675–701.
Demšar J. Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res. 2006;7:1–30.
García S, Herrera F. An extension on "Statistical comparisons of classifiers over multiple data sets" for all pairwise comparisons. J Mach Learn Res. 2008;9:2677–94.
Derrac J, García S, Molina D, Herrera F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput. 2011;1:3–18.
Perrodou E, Chica C, Poch O, Gibson TJ, Thompson JD. A new protein linear motif benchmark for multiple sequence alignment software. BMC Bioinformatics. 2008;9:213.
Jones ME, Mayne GC, Wang T, Watson DI, Hussey DJ. A fixed-point algorithm for estimating amplification efficiency from a polymerase chain reaction dilution series. BMC Bioinformatics. 2014;15:372.
de Souto MCP, Jaskowiak PA, Costa IG. Impact of missing data imputation methods on gene expression clustering and classification. BMC Bioinformatics. 2015;16:64.
Carvalho SG, Guerra-Sá R, de C Merschmann LH. The impact of sequence length and number of sequences on promoter prediction performance. BMC Bioinformatics. 2015;16 Suppl 19:S5.
Frades I, Resjö S, Andreasson E. Comparison of phosphorylation patterns across eukaryotes by discriminative N-gram analysis. BMC Bioinformatics. 2015;16:239.
Stražar M, Žitnik M, Zupan B, Ule J, Curk T. Orthogonal matrix factorization enables integrative analysis of multiple RNA binding proteins. Bioinformatics. 2016;32:1527–35.
Bacardit J, Widera P, Márquez-Chamorro A, Divina F, Aguilar-Ruiz JS, Krasnogor N. Contact map prediction using a large-scale ensemble of rule sets and the fusion of multiple predicted structural features. Bioinformatics. 2012;28:2441–8.
Allhoff M, Seré K, Chauvistré H, Lin Q, Zenke M, Costa IG. Detecting differential peaks in ChIP-seq signals with ODIN. Bioinformatics. 2014;30:3467–75.
Gusmao EG, Dieterich C, Zenke M, Costa IG. Detection of active transcription factor binding sites with the combination of DNase hypersensitivity and histone modifications. Bioinformatics. 2014;30:3143–51.
Gong H, Liu H, Wu J, He H. Data construction for phosphorylation site prediction. Brief Bioinform. 2014;15:839–55.
Xue LC, Rodrigues JPGLM, Dobbs D, Honavar V, Bonvin AMJJ. Template-based protein–protein docking exploiting pairwise interfacial residue restraints. Brief Bioinform. 2016. doi:10.1093/bib/bbw027.
Iranzo J, Gómez MJ, López de Saro FJ, Manrubia S. Large-scale genomic analysis suggests a neutral punctuated dynamics of transposable elements in bacterial genomes. PLoS Comput Biol. 2014;10, e1003680.
Pontes B, Giráldez R, Aquilar-Ruiz JS. Configurable pattern-based evolutionary biclustering of gene expression data. Algorithm Mol Biol. 2013;8:4.
Siegel S, Castellan Jr NJ. Nonparametric Statistics for the Behavioral Sciences. 2nd ed. New York: McGraw-Hill; 1988.
Daniel WW. Applied Nonparametric Statistics. 2nd ed. Boston: Houghton Mifflin; 1990.
Zar JH. Biostatistical analysis. 4th ed. Upper Saddle River: Prentice-Hall; 1999.
Gibbons JD, Chakraborti S. Nonparametric Statistical Inference. 4th ed. New York: Marcel Dekker; 2003.
Sheskin DJ. Handbook of parametric and nonparametric statistical procedures. 5th ed. Boca Raton: Chapman and Hall/CRC; 2011.
Hollander M, Wolfe DA, Chicken E. Nonparametric statistical methods. 3rd ed. New York: Wiley; 2014.
Whitfield JW. The distribution of the difference in total rank value for two particular objects in m rankings of n objects. Brit J Statist Psych. 1954;7:45–9.
Miller Jr RG. Simultaneous statistical inference. New York: McGraw-Hill; 1966.
Stuart A. Limit distributions for total rank values. Brit J Statist Psych. 1954;7:31–5.
Koziol JA. A note on multiple comparison procedures for analysis of ranked data. Universal Journal of Food and Nutrition Science. 2013;1:11–5.
van de Wiel MA. Exact null distributions of quadratic distribution-free statistics for two-way classification. J Stat Plan Infer. 2004;120:29–40.
Cytel. StatXact: Statistical Software for Exact Nonparametric Inference. Cambridge: Cytel Software Corporation; 2016.
Boulesteix A-L, Hable R, Lauer S, Eugster MJA. A statistical framework for hypothesis testing in real data comparison studies. Am Stat. 2015;69:201–12.
Boulesteix A-L. On representative and illustrative comparisons with real data in bioinformatics: response to the letter to the editor by Smith et al. Bioinformatics. 2013;20:2664–6.
Jensen DR. Invariance under dependence by mixing. In: Block HW, Sampson AR, Savits TH, editors. Topics in Statistical Dependence. Lectures Notes - Monograph Series Volume 16. Hayward: Institute of Mathematical Statistics; 1990. p. 283–94.
Hettmansperger TP. Statistical inference based on ranks. New York: Wiley; 1984.
Puri ML, Sen PK. Nonparametric methods in multivariate analysis. New York: Wiley; 1971.
Laurent RS, Turk P. The effects of misconceptions on the properties of Friedman's test. Commun Stat Simulat. 2013;42:1586–615.
Munzel U, Brunner E. An exact paired rank test. Biometrical J. 2002;44:584–93.
Iman RL, Davenport JM. Approximations of the critical region of the Friedman statistic. Comm Stat A Theor Meth. 1980;9:571–95.
Skillings JH, Mack GA. On the use of a Friedman-type statistic in balanced and unbalanced block designs. Technometrics. 1981;23:171–7.
Nemenyi PB. Distribution-free multiple comparisons, PhD thesis. Princeton: Princeton University; 1963.
Desu MM, Raghavarao D. Nonparametric statistical methods for complete and censored data. Boca Raton: Chapman and Hall/CRC; 2004.
Bortz J, Lienert GA, Boehnke K. Verteilungsfreie Methoden in der Biostatistik. Berlin: Springer; 1990.
Wike EL. Data analysis. A statistical primer for psychology students. New Brunswick: Aldine Transaction; 2006.
Saville DJ. Multiple comparison procedures: the practical solution. Am Stat. 1990;44:174–80. doi:10.2307/2684163.
Rosenthal I, Ferguson TS. An asymptotically distribution-free multiple comparison method with application to the problem of n rankings of m objects. Brit J Math Stat Psych. 1965;18:243–54.
Conover WJ. Practical nonparametric statistics. 3rd ed. New York: Wiley; 1990.
Sprent P, Smeeton NC. Applied nonparametric statistical methods. 3rd ed. Boca Raton FL: Chapman and Hall/CRC; 2001.
Waller RA, Duncan DB. A Bayes rule for symmetric multiple comparisons problem. J Am Stat Assoc. 1969;64:1484–503. doi:10.2307/2286085.
Conover WJ, Iman RL. On multiple-comparisons procedures. Technical report LA-7677-MS. Los Alamos: Los Alamos Scientific Laboratory. 1979.
Feller W. An introduction to probability theory and its applications, volume I. New York: Wiley; 1968.
Koziol JA, Feng AC. A note on the genome scan meta-analysis statistic. Ann Hum Genet. 2004;68:376–80.
Szapudi I, Szalay A. Higher order statistics of the galaxy distribution using generating functions. Astrophys J. 1993;408:43–56.
OEIS Foundation Inc. The On-Line Encyclopedia of Integer Sequences, http://oeis.org; 2011.
Tsao CK. Distribution of the sum in random samples from a discrete population. Ann Math Stat. 1956;27:703–12.
Dobrushkin VA. Methods in algorithmic analysis. Boca Raton: Chapman and Hall/CRC; 2009.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2012.
Maechler M. Rmpfr: R MPFR – Multiple Precision Floating-Point Reliable. R package version 0.6-0; 4 December 2015. https://cran.r-project.org/web/packages/Rmpfr/index.html.
Agresti A. Categorical data analysis. 2nd ed. New York: Wiley; 2002.
Schweder T, Spjøtvoll E. Plots of P-values to evaluate many tests simultaneously. Biometrika. 1982;69:493–502.
Genz A, Bretz F, Miwa T, Mi X, Leisch F, Scheipl F, Bornkamp B, Maechler M, Hothorn T. mvtnorm: Multivariate normal and t distributions. R package; 2016. https://cran.r-project.org/web/packages/mvtnorm/.
Bathke A, Lankowski D. Rank procedures for a large number of treatments. J Stat Plan Infer. 2005;133:223–38.
Brownie C, Boos DD. Type I error robustness of ANOVA and ANOVA on ranks when the number of treatments is large. Biometrics. 1994;50:542–9.
Walia RR, Caragea C, Lewis BA, Towfic F, Terribilini M, El-Manzalawy Y, Dobbs D, Honavar V. Protein-RNA interface residue prediction using machine learning: an assessment of the state of the art. BMC Bioinformatics. 2012;13:89.
Wilm A, Mainz I, Steger G. An enhanced RNA alignment benchmark for sequence alignment programs. Algorithms Mol Biol. 2006;1:19.
Bultmann CA, Weiskirschen R. MAKERGAUL: an innovative MAK2-based model and software for real-time PCR quantification. Clin Biochem. 2014;47:117–22.
Nascimento CS, Barbosa LT, Brito C, Fernandes RPM, Mann RS, Pinto APG, Oliviera HC, Dodson MV, Guimarães SEF, Duarte MS. Identification of suitable reference genes for real time quantitative polymerase chain reaction assays on Pectoralis major muscle in chicken (Gallus gallus). PLoS One. 2015;10, e0127935.
Hosseini I, Gama L, Mac Gabhann F. Multiplexed component analysis to identify genes contributing to the immune response during acute SIV infection. PLoS One. 2015;10, e0126843.
Eisinga R, Breitling R, Heskes T. The exact probability distribution of the rank product statistics for replicated experiments. FEBS Lett. 2013;587:677–82.
Heskes T, Eisinga R, Breitling R. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments. BMC Bioinformatics. 2014;15:367. doi:10.1186/s12859-014-0367-1.
Ruijter JM, Pfaffl MW, Zhao S, Spiess AN, Boggy G, Blom J, Rutledge RG, Sisti D, Lievens A, De Preter K, Derveaux S, Hellemans J, Vandesompele J. Evaluation of qPCR curve analysis methods for reliable biomarker discovery: bias, resolution, precision, and implications. Methods. 2013;59:32–46.
Zagar L, Mulas F, Garagna S, Zuccotti M, Bellazzi R, Zupan B. Stage prediction of embryonic stem cell differentiation from genome-wide expression data. Bioinformatics. 2011;27:2546–53. doi:10.1093/bioinformatics/btr422.
Breitling R, Armengaud P, Amtmann A, Herzyk P. Rank products: a simple, yet powerful, new method to detect differentially regulated genes in replicated microarray experiments. FEBS Lett. 2004;573:83–92.
Acknowledgements
The authors greatly appreciate comments by three reviewers leading to substantial improvements of the manuscript.
Availability of data and materials
The rank data discussed in the main text were obtained from Table 2 in Ruijter et al. [69], and from Table 1 in Zagar et al. [70]. The R code in Additional file 3 and potential future updates are also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip.
Authors' contributions
RE designed the exact method, implemented the algorithm, and drafted the manuscript. BP assisted in the implementation in R and drafted the manuscript. TH and MTG supervised the study and drafted the manuscript. All authors read and approved the final manuscript.
Consent to publish
Not applicable.
Author information
Department of Social Science Research Methods, Radboud University Nijmegen, PO Box 9104, 6500 HE Nijmegen, The Netherlands
Rob Eisinga, Ben Pelzer & Manfred Te Grotenhuis
Institute for Computing and Information Sciences, Radboud University Nijmegen, Nijmegen, The Netherlands
Tom Heskes
Correspondence to Rob Eisinga.
Additional files
Additional file 1: Proof of Theorem 1. (PDF 59 kb)
Additional file 3: Friedmanrsd. A .zip file providing the script of the algorithm implemented in R. (ZIP 2 kb)
Additional file 4: Numerical example for k = 3, n = 2. (PDF 67 kb)
Additional file 5: Number of compositions of d for k, n = 2, …, 6. (PDF 65 kb)
Additional file 6: Computation of p-values presented in Table 6. A .zip file providing the R code to reproduce the exact p-values presented in Table 6. (ZIP 1 kb)
Eisinga, R., Heskes, T., Pelzer, B. et al. Exact p-values for pairwise comparison of Friedman rank sums, with application to comparing classifiers. BMC Bioinformatics 18, 68 (2017). https://doi.org/10.1186/s12859-017-1486-2
Keywords: Exact p-value · Rank sum difference · Multiple comparison · Nonparametric statistics · Classifier comparison
|
CommonCrawl
|
Only show content I have access to (39)
Only show open access (13)
Forthcoming (1)
Last week (1)
Last month (1)
Last 3 months (3)
Last 3 years (32)
Earth and Environmental Sciences (15)
Statistics and Probability (12)
Journal of Materials Research (20)
Epidemiology & Infection (5)
The Journal of Navigation (5)
Geological Magazine (4)
Journal of Glaciology (4)
Journal of Applied Probability (3)
Numerical Mathematics: Theory, Methods and Applications (3)
Advances in Applied Mathematics and Mechanics (2)
Communications in Computational Physics (2)
Genetics Research (2)
Infection Control & Hospital Epidemiology (2)
Microscopy and Microanalysis (2)
Proceedings of the International Astronomical Union (2)
Public Health Nutrition (2)
Journal of the Australian Mathematical Society (1)
Intersentia (1)
Nutrition Society (9)
Global Science Press (8)
International Glaciological Society (5)
Applied Probability Trust (3)
International Astronomical Union (3)
Royal College of Speech and Language Therapists (2)
Society for Healthcare Epidemiology of America (SHEA) (2)
AMA Mexican Society of Microscopy MMS (1)
Australian Mathematical Society Inc (1)
BLI Birdlife International (1)
Canadian Mathematical Society (1)
EAAP (1)
Fauna & Flora International - Oryx (1)
Forum of Mathematics (1)
MSC - Microscopical Society of Canada (1)
Northeastern Agricultural and Resource Economics Association (1)
Southern Agricultural Economics Association (SAEA) (1)
Mathematical Analysis of Machine Learning Algorithms
Tong Zhang
Expected online publication date: July 2023
The mathematical theory of machine learning not only explains the current algorithms but can also motivate principled approaches for the future. This self-contained textbook introduces students and researchers of AI to the main mathematical techniques used to analyze machine learning algorithms, with motivations and applications. Topics covered include the analysis of supervised learning algorithms in the iid setting, the analysis of neural networks (e.g. neural tangent kernel and mean-field analysis), and the analysis of machine learning algorithms in the sequential decision setting (e.g. online learning, bandit problems, and reinforcement learning). Students will learn the basic mathematical tools used in the theoretical analysis of these machine learning problems and how to apply them to the analysis of various concrete algorithms. This textbook is perfect for readers who have some background knowledge of basic machine learning methods, but want to gain sufficient technical knowledge to understand research papers in theoretical machine learning.
The efficacy of immune checkpoint inhibitor monotherapy or combined with other small molecule-targeted agents in ovarian cancer
Munawaer Muaibati, Abasi Abuduyilimu, Tao Zhang, Yun Dai, Ruyuan Li, Fanwei Huang, Kexin Li, Qing Tong, Xiaoyuan Huang, Liang Zhuang
Journal: Expert Reviews in Molecular Medicine / Accepted manuscript
Published online by Cambridge University Press: 24 January 2023, pp. 1-47
Profiles of interpersonal relationship qualities and trajectories of internalizing problems among Chinese adolescents
Jianjie Xu, Ruixi Sun, Jingyi Shen, Yuchi Zhang, Wei Tong, Xiaoyi Fang
Journal: Development and Psychopathology , First View
Published online by Cambridge University Press: 08 November 2022, pp. 1-12
Adolescence is a significant period for the formation of relationship networks and the development of internalizing problems. With a sample of Chinese adolescents (N = 3,834, 52.01% girls, Mage = 16.68 at Wave 1), the present study aimed to identify the configuration of adolescents' relationship qualities from four important domains (i.e., relationship quality with mother, father, peers, and teachers) and how distinct profiles were associated with the development of internalizing problems (indicated by depressive and anxiety symptoms) across high school years. Latent profile analysis identified a five-profile configuration with four convergent profiles (i.e., relationship qualities with others were generally good or bad) and one "Father estrangement" profile (i.e., the relationship quality with others were relatively good but that with father was particularly poor). Further conditional latent growth curve analysis indicated the "Father estrangement" profile was especially vulnerable to an increase in the internalizing problems as compared with other relationship profiles. This study contributes to understanding the characteristics of interpersonal relationship qualities and their influences on adolescent internalizing problems in a non-Western context. Results were further discussed from a culturally specific perspective.
A comparison between three-dimensional, transient, thermomechanically coupled first-order and Stokes ice flow models
Zhan Yan, Wei Leng, Yuzhe Wang, Cunde Xiao, Tong Zhang
Journal: Journal of Glaciology , First View
Published online by Cambridge University Press: 03 October 2022, pp. 1-12
In this study, we investigate the differences between two transient, three-dimensional, thermomechanically coupled ice-sheet models, namely, a first-order approximation model (FOM) and a 'full' Stokes ice-sheet model (FSM) under the same numerical framework. For all numerical experiments, we take the FSM outputs as the reference values and calculate the mean relative errors in the velocity and temperature fields for the FOM over 100 years. Four different boundary conditions (ice slope, geothermal heat flux, basal topography and basal sliding) are tested, and by changing these parameters, we verify the thermomechanical behavior of the FOM and discover that the velocity and temperature biases of the FOM generally increase with increases in the ice slope, geothermal heat flux, undulation amplitude of the ice base, and with the existence of basal sliding. In addition, the model difference between the FOM and FSM may accumulate over time, and the spatial distribution patterns of the relative velocity and temperature errors are in good agreement.
Dual-channel LIDAR searching, positioning, tracking and landing system for rotorcraft from ships at sea
Tao Zeng, Hua Wang, Xiucong Sun, Hui Li, Zhen Lu, Feifei Tong, Hao Cheng, Canlun Zheng, Mengying Zhang
Journal: The Journal of Navigation / Volume 75 / Issue 4 / July 2022
Print publication: July 2022
To address the shortcomings of existing methods for rotorcraft searching, positioning, tracking and landing on a ship at sea, a dual-channel LIDAR searching, positioning, tracking and landing system (DCLSPTLS) is proposed in this paper, which utilises the multi-pulse laser echoes accumulation method and the physical phenomenon that the laser reflectivity of the ship deck in the near-infrared band is four orders of magnitude higher than that of the sea surface. The DCLSPTLS searching and positioning model, tracking model and landing model are established, respectively. The searching and positioning model can provide estimates of the azimuth angle, the distance of the ship relative to the rotorcraft and the ship's course. With the above parameters as inputs, the total tracking time and the direction of the rotorcraft tracking speed can be obtained by using the tracking model. The landing model can calculate the pitch and the roll angles of the ship's deck relative to the rotorcraft by using the least squares method and the laser irradiation coordinates. The simulation shows that the DCLSPTLS can realise the functions of rotorcraft searching, positioning, tracking and landing by using the above parameters. To verify the effectiveness of the DCLSPTLS, a functional test is performed using a rotorcraft and a model ship on a lake. The test results are consistent with the results of the simulation.
Relative Severi inequality for fibrations of maximal Albanese dimension over curves
Surfaces and higher-dimensional varieties
Families, fibrations
Yong Hu, Tong Zhang
Journal: Forum of Mathematics, Sigma / Volume 10 / 2022
Published online by Cambridge University Press: 16 June 2022, e45
Let $f: X \to B$ be a relatively minimal fibration of maximal Albanese dimension from a variety X of dimension $n \ge 2$ to a curve B defined over an algebraically closed field of characteristic zero. We prove that $K_{X/B}^n \ge 2n! \chi _f$ . It verifies a conjectural formulation of Barja in [2]. Via the strategy outlined in [4], it also leads to a new proof of the Severi inequality for varieties of maximal Albanese dimension. Moreover, when the equality holds and $\chi _f> 0$ , we prove that the general fibre F of f has to satisfy the Severi equality that $K_F^{n-1} = 2(n-1)! \chi (F, \omega _F)$ . We also prove some sharper results of the same type under extra assumptions.
The Early Cretaceous tectonic evolution of the Neo-Tethys: constraints from zircon U–Pb geochronology and geochemistry of the Liuqiong adakite, Gongga, Tibet
Yao Zhong, Wen-Guang Yang, Li-Dong Zhu, Long Xie, Yuan-Jun Mai, Nan Li, Yu Zhou, Hong-Liang Zhang, Xia Tong, Wei-Na Feng
Journal: Geological Magazine / Volume 159 / Issue 10 / October 2022
Published online by Cambridge University Press: 13 June 2022, pp. 1647-1662
The subduction model of the Neo-Tethys during the Early Cretaceous has always been a controversial topic, and the scarcity of Early Cretaceous magmatic rocks in the southern part of the Gangdese batholith is the main cause of this debate. To address this issue, this article presents new zircon U–Pb chronology, zircon Hf isotope, whole-rock geochemistry and Sr–Nd isotope data for the Early Cretaceous quartz diorite dykes with adakite affinity in Liuqiong, Gongga. Zircon U–Pb dating of three samples yielded ages of c. 141–137 Ma, indicating that the Liuqiong quartz diorite was emplaced in the Early Cretaceous. The whole-rock geochemical analysis shows that the Liuqiong quartz diorite is enriched in large-ion lithophile elements (LILEs) and light rare-earth elements (LREEs) and is depleted in high-field-strength elements (HFSEs), which are related to slab subduction. Additionally, the Liuqiong quartz diorite has high SiO2, Al2O3 and Sr contents, high Sr/Y ratios and low heavy rare-earth element (HREE) and Y contents, which are compatible with typical adakite signatures. The initial 87Sr/86Sr values of the Liuqiong adakite range from 0.705617 to 0.705853, and the whole-rock ϵNd(t) values vary between +5.78 and +6.24. The zircon ϵHf(t) values vary from +11.5 to +16.4. Our results show that the Liuqiong adakite magma was derived from partial melting of the Neo-Tethyan oceanic plate (mid-ocean ridge basalt (MORB) + sediment + fluid), with some degree of subsequent peridotite interaction within the overlying mantle wedge. Combining regional data, we favour the interpretation that the Neo-Tethyan oceanic crust was subducted at a low angle beneath the Gangdese during the Early Cretaceous.
Role of dietary resistant starch in the regulation of broiler immunological characteristics
Ying-Ying Zhang, Ying-Sen Liu, Jiao-Long Li, Tong Xing, Yun Jiang, Lin Zhang, Feng Gao
Published online by Cambridge University Press: 23 May 2022, pp. 1-10
Resistant starch (RS) has received increased attention due to its potential health benefits. This study was aimed to investigate the effects of dietary corn RS on immunological characteristics of broilers. A total of 320 broiler chicks were randomly allocated to five dietary treatments: normal corn–soyabean (NC) diet group, corn starch diet group, 4 %, 8 % and 12 % RS diet groups. This trial lasted for 42 d. The relative weights of spleen, thymus and bursa, the concentrations of nitric oxide (NO) and IL-4 in plasma at 21 d of age, as well as the activities of total nitric oxide synthase (TNOS) and inducible nitric oxide synthase (iNOS) in plasma at 21 and 42 d of age showed positive linear responses (P < 0·05) to the increasing dietary RS level. Meanwhile, compared with the birds from the NC group at 21 d of age, birds fed 4 % RS, 8 % RS and 12 % RS diets exhibited higher (P < 0·05) relative weight of bursa and concentrations of NO and interferon-γ in plasma. Birds fed 4 % RS and 8 % RS diets showed higher (P < 0·05) number of IgA-producing cells in the jejunum. While compared with birds from the NC group at 42 d of age, birds fed 12 % RS diet showed higher (P < 0·05) relative weight of spleen and activities of TNOS and iNOS in plasma. These findings suggested that dietary corn RS supplementation can improve immune function in broilers.
Risk factors for sporadic listeriosis in Beijing, China: a matched case–control study
Yan-Lin Niu, Tong-Yu Wang, Xiao-Ai Zhang, Yun-Chang Guo, Ye-Wu Zhang, Chao Wang, Yang-Bo Wu, Jin-Ru Jiang, Xiao-Chen Ma
Journal: Epidemiology & Infection / Volume 150 / 2022
Published online by Cambridge University Press: 21 February 2022, e62
Listeriosis is a rare but serious foodborne disease caused by Listeria monocytogenes. This matched case–control study (1:1 ratio) aimed to identify the risk factors associated with food consumption and food-handling habits for the occurrence of sporadic listeriosis in Beijing, China. Cases were defined as patients from whom Listeria was isolated, in addition to the presence of symptoms, including fever, bacteraemia, sepsis and other clinical manifestations corresponding to listeriosis, which were reported via the Beijing Foodborne Disease Surveillance System. Basic patient information and possible risk factors associated with food consumption and food-handling habits were collected through face-to-face interviews. One hundred and six cases were enrolled from 1 January 2018 to 31 December 2020, including 52 perinatal cases and 54 non-perinatal cases. In the non-perinatal group, the consumption of Chinese cold dishes increased the risk of infection by 3.43-fold (95% confidence interval 1.27–9.25, χ2 = 5.92, P = 0.02). In the perinatal group, the risk of infection reduced by 95.2% when raw and cooked foods were well-separated (χ2 = 5.11, P = 0.02). These findings provide important scientific evidence for preventing infection by L. monocytogenes and improving the dissemination of advice regarding food safety for vulnerable populations.
Investigating the role of spatial spillovers as determinants of land conversion in urbanizing Canada
Feng Qiu, Qingmeng Tong, Junbiao Zhang
Journal: Environment and Development Economics / Volume 27 / Issue 4 / August 2022
Published online by Cambridge University Press: 09 November 2021, pp. 357-373
Although the impacts of income, population growth, and other important determinants of land-use change have been widely studied, there is less understanding of how spatial spillovers matter. Utilizing a spatial econometric approach, we investigate the main determinants of natural landscape conversion, focusing on quantifying local and global spatial spillovers. The empirical investigation applies to the Edmonton Metropolitan Region and the Calgary Regional Partnership in Canada. Key results include: (1) determinants of land conversion have significant spillover effects; (2) income, population density, road density, natural land endowment and land suitability for agriculture are all found to have influences on natural land conversion both in the own and neighboring areas; and (3) local (i.e., within the immediate neighboring areas) and global (in the entire study region) spillovers are different in strength and direction. Our work provides useful information for understanding the spillover issues in land conservation, resource governance, and optimal conservation design.
Adaptively robust filtering algorithm for maritime celestial navigation
Chong-hui Li, Zhang-lei Chen, Xin-jiang Liu, Bin Chen, Yong Zheng, Shuai Tong, Ruo-pu Wang
Journal: The Journal of Navigation / Volume 75 / Issue 1 / January 2022
Celestial navigation is an important means of maritime navigation; it can automatically achieve inertially referenced positioning and orientation after a long period of development. However, the impact of different accuracy of observations and the influence of nonstationary states, such as ship speed change and steering, are not taken into account in existing algorithms. To solve this problem, this paper proposes an adaptively robust maritime celestial navigation algorithm, in which each observation value is given an equivalent weight according to the robust estimation theory, and the dynamic balance between astronomical observation and prediction values of vessel motion is adjusted by applying the adaptive factor. With this system, compared with the frequently used least square method and extended Kalman filter algorithm, not only are the real-time and high-precision navigation parameters, such as position, course, and speed for the vessel, calculated simultaneously, but also the influence of abnormal observation and vessel motion status change could be well suppressed.
Dietary patterns and sarcopenia in elderly adults: the Tianjin Chronic Low-grade Systemic Inflammation and Health (TCLSIH) study
Xuena Wang, Mingxu Ye, Yeqing Gu, Xiaohui Wu, Ge Meng, Shanshan Bian, Hongmei Wu, Shunming Zhang, Yawen Wang, Tingjing Zhang, Jie Cheng, Shinan Gan, Tong Ji, Kaijun Niu
Journal: British Journal of Nutrition / Volume 128 / Issue 5 / 14 September 2022
Published online by Cambridge University Press: 27 September 2021, pp. 900-908
Print publication: 14 September 2022
Sarcopenia is a core contributor to several health consequences, including falls, fractures, physical limitations and disability. The pathophysiological processes of sarcopenia may be counteracted with the proper diet, delaying sarcopenia onset. Dietary pattern analysis is a whole diet approach used to investigate the relationship between diet and sarcopenia. Here, we aimed to investigate this relationship in an elderly Chinese population. A cross-sectional study with 2423 participants aged more than 60 years was performed. Sarcopenia was defined based on the guidelines of the Asian Working Group for Sarcopenia, composed of low muscle mass plus low grip strength and/or low gait speed. Dietary data were collected using a FFQ that included questions on 100 food items along with their specified serving sizes. Three dietary patterns were derived by factor analysis: sweet pattern, vegetable pattern and animal food pattern. The prevalence of sarcopenia was 16·1 %. The higher vegetable pattern score and animal food pattern score were related to lower prevalence of sarcopenia (Ptrend = 0·006 and < 0·001, respectively); the multivariate-adjusted OR of the prevalence of sarcopenia in the highest v. lowest quartiles were 0·54 (95 % CI 0·34, 0·86) and 0·50 (95 % CI 0·33, 0·74), separately. The sweet pattern score was not significantly related to the prevalence of sarcopenia. The present study showed that vegetable pattern and animal food pattern were related to a lower prevalence of sarcopenia in Chinese older adults. Further studies are required to clarify these findings.
A new, remarkably preserved, enantiornithine bird from the Upper Cretaceous Qiupa Formation of Henan (central China) and convergent evolution between enantiornithines and modern birds
Li Xu, Eric Buffetaut, Jingmai O'Connor, Xingliao Zhang, Songhai Jia, Jiming Zhang, Huali Chang, Haiyan Tong
Journal: Geological Magazine / Volume 158 / Issue 11 / November 2021
Published online by Cambridge University Press: 14 September 2021, pp. 2087-2094
A new enantiornithine bird is described on the basis of a well preserved partial skeleton from the Upper Cretaceous Qiupa Formation of Henan Province (central China). It provides new evidence about the osteology of Late Cretaceous enantiornithines, which are mainly known from isolated bones; in contrast, Early Cretaceous forms are often represented by complete skeletons. While the postcranial skeleton shows the usual distinctive characters of enantiornithines, the skull displays several features, including confluence of the antorbital fenestra and the orbit and loss of the postorbital, evolved convergently with modern birds. Although some enantiornithines retained primitive cranial morphologies into the latest Cretaceous Period, at least one lineage evolved cranial modifications that parallel those in modern birds.
Origin, tectonic environment and age of the Bibole banded iron formations, northwestern Congo Craton, Cameroon: geochemical and geochronological constraints
Arlette Pulcherie Djoukouo Soh, Sylvestre Ganno, Lianchang Zhang, Landry Soh Tamehe, Changle Wang, Zidong Peng, Xiaoxue Tong, Jean Paul Nzenti
Journal: Geological Magazine / Volume 158 / Issue 12 / December 2021
The newly discovered Bibole banded iron formations are located within the Nyong Group at the northwest of the Congo Craton in Cameroon. The Bibole banded iron formations comprise oxide (quartz-magnetite) and mixed oxide-silicate (chlorite-magnetite) facies banded iron formations, which are interbedded with felsic gneiss, phyllite and quartz-chlorite schist. Geochemical studies of the quartz-magnetite banded iron formations and chlorite-magnetite banded iron formations reveal that they are composed of >95 wt % Fe2O3 plus SiO2 and have low concentrations of Al2O3, TiO2 and high field strength elements. This indicates that the Bibole banded iron formations were not significantly contaminated by detrital materials. Post-Archaean Australian Shale–normalized rare earth element and yttrium patterns are characterized by positive La and Y anomalies, a relative depletion of light rare earth elements compared to heavy rare earth elements and positive Eu anomalies (average of 1.86 and 1.15 for the quartz-magnetite banded iron formations and chlorite-magnetite banded iron formations, respectively), suggesting the influence of low-temperature hydrothermal fluids and seawater. The quartz-magnetite banded iron formations display true negative Ce anomalies, while the chlorite-magnetite banded iron formations lack Ce anomalies. Combined with their distinct Eu anomalies consistent with Algoma- and Superior-type banded iron formations, we suggest that the Bibole banded iron formations were deposited under oxic to suboxic conditions in an extensional basin. SIMS U–Pb data indicate that the Bibole banded iron formations were deposited at 2466 Ma and experienced metamorphism and metasomatism at 2078 Ma during the Eburnean/Trans-Amazonian orogeny. Overall, these findings suggest that the studied banded iron formations probably marked the onset of the rise of atmospheric oxygen, also known as the Great Oxidation Event in the Congo Craton.
Deep Learning-Based Point-Scanning Super-Resolution Microscopy
Uri Manor, Linjing Fang, Fred Monroe, Sammy Weiser Novak, Lyndsey Kirk, Cara R. Schiavon, Seungyoon B. Yu, Tong Zhang, Melissa Wu, Kyle Kastner, Alaa Abdel Latif, Zijun Lin, Andrew Shaw, Yoshiyuki Kubota, John Mendenhall, Zhao Zhang, Gulcin Pekkurnaz, Kristen Harris, Jeremy Howard
Journal: Microscopy and Microanalysis / Volume 27 / Issue S1 / August 2021
Print publication: August 2021
Population dynamics driven by truncated stable processes with Markovian switching
Mathematical biology in general
Stochastic processes
Zhenzhong Zhang, Jinying Tong, Qingting Meng, You Liang
Journal: Journal of Applied Probability / Volume 58 / Issue 2 / June 2021
Published online by Cambridge University Press: 23 June 2021, pp. 505-522
Print publication: June 2021
We focus on the population dynamics driven by two classes of truncated $\alpha$-stable processes with Markovian switching. Almost necessary and sufficient conditions for the ergodicity of the proposed models are provided. These results also illustrate the impact on ergodicity and extinction conditions as the parameter $\alpha$ tends to 2.
Design of a new multi-epitope vaccine against Brucella based on T and B cell epitopes using bioinformatics methods
Zhiqiang Chen, Yuejie Zhu, Tong Sha, Zhiwei Li, Yujiao Li, Fengbo Zhang, Jianbing Ding
Published online by Cambridge University Press: 25 May 2021, e136
Brucellosis is one of the most serious and widespread zoonotic diseases, seriously threatening human health and national economies. This study was based on the dominant T/B cell epitopes of Brucella outer membrane protein 22 (Omp22), outer membrane protein 19 (Omp19) and outer membrane protein 28 (Omp28), using bioinformatics methods to design a safe and effective multi-epitope vaccine. The amino acid sequences of the proteins were obtained from the National Center for Biotechnology Information (NCBI) database, and the signal peptides were predicted by the SignalP-5.0 server. The surface accessibility and hydrophilic regions of the proteins were analysed with the ProtScale software, and the tertiary structure models of the proteins were predicted by I-TASSER software and labelled with the UCSF Chimera software. The software COBEpro, SVMTriP and BepiPred were used to predict B cell epitopes of the proteins. SYFPEITHI, RANKpep and IEDB were employed to predict T cell epitopes of the proteins. The dominant T/B epitopes of the three proteins were combined with HEYGAALEREAG and GGGS linkers, and carrier sequences were linked to the N- and C-terminus of the vaccine construct with the help of EAAAK linkers. Finally, the tertiary structure and the physical and chemical properties of the multi-epitope vaccine construct were analysed. The allergenicity, antigenicity and solubility of the multi-epitope vaccine construct were 7.37–11.30, 0.788 and 0.866, respectively. The Ramachandran diagram of the mock vaccine construct showed 96.0% of residues within the favoured and allowed ranges. Collectively, our results showed that this multi-epitope vaccine construct has a high-quality structure and suitable characteristics, which may provide a theoretical basis for future laboratory experiments.
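The final assembly step described above is essentially string concatenation of epitopes and linkers; a toy sketch using the linkers named in the abstract follows. All epitope and carrier sequences here are fabricated placeholders, not the study's actual sequences.

```python
# Linkers taken from the abstract; every sequence below is a placeholder.
EPITOPE_LINKER = "HEYGAALEREAG"
GS_LINKER = "GGGS"
EAAAK = "EAAAK"

t_epitopes = ["FLKDLMNTV", "GILGFVFTL"]      # hypothetical T cell epitopes
b_epitopes = ["NNSKPLDMK", "QYIKANSKF"]      # hypothetical B cell epitopes
carrier_n, carrier_c = "MAKLSTDE", "HHHHHH"  # hypothetical carrier/tag

construct = (carrier_n + EAAAK
             + EPITOPE_LINKER.join(t_epitopes) + GS_LINKER
             + GS_LINKER.join(b_epitopes)
             + EAAAK + carrier_c)
print(len(construct), construct)
```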
Effects of crustacean hyperglycaemic hormone RNA interference on regulation of glucose metabolism in Litopenaeus vannamei after ammonia-nitrogen exposure
Xin Zhang, Luqing Pan, Ruixue Tong, Yufen Li, Lingjun Si, Yuanjing Chen, Manni Wu, Qiaoqiao Wang
Journal: British Journal of Nutrition / Volume 127 / Issue 6 / 28 March 2022
Print publication: 28 March 2022
To unveil the adaptation of Litopenaeus vannamei to elevated ambient ammonia-N, crustacean hyperglycaemic hormone (CHH) was knocked down to investigate its function in the glucose metabolism pathway under ammonia-N exposure. When CHH was silenced, haemolymph glucose increased significantly during 3–6 h, decreased significantly during 12–48 h and recovered to the control groups' level at 72 h. After CHH knock-down, dopamine (DA) contents were significantly reduced during 3–24 h and recovered after 48 h. Besides, the expressions of guanylyl cyclase (GC) and DA1R in the hepatopancreas decreased significantly, while DA4R increased significantly. Correspondingly, the contents of cyclic AMP (cAMP), cyclic GMP (cGMP) and diacylglycerol (DAG) and the expressions of protein kinase A (PKA), protein kinase G (PKG), AMP-activated protein kinase α (AMPKα) and AMPKγ were significantly down-regulated, while the levels of protein kinase C (PKC) and AMPKβ were significantly up-regulated. The expressions of cyclic AMP response element-binding protein (CREB) and GLUT2 decreased significantly, while GLUT1 increased significantly. Moreover, glycogen content and glycogen synthase and glycogen phosphorylase activities in the hepatopancreas and muscle were significantly increased. Furthermore, the levels of the key glycolysis (GLY) enzymes hexokinase, pyruvate kinase and phosphofructokinase, the rate-limiting tricarboxylic acid cycle enzyme citrate synthase, and the critical gluconeogenesis (GNG) enzymes phosphoenolpyruvate carboxykinase, fructose diphosphatase and glucose-6-phosphatase were significantly decreased in the hepatopancreas. These results suggest that, under ammonia-N stress, CHH affects DA, and both act on their receptors to transmit glucose metabolism signals into the hepatopancreas of L. vannamei. CHH acts on the cGMP-PKG-AMPKα-CREB pathway through GC, and CHH affects DA to influence the cAMP-PKA-AMPKγ-CREB and DAG-PKC-AMPKβ-CREB pathways, thereby regulating GLUT, inhibiting glycogen metabolism and promoting GLY and GNG. This study contributes to a further understanding of the glucose metabolism mechanism of crustaceans in response to environmental stress.
Dynamic reprogramming and function of RNA N6-methyladenosine modification during porcine early embryonic development
Tong Yu, Xin Qi, Ling Zhang, Wei Ning, Di Gao, Tengteng Xu, Yangyang Ma, Jason G Knott, Anucha Sathanawongs, Zubing Cao, Yunhai Zhang
Journal: Zygote / Volume 29 / Issue 6 / December 2021
Published online by Cambridge University Press: 23 April 2021, pp. 417-426
N6-Methyladenosine (m6A) regulates the oocyte-to-embryo transition and the reprogramming of somatic cells into induced pluripotent stem cells. However, the role of m6A methylation in porcine early embryonic development and its reprogramming characteristics in somatic cell nuclear transfer (SCNT) embryos remain unknown. Here, we showed that m6A methylation was essential for normal early embryonic development and was aberrantly reprogrammed in SCNT embryos. We identified a persistent occurrence of m6A methylation in embryos from the 1-cell to the blastocyst stage, and m6A levels abruptly increased during the morula-to-blastocyst transition. Cycloleucine (a methylation inhibitor, 20 mM) treatment efficiently reduced m6A levels, significantly decreased the rates of 4-cell embryos and blastocysts, and disrupted normal lineage allocation. Moreover, cycloleucine treatment also led to higher levels of both apoptosis and autophagy in blastocysts. Furthermore, m6A levels in SCNT embryos at the 4-cell and 8-cell stages were significantly lower than those in parthenogenetic activation (PA) embryos, suggesting abnormal reprogramming of m6A methylation in SCNT embryos. Correspondingly, expression levels of the m6A writers (METTL3 and METTL14) and eraser (FTO) were apparently higher in SCNT 8-cell embryos compared with their PA counterparts. Taken together, these results indicated that aberrant nuclear transfer-mediated reprogramming of m6A methylation is involved in regulating porcine early embryonic development.
Bacillus amyloliquefaciens ameliorates high-carbohydrate diet-induced metabolic phenotypes by restoration of intestinal acetate-producing bacteria in Nile Tilapia
Rong Xu, Miao Li, Tong Wang, Yi-Wei Zhao, Cheng-Jie Shan, Fang Qiao, Li-Qiao Chen, Wen-Bing Zhang, Zhen-Yu Du, Mei-Ling Zhang
Poor carbohydrate utilisation efficiency often leads to metabolic phenotypes in fish. The intestinal microbiota plays an important role in carbohydrate degradation. Whether intestinal bacteria can alleviate high-carbohydrate diet (HCD)-induced metabolic phenotypes in fish remains unknown. Here, a strain affiliated to Bacillus amyloliquefaciens was isolated from the intestine of Nile tilapia. A basal diet (CON), HCD or HCD supplemented with B. amy SS1 (HCB) was used to feed fish for 10 weeks. Beneficial effects of B. amy SS1 on weight gain and protein accumulation were observed. Fasting glucose and lipid deposition were decreased in the HCB group compared with the HCD group. High-throughput sequencing showed that the abundance of acetate-producing bacteria was increased in the HCB group relative to the HCD group. Gas chromatographic analysis indicated that the concentration of intestinal acetate increased dramatically in the HCB group compared with that in the HCD group. Glucagon-like peptide-1 was also increased in the intestine and serum of the HCB group. Fish were then fed HCD or HCD supplemented with sodium acetate at 900 mg/kg (HLA), 1800 mg/kg (HMA) or 3600 mg/kg (HHA) of diet for 8 weeks, and the HMA and HHA groups mirrored the effects of B. amy SS1. This study revealed that B. amy SS1 could alleviate the metabolic phenotypes caused by HCD by enriching acetate-producing bacteria in the fish intestine. Regulating the intestinal microbiota and their metabolites might represent a powerful strategy for fish nutrition modulation and health maintenance in the future.
The Ginzburg-Landau functional with vanishing magnetic field (after K. Attar and Helffer-Kachmar)
Bernard Helffer
Département de Mathématiques, Université Paris-Sud 11
Laboratoire Jean Leray, Université de Nantes
We study the infimum of the Ginzburg-Landau functional in the case with a vanishing external magnetic field in a two dimensional simply connected domain. We obtain an energy asymptotics which is valid when the Ginzburg-Landau parameter is large and the strength of the external field is comparable with the third critical field. Compared with the known results when the external magnetic field does not vanish, we show in this regime a concentration of the energy near the zero set of the external magnetic field.
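For orientation, one commonly used normalization of the two-dimensional Ginzburg-Landau functional with a variable external field $B_0(x)$ is the following (the talk's exact normalization may differ):

\[\mathcal{E}_{\kappa,\sigma}(\psi,\mathbf{A})=\int_{\Omega}\Big(|(\nabla-i\kappa\sigma\mathbf{A})\psi|^{2}-\kappa^{2}|\psi|^{2}+\frac{\kappa^{2}}{2}|\psi|^{4}\Big)\,dx+\kappa^{2}\sigma^{2}\int_{\Omega}|\operatorname{curl}\mathbf{A}-B_{0}|^{2}\,dx,\]

where $\kappa$ is the Ginzburg-Landau parameter and $\sigma$ measures the strength of the external field; a "vanishing magnetic field" means that $B_0$ vanishes on a non-empty zero set inside $\Omega$.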
Defects of Liquid Crystals
Pingwen ZHANG
Vice Dean and Changjiang Professor
Department of Scientific & Engineering Computing (DSEC)
School of Mathematical Sciences (SMS)
Center for Computational Science & Engineering (CCSE)
Peking University (PKU)
Defects in liquid crystals (LCs) are of great practical and theoretical importance. Recently there has been growing interest in LC materials under topological constraint and/or external force, but the defect patterns and dynamics are still poorly understood. We investigate three-dimensional spherical droplets within the Landau-de Gennes model under different boundary conditions. When the Q-tensor is uniaxial, the model degenerates to the vector (Oseen-Frank) model, but the Q-tensor model is superior to the vector model as the former allows biaxiality in the order parameter. Using numerical simulation, a rich variety of defect patterns is found, and the results suggest that line disclinations always involve biaxiality or, equivalently, that uniaxiality admits only point defects. We therefore believe that the Q-tensor model is essential to include disclination lines, which are a common phenomenon in LCs. The mathematical implication of this observation will be discussed in this talk.
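For context, the bulk part of the Landau-de Gennes energy referred to here is usually written in the standard form (coefficients and the elastic terms vary by author):

\[f_{B}(Q)=\frac{a}{2}\,\mathrm{tr}(Q^{2})-\frac{b}{3}\,\mathrm{tr}(Q^{3})+\frac{c}{4}\,\big(\mathrm{tr}(Q^{2})\big)^{2},\]

where $Q$ is a symmetric, traceless $3\times 3$ tensor. Uniaxial states take the form $Q=s\,(\mathbf{n}\otimes\mathbf{n}-\frac{1}{3}I)$, which is precisely how the Q-tensor description reduces to the vector (Oseen-Frank) model.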
Counter-examples to Strong Diamagnetism
Søren Fournais
Professor
University of Aarhus
Consider a Schrödinger operator with magnetic field $B(x)$ in two dimensions. The classical diamagnetic inequality implies that the ground state energy with magnetic field, denoted by $\lambda_1(B)$, is higher than the one without a magnetic field. However, comparing the ground state energies for different non-zero magnetic fields is known to be a difficult question. We consider the special case where the magnetic field has the form $b\beta$, where $b$ is a (large) parameter and $\beta(x)$ is a fixed function. One might hope that monotonicity holds for large fields, i.e. that $\lambda_1(b_1\beta)>\lambda_1(b_2\beta)$ if $b_1>b_2$ are sufficiently large. We will display counterexamples to this hope and discuss applications to the theory of superconductivity in the Ginzburg-Landau model. This is joint work with Mikael Persson Sundqvist.
Stability of Radial Solutions in the Landau-de Gennes Theory: Interplay between Temperature and Geometry
Apala Majumdar
Nematic liquid crystals are anisotropic liquids with long-range orientational ordering, making them popular working materials for optical applications. The study of static nematic equilibria poses challenging questions in the calculus of variations and the theory of partial differential equations. We study two stability problems for the prototypical radial-hedgehog solution within the Landau-de Gennes theory for nematics. The radial-hedgehog solution is an example of a uniaxial nematic equilibrium with an isotropic defect core, analogous to a degree +1 vortex solution in the Ginzburg-Landau theory of superconductivity. The first problem concerns the radial-hedgehog solution in a spherical droplet with radial boundary conditions, for low temperatures below the nematic-isotropic transition temperature.
We prove that an arbitrary sequence of Landau-de Gennes minimizers converges strongly (in W^{1,2}) to the radial-hedgehog solution in the low-temperature limit. We use the celebrated division trick for superconductivity, blow-up techniques for the singularity profile and energy estimates to show that the radial-hedgehog solution is the unique physically relevant uniaxial equilibrium in the low-temperature limit. We then compute the second variation of the Landau-de Gennes energy about the radial-hedgehog solution and demonstrate its instability with respect to higher-dimensional biaxial perturbations, for sufficiently low temperatures. We conclude that Landau-de Gennes minimizers on a spherical droplet, with radial anchoring, are always biaxial for sufficiently low temperatures.
The second problem concerns a punctured spherical droplet with radial boundary conditions. We show that the radial-hedgehog solution is locally stable for all temperatures below the nematic-isotropic transition temperature on a punctured droplet. We adapt methods from [1], [2] and use convexity-based properties of the Landau-de Gennes energy to prove that the radial-hedgehog solution is, in fact, the unique global energy minimizer in two different asymptotic limits: the vanishing elastic constant limit and the low-temperature limit, in contrast to the instability result for a spherical droplet above.
This is joint work with Duvan Henao, Adriano Pisante and Mythily Ramaswamy.
[1] A. Majumdar, A. Zarnescu, Landau-de Gennes theory of nematic liquid crystals: the Oseen-Frank limit and beyond, Arch. Rat. Mech. Anal., 196 (2010), no. 1, 227-280.
[2] F. H. Lin and C. Liu, Static and Dynamic Theories of Liquid Crystals, Journal of Partial Differential Equations, Vol. 14, No. 4, 289-330 (2001).
On the Dynamical Q-tensor Models of Liquid Crystals
Shijin DING
Dean & Professor
In this talk, we first introduce the models of nematic liquid crystals and the known results about them. Then, we focus on the dynamical Q-tensor model, that is, the Beris-Edwards model. For this model, we prove the global existence of weak solutions, the global existence of strong solutions with large viscosity, and weak-strong uniqueness. In our discussion, the Landau-de Gennes functional takes a general form in which we only assume $L_5=0$, $L_1>0$ and $L_2+L_3>0$.
Plateau Problems in Singular Spaces
Robert Hardt
Many extremal variational problems involve a geometric constraint where each competing object, or some part of it, is required to lie in some fixed set. For example, a liquid crystal may be viewed as a map of a spatial region into the unit 2-sphere. A Plateau problem may require that each surface competing for least area lies in a fixed set A (thus the complement of A is an obstacle). Also, the boundary of the surface may be only partially fixed, with the remaining part free to range in some subset B. In general dimensions, these problems may, following [Federer-Fleming, 1960], be studied with classes of currents.
The existence and regularity properties of the minimizing currents depend on analytic properties of the sets A and B. We will discuss, with some general theorems and several examples, the role of smoothness and isoperimetric properties of the sets. We will refer to joint works with T. De Pauw, W. Pfeffer, and Q. Funk.
Energy-Minimizing Nematic Elastomers
Patricia Bauman
We prove weak lower semi-continuity and existence of energy-minimizers for a free energy describing stable deformations and the corresponding director configuration of an incompressible nematic liquid-crystal elastomer subject to physically realistic boundary conditions. The energy is a sum of the trace formula developed by Warner, Terentjev and Bladon (coupling the deformation gradient and the director field) and the bulk term for the director with coefficients depending on temperature. A key step in our analysis is to prove that the energy density has a convex extension to non-unit-length director fields.
Our results apply to the setting of physical experiments in which a thin incompressible elastomer in R^3 is clamped on its sides and stretched perpendicular to its initial director field, resulting in shape-changes and director re-orientation.
Changyou WANG
In this talk, I will discuss a simplified Ericksen-Leslie system modeling the hydrodynamic flow of nematic liquid crystals in dimension three, and present a new result on the existence of global weak solutions. This is a joint work with Professor Fanghua Lin.
Yaniv Almog
Consider a superconducting wire whose temperature is lower than the critical one. When the current through the wire exceeds some critical value, it is well known from experimental observation that the wire becomes resistive, behaving like a normal metal. We prove that the time-dependent Ginzburg-Landau model anticipates this behavior, and obtain an upper bound for the critical current. The bounds are obtained in terms of the resolvent of the linearized elliptic operator in ${\mathbb R}^2$ and ${\mathbb R}^2_+$. We then relate this problem to some spectral analysis of a more general class of non-selfadjoint operators.
Global Well-posedness of Incompressible Elastodynamics in 2D
Zhen LEI
In this talk I will report our recent result on the global well-posedness of classical solutions to the system of incompressible elastodynamics in 2D. The system is revealed to be inherently strongly linearly degenerate and automatically satisfies a strong null condition, due to its isotropic nature and the incompressibility constraint.
Logarithmic Interaction Energy for Infinitely Many Points in the Plane, Coulomb Gases and Weighted Fekete Sets
Etienne Sandier
Département de Mathématiques
Université Paris 12 Val de Marne
To a configuration of infinitely many points in the plane, one can associate an energy describing the Coulomb interaction of positive charges placed at these points with a negatively charged uniform background. I will describe results obtained in collaboration with S. Serfaty which give some basic properties of this energy and link it to superconducting vortices, log-gases and weighted Fekete sets. I will also describe criteria obtained in collaboration with Y. Ge which ensure that this energy is finite.
Mathematical Analysis of Liquid Crystal Models in Biology
Maria-Carme Calderer
In this lecture, I present and analyze models of anisotropic crosslinked polymers employing tools from the theories of nematic liquid crystals and liquid crystal elastomers. The anisotropy of these systems stems from the presence of rigid-rod molecular units in the network. Energy functionals of compressible, incompressible elastomers as well as rod-fluid networks are addressed. The theorems on the minimization of the energies combine methods of isotropic nonlinear elasticity with the theory of lyotropic liquid crystals. A main feature of the systems is the coupling between the Eulerian Landau-de Gennes liquid crystal energy and the Lagrangian anisotropic elastic energy of deformation. Predictions of cavities in the minimizing configurations follow as a result of the nature of the coupling. I will also present a mixed finite element analysis of the incompressible elastomer.
We apply the theory to the study of phase transitions in networks of rigid rods, in order to model the behavior of actin filament systems found in the cytoskeleton of the interstitial tissue and in the inner cell.
The phase transition behavior depends on geometric and physical parameters of the network as well as on the aspect ratio, length and density of the rigid groups. In particular, we focus on the formation of chevron structures in the case that the aspect ratio of the rods is small.
The results show good agreement with the molecular dynamics experiments reported in the literature as well as with laboratory experiments.
Finally, we address the problem of polymer encapsulation to study configurations of bacteriophages.
Regularity of Minimizers to a Constrained $Q$-tensor Energy for Liquid Crystals
Daniel Phillips
We investigate minimizers defined on a two-dimensional domain for the Maier--Saupe energy used to characterize nematic liquid crystal configurations. The energy density for the model is singular so as to constrain the competing states to take on physical values. We prove that minimizers are regular and in certain cases we are able to use this regularity to prove that minimizers must take on values strictly within the physical regime. This work is joint with Patricia Bauman.
Variational Problems in Smectic Liquid Crystals
Jinhae Park
In this talk, we will discuss a brief introduction to the governing energy functional for smectic liquid crystals including Chen-Lubensky energy terms. We then talk about existence of minimizers for Boundary Value Problems. Most part of this talk is from a joint work with P. Bauman and D. Phillips.
Well-posedness and Stability of a Hydrodynamic System Modeling Vesicle and Fluid Interactions
Associate Professor in Applied Mathematics
In this talk, we will discuss a hydrodynamic system modeling the deformation of vesicle membranes in incompressible viscous fluids. The system consists of the Navier-Stokes equations coupled with a fourth order phase-field equation. In the three dimensional case, we prove the existence/uniqueness of local strong solutions for arbitrary initial data as well as global strong solutions under the large viscosity assumption. We also establish some regularity criteria in terms of the velocity for local smooth solutions. Finally, we investigate the stability of the system near local minimizers of the elastic bending energy.
On Ground States of Spin-1 Bose-Einstein Condensates w/o external magnetic field
I-Liang Chern
In this talk, I will first give a brief introduction to spinor Bose-Einstein condensates (BECs). Then I will present two recent results, one numerical and the other analytical, for spinor BECs with/without a uniform external magnetic field. In the numerical study, a pseudo-arclength continuation method (PACM) is proposed for investigating the ground state patterns and phase diagrams of spin-1 Bose-Einstein condensates under the influence of a homogeneous magnetic field. Two types of phase transitions are found. The first type is a transition from a two-component (2C) state to a three-component (3C) state. The second type is a symmetry breaking in the 3C state, after which a phase separation of the spin components occurs. In the semi-classical regime, these two phase transition curves gradually merge.
In the analytical study, the ground states of spin-1 BECs are characterized. First, we present the case with no external magnetic field. For ferromagnetic systems, we show the validity of the so-called single-mode approximation (SMA). For antiferromagnetic systems, there are two subcases. When the total magnetization M≠0, the corresponding ground states have vanishing zeroth ($m_F = 0$) components (the so-called 2C state), and thus reduce to two-component systems. When M=0, the ground states also reduce to the SMA, and there are one-parameter families of such ground states. Next, we study the case when an external magnetic field is applied. It is shown analytically that, for antiferromagnetic systems, there is a phase transition from the 2C state to the 3C state as the external magnetic field increases. The key idea in the proof is a redistribution of masses among different components, which reduces kinetic energy in all situations and makes our proofs simple and unified. The numerical part is joint work with Jen-Hao Chen and Weichung Wang, whereas the analytical part is joint work with Liren Lin.
New PNP Type Systems for Ionic Liquids
Tai-Chia LIN
To describe ionic liquids with finite size effects involving different ion radii and valences, we derive new PNP (Poisson-Nernst-Planck) type systems and develop mathematical theorems for them. Symmetry and symmetry-breaking conditions are expressed through the coupling coefficients. When the non-symmetry-breaking condition holds, we prove existence theorems for solutions of the new PNP type systems. On the other hand, when the symmetry-breaking condition holds, two steady-state solutions can be found, and the excess currents (due to steric effects) associated with these two steady-state solutions are derived and expressed as two distinct formulas. Our results indicate that the new PNP type systems may become a useful model for studying ionic liquids and related topics in liquid crystals.
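As background, the classical Poisson-Nernst-Planck system (without the steric modifications the talk introduces) reads, in one standard dimensional form:

\[\partial_t c_i=\nabla\cdot\Big(D_i\big(\nabla c_i+\frac{z_i e}{k_B T}\,c_i\,\nabla\phi\big)\Big),\qquad -\nabla\cdot(\varepsilon\nabla\phi)=\sum_i z_i e\,c_i,\]

where $c_i$, $z_i$ and $D_i$ are the concentration, valence and diffusivity of the $i$-th ionic species and $\phi$ is the electrostatic potential; the new systems add coupling terms that encode the finite ion radii.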
Energetic Variational Approaches: General Diffusion and Stochastic Process
Chun LIU
In this talk, I will explore the variational structures in some specific types of non-ideal diffusion and stochastic processes. In particular, we will focus on those with nonlocal interactions and couplings.
Background Rejection in the DMTPC Dark Matter Search Using Charge Signals (1109.3501)
J.P. Lopez, S. Ahlen, J. Battat, T. Caldwell, M. Chernicoff, C. Deaconu, D. Dujmic, A. Dushkin, W. Fedus, P. Fisher, F. Golub, S. Henderson, A. Inglis, A. Kaboth, G. Kohse, L. Kirsch, R. Lanza, A. Lee, J. Monroe, H. Ouyang, T. Sahin, G. Sciolla, N. Skvorodnev, H. Tomita, H. Wellenstein, I. Wolfe, R. Yamamoto, H. Yegoryan
Sept. 15, 2011 hep-ex, physics.ins-det, astro-ph.IM
The Dark Matter Time Projection Chamber (DMTPC) collaboration is developing low-pressure gas TPC detectors for measuring WIMP-nucleon interactions. Optical readout with CCD cameras allows for the detection of the daily modulation in the direction of the dark matter wind, while several charge readout channels allow for the measurement of additional recoil properties. In this article, we show that adding the charge readout analysis to the CCD allows us to obtain a statistics-limited 90% C.L. upper limit on the $e^-$ rejection factor of $5.6\times10^{-6}$ for recoils with energies between 40 and 200 keV$_{\mathrm{ee}}$. In addition, requiring coincidence between charge signals and light in the CCD reduces CCD-specific backgrounds by more than two orders of magnitude.
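As a statistics aside: with zero events surviving the cuts, a statistics-limited 90% C.L. upper limit on a rejection factor is simply the Poisson upper bound of 2.30 events divided by the number of test events. A sketch follows; the event count used below is back-solved from the quoted limit for illustration and is not taken from the paper.

```python
import math

def poisson_upper_limit(n_observed: int, cl: float = 0.90) -> float:
    """Classical upper limit on a Poisson mean given n observed events:
    solve sum_{k<=n} exp(-mu) mu^k / k! = 1 - cl by bisection."""
    lo, hi = 0.0, 10.0 * (n_observed + 3)
    for _ in range(200):
        mu = 0.5 * (lo + hi)
        p = sum(math.exp(-mu) * mu**k / math.factorial(k)
                for k in range(n_observed + 1))
        if p > 1 - cl:
            lo = mu          # tail probability still too large: raise mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

n_electrons = 410_000        # assumed exposure, chosen to reproduce ~5.6e-6
print(f"{poisson_upper_limit(0) / n_electrons:.2e}")
```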
DMTPC: Dark matter detection with directional sensitivity (1012.3912)
J.B.R. Battat, S. Ahlen, T. Caldwell, C. Deaconu, D. Dujmic, W. Fedus, P. Fisher, F. Golub, S. Henderson, A. Inglis, A. Kaboth, G. Kohse, R. Lanza, A. Lee, J. Lopez, J. Monroe, T. Sahin, G. Sciolla, N. Skvorodnev, H. Tomita, H. Wellenstein, I. Wolfe, R. Yamamoto, H. Yegoryan
Dec. 17, 2010 astro-ph.CO, astro-ph.IM
The Dark Matter Time Projection Chamber (DMTPC) experiment uses CF_4 gas at low pressure (0.1 atm) to search for the directional signature of Galactic WIMP dark matter. We describe the DMTPC apparatus and summarize recent results from a 35.7 g-day exposure surface run at MIT. After nuclear recoil cuts are applied to the data, we find 105 candidate events in the energy range 80 - 200 keV, which is consistent with the expected cosmogenic neutron background. Using this data, we obtain a limit on the spin-dependent WIMP-proton cross-section of 2.0 \times 10^{-33} cm^2 at a WIMP mass of 115 GeV/c^2. This detector is currently deployed underground at the Waste Isolation Pilot Plant in New Mexico.
First Dark Matter Search Results from a Surface Run of the 10-L DMTPC Directional Dark Matter Detector (1006.2928)
S. Ahlen, J. B. R. Battat, T. Caldwell, C. Deaconu, D. Dujmic, W. Fedus, P. Fisher, F. Golub, S. Henderson, A. Inglis, A. Kaboth, G. Kohse, R. Lanza, A. Lee, J. Lopez, J. Monroe, T. Sahin, G. Sciolla, N. Skvorodnev, H. Tomita, H. Wellenstein, I. Wolfe, R. Yamamoto, H. Yegoryan
Dec. 9, 2010 hep-ex, astro-ph.IM
The Dark Matter Time Projection Chamber (DMTPC) is a low pressure (75 Torr CF4) 10 liter detector capable of measuring the vector direction of nuclear recoils with the goal of directional dark matter detection. In this paper we present the first dark matter limit from DMTPC. In an analysis window of 80-200 keV recoil energy, based on a 35.7 g-day exposure, we set a 90% C.L. upper limit on the spin-dependent WIMP-proton cross section of 2.0 x 10^{-33} cm^{2} for 115 GeV/c^2 dark matter particle mass.
The case for a directional dark matter detector and the status of current experimental efforts (0911.0323)
S. Ahlen, N. Afshordi, J. B. R. Battat, J. Billard, N. Bozorgnia, S. Burgos, T. Caldwell, J. M. Carmona, S. Cebrian, P. Colas, T. Dafni, E. Daw, D. Dujmic, A. Dushkin, W. Fedus, E. Ferrer, D. Finkbeiner, P. H. Fisher, J. Forbes, T. Fusayasu, J. Galan, T. Gamble, C. Ghag, I. Giomataris, M. Gold, H. Gomez, M. E. Gomez, P. Gondolo, A. Green, C. Grignon, O. Guillaudin, C. Hagemann, K. Hattori, S. Henderson, N. Higashi, C. Ida, F. J. Iguaz, A. Inglis, I. G. Irastorza, S. Iwaki, A. Kaboth, S. Kabuki, J. Kadyk, N. Kallivayalil, H. Kubo, S. Kurosawa, V. A. Kudryavtsev, T. Lamy, R. Lanza, T. B. Lawson, A. Lee, E. R. Lee, T. Lin, D. Loomba, J. Lopez, G. Luzon, T. Manobu, J. Martoff, F. Mayet, B. Mccluskey, E. Miller, K. Miuchi, J. Monroe, B. Morgan, D. Muna, A. St. J. Murphy, T. Naka, K. Nakamura, M. Nakamura, T. Nakano, G. G. Nicklin, H. Nishimura, K. Niwa, S. M. Paling, J. Parker, A. Petkov, M. Pipe, K. Pushkin, M. Robinson, A. Rodriguez, J. Rodriguez-Quintero, T. Sahin, R. Sanderson, N. Sanghi, D. Santos, O. Sato, T. Sawano, G. Sciolla, H. Sekiya, T. R. Slatyer, D. P. Snowden-Ifft, N. J. C. Spooner, A. Sugiyama, A. Takada, M. Takahashi, A. Takeda, T. Tanimori, K. Taniue, A. Tomas, H. Tomita, K. Tsuchiya, J. Turk, E. Tziaferi, K. Ueno, S. Vahsen, R. Vanderspek, J. Vergados, J. A. Villar, H. Wellenstein, I. Wolfe, R. K. Yamamoto, H. Yegoryan
Nov. 1, 2009 astro-ph.CO
We present the case for a dark matter detector with directional sensitivity. This document was developed at the 2009 CYGNUS workshop on directional dark matter detection, and contains contributions from theorists and experimental groups in the field. We describe the need for a dark matter detector with directional sensitivity; each directional dark matter experiment presents their project's status; and we close with a feasibility study for scaling up to a one ton directional detector, which would cost around $150M.
Transport properties of electrons in CF4 (0905.2549)
T. Caldwell, A. Roccaro, T. Sahin, H. Yegoryan, D. Dujmic, S. Ahlen, J. Battat, P. Fisher, S. Henderson, A. Kaboth, G. Kohse, R. Lanza, J. Lopez, J. Monroe, G. Sciolla, N. Skvorodnev, H. Tomita, R. Vanderspek, H. Wellenstein, R. Yamamoto
May 15, 2009 physics.ins-det
Carbon tetrafluoride (CF4) is used as a counting gas in particle detectors, but some of its properties of interest for large time-projection chambers are not well known. We measure the mean energy, which is proportional to the diffusion coefficient, and the attenuation coefficient of electron propagation in CF4 gas using a 10-liter dark matter detector prototype of the DMTPC project.
The DMTPC project (0903.3895)
G. Sciolla, J. Battat, T. Caldwell, D. Dujmic, P. Fisher, S. Henderson, R. Lanza, A. Lee, J. Lopez, A. Kaboth, G. Kohse, J. Monroe, T. Sahin, G. Sciolla, R. Yamamoto, H. Yegoryan, S. Ahlen, K. Otis, H. Tomita, A. Dushkin, H. Wellenstein
March 23, 2009 astro-ph.CO, astro-ph.IM
The DMTPC detector is a low-pressure CF4 TPC with optical readout for directional detection of Dark Matter. The combination of the energy and directional tracking information allows for an efficient suppression of all backgrounds. The choice of gas (CF4) makes this detector particularly sensitive to spin-dependent interactions.
The DMTPC detector (0811.2922)
G. Sciolla, J. Battat, T. Caldwell, B. Cornell, D. Dujmic, P. Fisher, S. Henderson, R. Lanza, A. Lee, J. Lopez, A. Kaboth, G. Kohse, J. Monroe, T. Sahin, R. Vanderspek, R. Yamamoto, H. Yegoryan, S. Ahlen, D. Avery, K. Otis, A. Roccaro, H. Tomita, A. Dushkin, H. Wellenstein
Nov. 18, 2008 astro-ph
Directional detection of Dark Matter allows for unambiguous direct detection of WIMPs as well as discrimination between various Dark Matter models in our galaxy. The DMTPC detector is a low-pressure TPC with optical readout designed for directional direct detection of WIMPs. By using CF4 gas as the active material, the detector also has excellent sensitivity to spin-dependent interactions of Dark Matter on protons.
DMTPC: a new apparatus for directional detection of Dark Matter (0810.0291)
G. Sciolla, A. Lee, J. Battat, T. Caldwell, B. Cornell, D. Dujmic, P. Fisher, S. Henderson, R. Lanza, J. Lopez, A. Kaboth, G. Kohse, J. Monroe, T. Sahin, R. Vanderspek, R. Yamamoto, H. Yegoryan, S. Ahlen, D. Avery, K. Otis, A. Roccaro, H. Tomita, A. Dushkin, H. Wellenstein
Oct. 1, 2008 astro-ph
Home / Basic Electrical / Hybrid Parameters of Two Port Network
Hybrid Parameters of Two Port Network
Ahmad Faizan Basic Electrical
For analyzing circuits containing active devices such as transistors, it is more convenient to think of the input terminals of a four-terminal coupling network as a Thévenin-equivalent voltage source and the output terminals as a Norton-equivalent current source. We then describe the coupling network in terms of four hybrid parameters (h-parameters). We determine these parameters using the same measurement techniques as for z-parameters and y-parameters.
To find the open-circuit voltage of the Thévenin-equivalent source at the input terminals (port 1) in Figure 1(a), we feed V2 into the output terminals (port 2). In this circuit, we consider the Thévenin-equivalent source to be a voltage-controlled voltage source. The parameter that represents the fraction of the output voltage appearing at the input terminals is V1/V2, which is a ratio without units. This parameter is the open-circuit reverse-voltage ratio, h12.
Since we are treating the dependent source as a voltage-controlled voltage source, we short-circuit the output terminals while we measure the input voltage and current, as shown in Figure 1(b). The parameter h11 is V1/I1, which is expressed in ohms and represents the short-circuit input impedance of the network. Since h12V2 is a voltage source, the equivalent input circuit for the coupling network shows the dependent voltage source and input impedance in series, as in Figure 1(c).
Figure 1 Finding the Thévenin-equivalent input circuit of a four-terminal network: (a) Open-circuit reverse voltage; (b) Internal input impedance; (c) Network input parameters
To determine the short-circuit current of the Norton-equivalent source at the output terminals (port 2) in Figure 2(a), we feed I1 into the input terminals and short-circuit the output terminals through the ammeter measuring I2. As long as the network impedances are linear (independent of voltage and current), I2 will be a constant fraction of the input current I1. The ratio I2/I1 is the short-circuit forward-current ratio, h21.
Figure 2 Finding the Norton-equivalent output circuit of a four-terminal network: (a) short-circuit forward current; (b) output admittance; (c) complete hybrid parameters.
The output impedance of a Norton-equivalent source is in parallel with the current source, so the fourth hybrid parameter is expressed as an admittance. Since we are treating this dependent source as a current-controlled current source, we leave the input terminals of the network open-circuit to make I1 zero while we measure I2 and V2. The parameter h22 is I2/V2, which is expressed in siemens and represents the open-circuit output admittance. These equations summarize the four hybrid parameters of a four-terminal coupling network:
Short-circuit input impedance:
\[h_{11}=\frac{V_1}{I_1}\quad(\text{with }V_2=0)\qquad(1)\]
Open-circuit reverse-voltage ratio:
\[h_{12}=\frac{V_1}{V_2}\quad(\text{with }I_1=0)\qquad(2)\]
Short-circuit forward-current ratio:
\[h_{21}=\frac{I_2}{I_1}\quad(\text{with }V_2=0)\qquad(3)\]
Open-circuit output admittance:
\[h_{22}=\frac{I_2}{V_2}\quad(\text{with }I_1=0)\qquad(4)\]
Figure 2(c) shows the resulting h-parameter equivalent circuit. For the Thévenin-equivalent source for the network input, we can write a Kirchhoff's voltage-law equation, as we did for z-parameters. For the Norton-equivalent source for the network output, we write a Kirchhoff's current-law equation, as we did for y-parameters. The two unknowns in these equations are I1 and V2.
\[h_{11}I_1+h_{12}V_2=E_1\qquad(5)\]
\[h_{21}I_1+h_{22}V_2=I_2\qquad(6)\]
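A quick numeric sketch of using Equations (5) and (6): terminate port 2 in a load $Z_L$, so that $I_2 = -V_2/Z_L$, and solve the resulting 2×2 linear system for $I_1$ and $V_2$. The h-values below are illustrative, roughly transistor-like numbers, not the values from Figure 3.

```python
import numpy as np

# Illustrative h-parameters (ohm, dimensionless, dimensionless, siemens)
h11, h12, h21, h22 = 1.1e3, 2.5e-4, 50.0, 25e-6
E1, ZL = 10e-3, 4.7e3          # source voltage (V) and port-2 load (ohm)

# Eq. (5):               h11*I1 + h12*V2          = E1
# Eq. (6), I2 = -V2/ZL:  h21*I1 + (h22 + 1/ZL)*V2 = 0
A = np.array([[h11, h12],
              [h21, h22 + 1.0 / ZL]])
I1, V2 = np.linalg.solve(A, np.array([E1, 0.0]))

Zin = E1 / I1                  # input impedance seen by an ideal source
print(f"I1 = {I1:.3e} A, V2 = {V2:.3f} V, Zin = {Zin:.0f} ohm")
```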
The transistor amplifier equivalent circuit of Figure 3 is a typical example of hybrid parameters.
Figure 3 Hybrid parameters of a simple transistor amplifier
We cannot use Thévenin's theorem to find the equivalent internal resistance of a dependent source when the controlling element is included in the transformation. Therefore, when we want to determine the input and output impedances of coupling networks, we must calculate V/I. We determined h11 with a short circuit across the output terminals of the network. In the circuit of Figure 3, Zin differs slightly from h11 since the circuit has some reverse coupling.
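Eliminating $V_2$ between Equations (5) and (6) with a load $Z_L$ across port 2 makes this precise (a derivation sketch):

\[Z_{in}=\frac{V_1}{I_1}=h_{11}-\frac{h_{12}\,h_{21}}{h_{22}+1/Z_L},\]

which reduces to $h_{11}$ exactly only when the reverse-coupling parameter $h_{12}$ is zero.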
By Adam Hayes
Reviewed By David Kindness
What Is Enterprise Multiple?
Enterprise multiple, also known as the EV multiple, is a ratio used to determine the value of a company. The enterprise multiple, which is enterprise value divided by earnings before interest, taxes, depreciation, and amortization (EBITDA), looks at a company the way a potential acquirer would by considering the company's debt. What's considered a "good" or "bad" enterprise multiple will depend on the industry.
Formula and Calculation of Enterprise Multiple
\[\text{Enterprise Multiple}=\frac{\text{EV}}{\text{EBITDA}}\]
where EV = enterprise value = market capitalization + total debt − cash and cash equivalents, and EBITDA = earnings before interest, taxes, depreciation, and amortization.
Enterprise multiple, also known as the EV-to-EBITDA multiple, is a ratio used to determine the value of a company.
It is computed by dividing enterprise value by EBITDA.
The enterprise multiple takes into account a company's debt and cash levels in addition to its stock price and relates that value to the firm's cash profitability.
Enterprise multiples can vary depending on the industry.
Higher enterprise multiples are expected in high-growth industries and lower multiples in industries with slow growth.
What Enterprise Multiple Can Tell You
Investors mainly use a company's enterprise multiple to determine whether a company is undervalued or overvalued. A low ratio relative to peers or historical averages indicates that a company might be undervalued and a high ratio indicates that the company might be overvalued.
An enterprise multiple is useful for transnational comparisons because it ignores the distorting effects of individual countries' taxation policies. It's also used to find attractive takeover candidates since enterprise value includes debt and is a better metric than market capitalization for merger and acquisition (M&A) purposes.
Enterprise multiples can vary depending on the industry. It is reasonable to expect higher enterprise multiples in high-growth industries (e.g. biotech) and lower multiples in industries with slow growth (e.g. railways).
Enterprise value (EV) is a measure of the economic value of a company. It is frequently used to determine the value of the business if it is acquired. It is considered to be a better valuation measure for M&A than a market cap since it includes the debt an acquirer would have to assume and the cash they'd receive.
Example of How to Use Enterprise Multiple
Dollar General (DG) generated $3.18 billion in EBITDA for the trailing 12 months (TTM) as of the quarter ending May 1, 2020. The company had $2.67 billion in cash and cash equivalents and $3.97 billion in debt for the same ending quarter.
The company's market cap was $48.5 billion as of Aug. 10, 2020. Dollar General's enterprise multiple is 15.7 [($48.5 billion + $3.97 billion - $2.67 billion) / $3.18 billion]. At the same time last year, Dollar General's enterprise multiple was 14. The increase in the enterprise multiple is largely a result of the near $15 billion increase in market cap, while EBITDA increased just around $500 million. In this example, you can see how the Enterprise Multiple calculation takes into account both the cash the company has on hand and the debt the company is liable for.
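The arithmetic in this example is easy to wrap in a small helper; a minimal sketch using the figures above (USD billions):

```python
def enterprise_multiple(market_cap, total_debt, cash, ebitda):
    """EV/EBITDA, with EV = market cap + total debt - cash & equivalents."""
    ev = market_cap + total_debt - cash
    return ev / ebitda

# Dollar General figures from the example (USD billions)
print(round(enterprise_multiple(48.5, 3.97, 2.67, 3.18), 1))  # -> 15.7
```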
Limitations of Using Enterprise Multiple
An enterprise multiple is a metric used for finding attractive buyout targets. But, beware of value traps—stocks with low multiples because they are deserved (e.g. the company is struggling and won't recover). This creates the illusion of a value investment, but the fundamentals of the industry or company point toward negative returns.
Investors assume that a stock's past performance is indicative of future returns and when the multiple comes down, they often jump at the opportunity to buy it at a "cheap" value. Knowledge of the industry and company fundamentals can help assess the stock's actual value.
One easy way to do this is to look at expected (forward) profitability and determine whether the projections pass the test. Forward multiples should be lower than the TTM multiples. Value traps occur when these forward multiples look overly cheap, but the reality is the projected EBITDA is too high and the stock price has already fallen, likely reflecting the market's cautiousness. As such, it's important to know the catalysts for the company and industry.
Dollar General. "Form 10-Q for the quarterly period ended May 1, 2020." Accessed Aug. 21, 2020
May 2014, 34(5): 2243-2259. doi: 10.3934/dcds.2014.34.2243
Asymptotic behavior of Navier-Stokes-Korteweg with friction in $\mathbb{R}^{3}$
Zhong Tan 1, Xu Zhang 2 and Huaqiao Wang 2
School of Mathematical Sciences, Xiamen University, Xiamen, Fujian 361005
School of Mathematical Sciences, Xiamen University, Fujian 361005, China
Received March 2013 Revised July 2013 Published October 2013
We consider the compressible barotropic Navier-Stokes-Korteweg system with friction in this paper. Global solutions and optimal convergence rates are obtained by a pure energy method provided the initial perturbation around a constant state is small enough. In particular, the decay rates of the higher-order spatial derivatives of the solution are obtained. Our proof is based on a family of scaled energy estimates and interpolations among them, without linear decay analysis.
Keywords: Compressible Navier-Stokes equations, Korteweg, energy method, Sobolev interpolation, optimal decay rates.
Mathematics Subject Classification: Primary: 35Q30, 76N10; Secondary: 76D0.
Citation: Zhong Tan, Xu Zhang, Huaqiao Wang. Asymptotic behavior of Navier-Stokes-Korteweg with friction in $\mathbb{R}^{3}$. Discrete & Continuous Dynamical Systems - A, 2014, 34 (5) : 2243-2259. doi: 10.3934/dcds.2014.34.2243
|
CommonCrawl
|
Performance Optimization of Thermoelectric Cooler Using Genetic Algorithm
Jitendra Mohan Giri* | Pawan Kumar Singh Nain
School of Mechanical Engineering, Galgotias University, Greater Noida 201312, India
[email protected]
Thermoelectric coolers (TECs) use the Peltier effect for thermal management of electronic devices. They offer high reliability and low-noise operation but are limited in use by their low performance. In the present work, two single-objective optimizations, aiming at maximization of the cooling capacity and maximization of the coefficient of performance (COP) of a TEC with space restrictions, are carried out using a genetic algorithm (GA). Interfacial thermal resistance and electrical contact resistance are taken into consideration to obtain a more realistic model. This paper presents a new approach to finding appropriate solutions by optimally arranging the length of the n-type and p-type thermoelectric (TE) elements, the cross-sectional area of the TE elements, and the input electric current. To validate the GA predictions, three-dimensional steady-state TEC models are prepared and finite-element simulations are carried out using ANSYS®. Close agreement between the GA and ANSYS® results has been observed. This study provides a new mathematical optimization model that is more realistic and quite close to the physical construction of TEC modules manufactured by industry.
thermoelectric cooler, optimization, genetic algorithm, finite-element method, ANSYS workbench, cooling capacity, COP
1. Introduction

Solid-state thermoelectric (TE) technology attracts great attention from researchers because of its potential use in green energy-conversion devices. The Peltier effect of thermoelectric technology offers direct conversion of electrical energy into a temperature difference. Conversely, the Seebeck effect of TE technology provides conversion of the thermal energy of a temperature differential into electric power [1]. A thermoelectric cooler (TEC) dissipates heat and removes hotspots of electronic devices in an environment-friendly manner using the Peltier effect. A TEC can be installed easily within a restricted space because it can practically be manufactured in small sizes. Thermoelectric coolers must be appropriately designed and manufactured to meet the necessary performance requirements. Two essential performance parameters of a TEC are the cooling capacity and the coefficient of performance. The cooling capacity of thermoelectric coolers ranges from milliwatts to watts depending on the requirements. The maximum cooling effect or a higher COP can be achieved through upgraded TE materials and improved device design.
The efficiency of TE materials increases with a material property known as the figure of merit (Z). The term Z is defined as α²/RK, where α is the Seebeck coefficient, R is the electrical resistance, and K is the thermal conductance. With the absolute temperature (T), the dimensionless figure of merit (ZT) is used to characterize TE materials. A higher value of ZT corresponds to better cooling performance. Hicks and Dresselhaus described that the value of ZT could be enhanced by reducing the dimensions of thermoelectric materials [2, 3]. At room temperature, Venkatasubramanian et al. [4] reported a ZT of about 2.4 for p-type Bi2Te3/Sb2Te3 superlattice devices. Peak ZT values of different TE materials are attainable at different temperatures. Over the past two decades, significant progress in maximizing ZT has been made in developing thermoelectric materials [5-10].
With the significant ongoing efforts to improve TE materials, researchers also focus on the design and assembly of TECs. Investigations have established that the geometric structure of the thermoelectric elements affects the performance of thermoelectric coolers [11-15]. Huang et al. [16] combined a three-dimensional TEC model with a simplified conjugate-gradient technique. They reported that, at a fixed temperature difference and fixed current, a substantial total area of TE elements with a small element length can maximize the cooling capacity. Yang et al. [17] reported that micro-thermoelectric coolers operating in a transient regime can provide a better cooling effect. Nain et al. [18] reported that a suitable value of the dimensionless current can enhance the performance of a TEC; Pareto-optimal solutions were obtained for different settings of the temperature ratio. Shen et al. [19] reported that a two-segmented TE element structure can reduce the Joule heating effect on the cold side from 50% to 35%, and their results showed a remarkable 118.1% improvement in maximum cooling capacity. Nain et al. [20] optimized the cooling capacity and COP of a TEC using dimensional structural parameters as design variables; the geometrical parameters were optimized to find Pareto-optimal solutions. Jeong [21] reported that the COP of a TEC can be increased by optimal values of the current and the length of the thermoelements. Lee [22] proposed a dimensional-analysis approach to find the optimal design of TE devices with feasible mechanical constraints. Fabián-Mijangos et al. [23] reported a novel design of asymmetrical legs to enhance the performance of TE devices.
The literature reports several studies on performance optimization of TECs [13, 15, 24-29]. So far, the standard approach has been to choose either a set of geometric design variables or a set of operating design variables. In this paper, a combined set of three design variables (the electric current, the length of the n-type and p-type TE elements, and the cross-sectional area of the TE elements) is chosen in both optimization problems. The mathematical model used by the optimization algorithm is customized to handle the presence of the ceramic substrates, copper contacts, electric contact resistances at the interfaces, and heat sink, which are essential parts in the fabrication of a TEC module for industrial applications; this is a new aspect of TEC modelling. The geometry of the thermoelectric elements plays a vital role in the performance of the thermoelectric cooler. However, tight geometric space constraints are found in many telecommunications and other scientific applications, and TECs are used for cooling electronic devices where space restrictions are quite prevalent. Hence, considering performance optimization of a TEC with space restrictions is a very valid assumption. The genetic algorithm is used to maximize the cooling capacity and the COP of a TEC with space restrictions in two different optimization problems. The optimization results are validated through finite-element simulations using ANSYS®.
2. Description of a Thermoelectric Cooler Model
The general schematic diagram of a practical single-stage thermoelectric cooler is shown in Figure 1 (a). A thermoelectric cooler (TEC) consists of many thermoelectric (TE) elements. These thermoelectric elements are assembled electrically in series. Copper tabs are used to interconnect n-type and p-type elements. This array configuration is sandwiched between two thermally conducting ceramic substrates. Figure 1 (b) is an exploded view diagram of a practical TEC system.
The basic unit of the physical model of a TEC is a thermocouple (a pair of n-type and p-type semiconductor thermoelectric elements). The number of pairs of thermoelectric elements may vary from several to hundreds. On the one hand, the manufacturing cost of a TEC is high; on the other hand, many TE materials are high-priced. Further, to predict the performance of a TEC with a heat sink, knowing the temperature at the important points is quite difficult. Also, the thermal resistances of the heat sink, copper conductors, and ceramic substrates play a significant role in the total resistance to heat flow in the TEC system. These issues make the performance optimization problem challenging to solve. In this work, the effects of electrical contact resistance and thermal resistance are included, as are the impacts of Joule heating and thermal conduction.
In this work, to simplify the investigation while accounting for thermal resistances, a thermal-resistance model has been developed. This model includes the thermal resistances of the copper tabs, the ceramic substrates, and the hot-side heat sink, giving a more realistic TEC model. A thermocouple and the thermal-resistance model developed for this work are shown in Figure 2.
Figure 1. (a) Single-stage TEC (b) Exploded view of TEC
Figure 2. (a) Thermoelectric couple (b) Thermal resistance model
By applying the electrical analogy of heat flow to the thermal-resistance model shown in Figure 2(b), the temperatures at the hot and cold sides of the TE elements can be expressed as
$T_{h}=Q_{h}\left(R_{hs}+R_{cr}+R_{cu}\right)+T_{a}$ (1)

$T_{c}=T_{co}-Q_{c}\left(R_{cr}+R_{cu}\right)$ (2)
where, Th and Tc are the hot and cold side temperatures (K) of n-type and p-type elements. Qh is the heat rejection rate (W) from the hot side. Qc is the heat absorption rate at the cold side (W), which is referred to as the cooling capacity in common usage. Tco and Tho are the temperatures (K) at the cold surface and hot surface of TEC, respectively. Rhs is the thermal resistance (℃/W) of the heat sink attached to the hot side of TEC, Rcr is the thermal resistance (℃/W) of the ceramic substrates, and Rcu is the thermal resistance (℃/W) of the copper tabs. Ta is the ambient temperature (K).
In the current study, some reasonable assumptions are considered.
Heat transfer is assumed to take place along the length of TE elements.
The thermoelectric elements have the same cross-section and length.
Thomson effect is not considered.
Steady-state condition is prevailing.
A constant electric current passes through the circuit of dissimilar semiconductors. Heat is pumped to one of the two sides, making one side cold and the other hot. A heat sink attached externally to the hot-side ceramic substrate dissipates heat to the ambient environment. A thermoelectric couple produces a cooling or heating effect depending on the direction of the electric current. Eq. (3) and Eq. (4) show the heat energy balance at the cold and hot sides of the thermoelectric cooler. Tc and Th correspond to the temperatures at the TE element-copper conductor interface at the cold side and hot side, respectively, and are used with the same meaning in every equation of this paper.
$Q_{c}=2 N\left[I \alpha T_{c}-\frac{k A\left(T_{h}-T_{c}\right)}{L}-\frac{1}{2} I^{2}\left(\frac{\rho L}{A}+2 \frac{r_{c}}{A}\right)\right]$ (3)
$Q_{h}=2 N\left[I \alpha T_{h}-\frac{k A\left(T_{h}-T_{c}\right)}{L}+\frac{1}{2} I^{2}\left(\frac{\rho L}{A}+2 \frac{r_{c}}{A}\right)\right]$ (4)
where the thermoelectric material properties α, ρ, and k are the Seebeck coefficient (V/K), electrical resistivity (Ω m), and thermal conductivity (W/m K), respectively. rc is the electrical contact resistance (Ω m²). L and A are the length (m) and cross-sectional area (m²) of the n-type and p-type thermoelectric elements, respectively. I is the supplied electric current (A), and N is the total number of thermoelectric couples. There are three essential terms on the right side of Eq. (3) and Eq. (4). The first terms, IαTc and IαTh, represent the Peltier heat at the cold junction and hot junction, respectively. The second term, kA(Th − Tc)/L, is the heat transfer due to thermal conduction. The third term, ½I²(ρL/A + 2rc/A), represents the Joule heat generation.
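As a concrete reading of Eqs. (3) and (4), the following C sketch evaluates both heat rates for given operating conditions; the function names and argument order are ours, not part of the original model.

```c
/* Heat absorbed at the cold side (Eq. (3)) and rejected at the hot side
   (Eq. (4)) for a TEC with N thermocouples. All arguments in SI units:
   I (A), alpha (V/K), k (W/m K), rho (ohm m), rc (ohm m^2),
   A (m^2), L (m), Tc and Th (K). */
double q_cold(int N, double I, double alpha, double k, double rho,
              double rc, double A, double L, double Tc, double Th)
{
    double joule = 0.5 * I * I * (rho * L / A + 2.0 * rc / A);
    return 2.0 * N * (I * alpha * Tc - k * A * (Th - Tc) / L - joule);
}

double q_hot(int N, double I, double alpha, double k, double rho,
             double rc, double A, double L, double Tc, double Th)
{
    double joule = 0.5 * I * I * (rho * L / A + 2.0 * rc / A);
    return 2.0 * N * (I * alpha * Th - k * A * (Th - Tc) / L + joule);
}
```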
The selection of thermoelectric materials directly affects the performance of TEC. The material properties of thermoelectric elements are temperature dependent. Bismuth telluride (Bi2Te3) is the popular thermoelectric material used in thermoelectric coolers. The material properties of Bi2Te3 used in this work are given below, as specified by Fraisse et al. [30]. Tave is the average of Tc and Th.
$\alpha=\left(22224+930.6 T_{\text {ave }}-0.9905 T_{\text {ave }}^{2}\right) \times 10^{-9}$ (5)
$\rho=\left(5112+163.4 T_{\text {ave }}+0.6279 T_{\text {ave }}^{2}\right) \times 10^{-10}$ (6)
$k=\left(62605-277.7 T_{\text {ave }}+0.4131 T_{\text {ave }}^{2}\right) \times 10^{-4}$ (7)
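The three property correlations of Eqs. (5)-(7) translate directly into code; a minimal sketch follows (the function names are ours):

```c
/* Bi2Te3 material properties from Eqs. (5)-(7) (Fraisse et al. [30]),
   evaluated at Tave = (Tc + Th)/2 in kelvin. */
double seebeck(double Tave)       /* Seebeck coefficient, V/K */
{
    return (22224.0 + 930.6 * Tave - 0.9905 * Tave * Tave) * 1e-9;
}

double resistivity(double Tave)   /* electrical resistivity, ohm m */
{
    return (5112.0 + 163.4 * Tave + 0.6279 * Tave * Tave) * 1e-10;
}

double conductivity(double Tave)  /* thermal conductivity, W/m K */
{
    return (62605.0 - 277.7 * Tave + 0.4131 * Tave * Tave) * 1e-4;
}
```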
Cooling capacity (Qc) is one of the significant performance indexes of TEC, which is used in this study. The Coefficient of Performance (COP) is another crucial performance index of thermoelectric coolers. Both performance indexes are considered in the current study. COP is the ratio of cooling capacity to power consumption and defined by the following equation.
Coefficient of Performance, $\mathrm{COP}=\frac{Q_{c}}{P}$ (8)
The input electric power (P), as shown in Figure 2(b), can be calculated by the following relationship.
Input electric power, $P=Q_{h}-Q_{c}$ (9)
A cost-competitive, high-performance TEC system will pave the way for a promising future for such green devices.
3. Performance Optimization of TEC

Various geometrical, material, and operational parameters affect the cooling performance of a thermoelectric cooler. Besides, the restricted maximum area of cooling devices, which depends on the application in electronic devices, is a significant constraint for TEC design. Performance optimization is vital to extend the use of thermoelectric coolers in real-world applications. In this study, the objective is to maximize the cooling capacity of a TEC with space restrictions. This paper presents a new approach by selecting the electric current, the length of the n-type and p-type TE elements, and the cross-sectional area of the TE elements as design variables.
3.1 Optimization of cooling capacity of TEC
The single-objective optimization problem for maximization of the cooling capacity of TEC is formulated mathematically as:
$\left\{\begin{array}{l}\text{Maximize } Q_{c} \\ \text{subject to} \\ I_{\min} \leq I \leq I_{\max} \\ L_{\min} \leq L \leq L_{\max} \\ A_{\min} \leq A \leq A_{\max}\end{array}\right.$ (10)
Further, the total number of thermoelectric couples (N) is a dependent design variable. Its value depends on the cross-sectional area of n-type and p-type thermoelectric elements and computed using Eq. (11).
$N=\frac{\text{available area } (S) \text{ of TEC} \times \text{packaging density}}{2 \times A}$ (11)
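In code, the dependent variable is a one-liner; the sketch below assumes N is truncated to an integer, since the rounding rule is not stated in the paper.

```c
/* Dependent design variable N from Eq. (11). S is the available TEC
   area (m^2), pd the packaging density, A the element area (m^2). */
int couples(double S, double pd, double A)
{
    return (int)(S * pd / (2.0 * A));
}
```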
The optimization problem stated in Eq. (10) has been solved using specific values of the parameters. Table 1 lists the values of the parameters and properties used in this work.
Table 1. Values of parameters and properties

Parameter | Value
Cold surface temperature (Tco) | 293.15 K (20℃)
Heat sink thermal resistance (Rhs) | 0.10 ℃/W
Electrical contact resistance (rc) | 1 × 10⁻⁸ Ω m²
Available C.S. area of TEC (S) |
Packaging density |
Ceramic thermal conductivity | 35.3 W/m ℃
Copper thermal conductivity | 386 W/m ℃
The design variables in the present study are constrained by lower and upper bounds. From a practical viewpoint, the ranges for the length and cross-sectional area of the n-type and p-type TE elements are taken as 1.0-2.0 mm and 1.0-2.0 mm², respectively. The range for the input electric current is taken as 0.1-3.0 A. The dependent design variable N varies from 45 to 90, as governed by Eq. (11). The thicknesses of the ceramic substrates and copper tabs are taken as 0.2 mm and 0.1 mm, respectively. The surface area of the ceramic substrate on each side is considered identical to the size of the TEC. The total surface area of the copper tabs on each side is considered 90% of the size of the TEC. Rcr and Rcu are computed as 0.025181 ℃/W and 0.001279345 ℃/W, respectively. All these values are taken with the help of TEC manufacturers' catalogues.
Genetic algorithm (GA) is an evolutionary algorithm based on natural genetics. The genetic algorithm begins with the creation of a population of possible solutions (called individuals). Based on the value of the objective function, each member of the population is assigned a fitness value. To evolve better solutions, new generations are created by undergoing selection, recombination, and mutation of solutions. The fitness of the new generation is evaluated. This cycle is repeated over generations until the stopping criterion is met. The objective of GA is to search for an appropriate solution for the design problems. This involves maximization or minimization of the objective function.
The genetic algorithm is a population-based optimization approach for finding optimal or near-optimal solutions. In terms of quality and robustness of solutions, the GA's capability has been widely recognized for providing excellent results on classic discrete and continuous optimization problems. The performance of a genetic algorithm depends on several genetic parameters, such as the population size and the crossover and mutation rates; a different combination of parameters may lead to a significant change in GA performance. A smaller population size gives faster convergence than a larger one. The various GA parameters and operators are usually selected based on recommendations made by GA researchers.
The real-variable GA employing the SBX operator created by Deb and Agrawal is used in this study [31]. Table 2 lists the values of the GA parameters (population size, crossover and mutation probabilities, and number of generations) used in the present study. The results are reported after multiple runs of the GA converged to the same best solution.
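For reference, here is a minimal sketch of SBX for a single real variable, following the standard formulation of Deb and Agrawal [31]; the distribution index eta and the variable-bound handling used in the actual study are not reported, so both are left generic here.

```c
#include <math.h>
#include <stdlib.h>

/* Simulated binary crossover (SBX) for one real variable: two parents
   p1, p2 produce two children via a spread factor beta drawn so that
   offspring are distributed around the parents (Deb & Agrawal [31]). */
void sbx(double p1, double p2, double eta, double *c1, double *c2)
{
    double u = (double)rand() / RAND_MAX;          /* u ~ U(0,1) */
    double beta = (u <= 0.5)
        ? pow(2.0 * u, 1.0 / (eta + 1.0))
        : pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta + 1.0));
    *c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2);
    *c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2);
}
```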
Table 2. Values of GA parameters

Parameter | Value
Crossover probability |
Mutation probability |
Number of generations | 1000
3.2 Optimization of Coefficient of Performance (COP) of TEC
The objective of the second optimization problem is the maximization of the coefficient of performance of TEC. The design variables are the same as those selected in the previous problem. The fixed values of the parameters and properties are identical to the values used in the previous problem and described in Table 1. The thicknesses of ceramic substrates and copper tabs have the same values of 0.2 mm and 0.1 mm, respectively. This new problem is mathematically expressed as:
$\left\{\begin{array}{l}\text{Maximize COP} \\ \text{subject to} \\ I_{\min} \leq I \leq I_{\max} \\ L_{\min} \leq L \leq L_{\max} \\ A_{\min} \leq A \leq A_{\max}\end{array}\right.$ (12)
The goal of this optimization problem is to find the design variables within the variable bounds that result in the maximum COP of the device.
3.3 Optimization procedure
To apply the genetic algorithm to the optimization problems described in Eq. (10) and Eq. (12), fitness evaluation of the solution vectors is required. However, the procedure for evaluating the fitness function is slightly tricky for this problem. The unknown values of Th and Tc are initially guessed in order to approximately estimate Qc and Qh. The initial guess for Th and Tc must satisfy the prevailing temperature conditions of the TEC, i.e., Th > Ta and Tc < Tco; in principle, these conditions must be satisfied. The initial guess is then iteratively refined until it converges. Eq. (1) and Eq. (2) are used to calculate new values of Th and Tc, termed Thn and Tcn. These are updated repeatedly until the difference between the old and new values is negligible, and then the values of Qc and Qh are accepted.
A flowchart of the GA implementation for these two optimization problems is given in Figure 3.
The brief steps of the fitness evaluation procedure for a population individual (solution vector) followed in this work are described below; a minimal code sketch of this loop follows the list.
(a) The hot-side and cold-side temperatures of the TE elements (Th and Tc) are initially assigned guessed values.
(b) The material properties are estimated using Eqs. (5), (6), and (7).
(c) The expected values of Qc and Qh are calculated using Eqs. (3) and (4).
(d) Using Eqs. (1) and (2), the new values of Th and Tc are calculated, termed Thn and Tcn, respectively.
(e) If the difference between the guessed and new values is considerable, the guesses are updated as Th = Thn and Tc = Tcn; go to step (b) and repeat the iteration.
(f) If the difference between the guessed and new values is small, the solution is accepted. The next individual in the GA population is then evaluated, until all individuals of the current generation are evaluated.
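A minimal C sketch of this fixed-point loop, reusing the property and heat-balance helpers sketched earlier; the initial guesses, tolerance, and iteration cap are ours and only illustrate the scheme.

```c
#include <math.h>

/* Steps (a)-(f): iterate Th, Tc until the resistance model (Eqs. (1)-(2))
   agrees with the element-level heat balances (Eqs. (3)-(4)). */
void fitness(int N, double I, double A, double L, double rc,
             double Ta, double Tco, double Rhs, double Rcr, double Rcu,
             double *Qc, double *Qh, double *Th, double *Tc)
{
    double th = Ta + 5.0, tc = Tco - 1.0;   /* guesses with Th > Ta, Tc < Tco */
    for (int it = 0; it < 1000; ++it) {
        double Tave = 0.5 * (th + tc);
        double a = seebeck(Tave), r = resistivity(Tave), k = conductivity(Tave);
        *Qc = q_cold(N, I, a, k, r, rc, A, L, tc, th);
        *Qh = q_hot (N, I, a, k, r, rc, A, L, tc, th);
        double thn = *Qh * (Rhs + Rcr + Rcu) + Ta;   /* Eq. (1) */
        double tcn = Tco - *Qc * (Rcr + Rcu);        /* Eq. (2) */
        if (fabs(thn - th) < 1e-6 && fabs(tcn - tc) < 1e-6) break;
        th = thn; tc = tcn;                          /* step (e) */
    }
    *Th = th; *Tc = tc;
    /* The objectives then follow: Qc for Eq. (10), and
       COP = Qc / (Qh - Qc) from Eqs. (8)-(9) for Eq. (12). */
}
```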
Figure 3. Flowchart for GA implementation
4. Results and Discussions
In the first segment of the present work, the cooling capacity Qc, the first performance index of the TEC, is maximized. The algorithm of this study is coded in the C language; the GA source code developed by Deb is used in this work [32]. Multiple runs of 1000 generations have been performed, and the best run, to which the algorithm converged several times across runs, is reported in Table 3.
Table 3. Result of GA-based optimization for maximum Qc

Quantity | Optimal value
Maximum Qc | 8.476807 W
I | 2.993 A
L | 1.0 mm (lower bound)
A | 1.607 mm²
N (dependent, from Eq. (11)) | 56
At the optimal values of the design variables, the corresponding values of Th and Tc are found to be 28.59℃ and 19.78℃, respectively. The hot surface temperature (Tho) of the TEC is 27.84℃, and the heat rejection rate (Qh) at the hot side is 28.401 W. For the maximized Qc, the resulting COP is 0.425. It can be observed that L hits its lower bound, while the other parameters take optimal values without hitting any bound of the permitted range.
To optimize the second performance index of TEC, the coefficient of performance (COP) is maximized. The boundary conditions and assumptions are similar to those considered during the optimization of Qc. This optimization problem is solved using the same parameters of GA, as mentioned in Table 2. The steps to implement GA in this problem are similar to those used in the optimization of cooling capacity and shown with the help of a flowchart in Figure 3. Several runs of 1000 generations have been performed to reach solutions with the highest quality, and the best run is reported in Table 4. It is worth mentioning that GA converged to the same results in various runs.
Table 4. Result of GA optimization for maximum COP

Quantity | Optimal value
Maximum COP | ≈ 4.12 (computed from Eqs. (8) and (9))
I | 0.283 A
L | 2.0 mm (upper bound)
A | 1.956 mm²
N (dependent, from Eq. (11)) | 46
With this maximum COP, the corresponding Qc obtained is 0.745992 W. The corresponding values of Th and Tc are 25.11℃ and 19.97℃, respectively. The hot surface temperature (Tho) of the thermoelectric cooler is 25.09℃, and the heat rejection rate (Qh) at the hot side is 0.927 W. The optimal values of the design variables I and A lie inside the permitted ranges, while the optimal value of L hits the upper bound. It can be seen that the COP increased significantly, while the cooling capacity is just 8.8% of the maximum Qc reported in Table 3. Thus the design variable L hits its lower bound for high Qc, while for high COP it hits its upper bound.
These two results establish that the maximization of Qc and the maximization of COP are obtained at different sets of design parameters; maximum Qc does not ensure optimal COP, and vice versa. This means that the objectives are conflicting. If there is no specific objective of interest, the resolution of these conflicting design objectives would be Pareto solutions obtained through multi-objective optimization. It would be useful to determine a set of such solutions, allowing the decision-maker to choose among them according to the requirements of the application.
The results of this study show that it is possible to improve the cooling capacity or COP of the thermoelectric coolers with these design variables to be competitive with compressor-based cooling devices. The complex impacts of electrical contact resistance and thermal resistance deteriorate the TEC performance. These factors always need to be included in the model for optimization and analysis.
5. Finite-Element Simulation for Result Validation
Finite-element simulation is a computational method for solving complex real-world engineering problems, and finite-element simulations are performed here to validate the optimization results of the GA. ANSYS® is a general-purpose finite-element tool used to solve a broad range of engineering problems numerically; hence it is used in the current study. The Thermal-Electric module of ANSYS® is capable of providing simultaneous solutions of the thermal and electrical fields, and the present work makes use of this module for steady-state analysis of the TEC model. A three-dimensional non-linear finite-element model is set up with one pair of n-type and p-type elements, as per the GA result. A new approach is used to incorporate the effect of electric contact resistance on the performance of the TEC: the finite-element model includes four additional geometric parts, termed 'Contact', for modelling the electric-contact-resistance effect. These parts have material properties corresponding to the thermoelectric behaviour of the electrical contact resistance, and the contact geometries are created at each end of the TE elements. The complete schematic of the TEC model for finite-element simulation to validate the GA results is shown in Figure 4.
Figure 4. Schematic of TEC for finite-element simulation to validate GA results
5.1 Finite-element simulation for maximum Qc
To validate the GA predictions for maximum Qc, the length of the n-type and p-type elements is taken as 1.0 mm, as reported in Table 3. The TE elements have a square cross-section of 1.27 mm side (giving the optimal area of about 1.61 mm²). The distance between the n-type and p-type elements is 0.31 mm. The material properties for the simulation are computed at the average (Tave) of the Th and Tc values obtained during the GA-based optimization of Qc. The finite-element simulation input parameters of the modelled TEC are given in Table 5.
To model nearly adiabatic heat transfer from the exposed surfaces of the TEC, a negligibly small convection coefficient of 0.000001 W/m²K was applied on all surfaces except the ones on which the boundary conditions mentioned in Table 5 are specified. The computationally generated mesh, the electric voltage, and the temperature distribution across the finite-element model of the thermoelectric cooler are shown in Figure 5.
Table 5. Finite-element simulation input parameters for maximum Qc

Parameter | Value per pair of TE elements
Electric current (I) | 2.993 A
Temperature (hot side of TEC) | 27.84℃
Heat absorption rate (Qc) | 0.1514 W
Figure 5. (a) Mesh (b) Voltage distribution (c) Temperature distribution in the finite-element model for maximum Qc
The parameters obtained from finite-element simulation are compared with the GA results and reported in Table 6. It is observed that the results for a single pair of TE elements from GA simulation and those obtained from finite-element simulation are in close agreement. The finite-element simulation result represents a 3-D solution based on a numerical technique, while GA results are based on 1-D analytical equations. Hence, the optimization result obtained by GA is verified through the solutions of the thermal-electric module of ANSYS®.
Table 6. Comparison of results for maximum Qc (GA vs. ANSYS®)
5.2 Finite-element simulation for maximum COP
In this segment, the finite-element simulation for maximum COP is performed with the ANSYS® software. The steady-state TEC model consists of TE elements of 2.0 mm length, as reported in Table 4. The TE elements have a square cross-section of 1.4 mm side, and the distance between the n-type and p-type elements is 0.38 mm. The temperature-dependent material properties are calculated based on the average of the Th and Tc values found during the GA-based optimization of COP. The input parameters of the TEC model for the finite-element simulation are given in Table 7.
Table 7. Finite-element simulation input parameters for maximum COP
The three-dimensional steady-state TEC model is created, and predictions of GA based optimization are tested for maximum COP. For this simulation, the mesh, electric voltage, and temperature distribution are shown in Figure 6. The finite-element simulation results agree well with the GA results. The parameters for GA and finite-element simulation results have been compared and reported in Table 8.
Figure 6. (a) Mesh (b) Voltage distribution (c) Temperature distribution in the finite-element model for maximum COP
The ANSYS® result is consistent with GA based optimization results for the maximization of COP. Hence, the optimization result is verified through the solutions of the thermal-electric module of ANSYS®.
Table 8. Comparison of results for maximum COP
6. Conclusions

This paper presents an effective method with a new analytical model to improve the cooling capacity and coefficient of performance of a thermoelectric cooler for a specific need. In order to analyze more than one factor simultaneously, the current and the geometric parameters of the thermoelectric cooler were treated as variables. The study emphasized finding the optimal values of the current, the length of the n-type and p-type TE elements, and the cross-sectional area of the TE elements within restrictions on space. It was found that the length and cross-sectional area of the thermoelectric elements and the input electric current have a great influence on TEC performance. Performance optimizations to maximize the cooling capacity and to maximize the COP were successfully performed by the genetic algorithm. The use of this stochastic optimization algorithm based on the theory of natural genetics proved to be the right option; the genetic algorithm converged to the same optimal results over several runs. The finite-element simulations through ANSYS® validated the GA results.
The work suggests that these design variables should be appropriately selected in practical applications. The results revealed that the relationship between the coefficient of performance and the cooling capacity is inverse: maximum cooling capacity does not provide optimum COP, and vice versa. A smaller length of the thermoelectric elements facilitates maximum cooling capacity, whereas a greater element length yields the maximum coefficient of performance. The best performance requires specific values of the electric current and cross-sectional area of the TE elements as per the objective requirements. Appropriate optimum results can be achieved for any space restriction. This study can guide TEC designers working toward specific cooling targets. The use of microprocessor-based control of the input power parameters to obtain optimal cooling with the best possible COP under dynamic conditions needs to be explored.
Nomenclature

A	cross-sectional area of TE elements, m²
COP	coefficient of performance
I	electric current, A
L	length of thermoelectric element, m
N	number of thermoelectric couples
P	power input, W
Qc	heat absorption rate at the cold side, W
Qh	heat rejection rate from the hot side, W
Rhs	thermal resistance of heat sink, ℃/W
Rcr	thermal resistance of ceramic, ℃/W
Rcu	thermal resistance of copper, ℃/W
S	available cross-sectional area of TEC, m²
Tc	temperature at the cold side of elements, K
Th	temperature at the hot side of elements, K
Tco	temperature at the cold surface of TEC, K
Tho	temperature at the hot surface of TEC, K
Ta	ambient temperature, K
Tave	average of Tc and Th, K
Z	figure of merit, 1/K
α	Seebeck coefficient, V/K
ρ	electrical resistivity, Ω m
[1] Rowe, D.M. (1995). CRC Handbook of Thermoelectrics. CRC Press. https://doi.org/10.1201/9781420049718
[2] Hicks, L.D., Dresselhaus, M.S. (1993). Effect of quantum-well structures on the thermoelectric figure of merit. Physical Review B, 47: 12727. https://doi.org/10.1103/PhysRevB.47.12727
[3] Hicks, L.D., Dresselhaus, M.S. (1993). Thermoelectric figure of merit of a one-dimensional conductor. Physical Review B, 47: 16631. https://doi.org/10.1103/PhysRevB.47.16631
[4] Venkatasubramanian, R., Siivola, E., Colpitts, T., O'Quinn, B. (2001). Thin-film thermoelectric devices with high room-temperature figures of merit. Nature, 413: 597-602. https://doi.org/10.1038/35098012
[5] Su, C.H. (2019). Design, growth and characterization of PbTe-based thermoelectric materials. Progress in Crystal Growth and Characterization of Materials, 65(2): 47-94. https://doi.org/10.1016/j.pcrysgrow.2019.04.001
[6] Tan, G., Zhao, L.D., Kanatzidis, M.G. (2016). Rationally designing high-performance bulk thermoelectric materials. Chemical Reviews, 116(19): 12123-12149. https://doi.org/10.1021/acs.chemrev.6b00255
[7] Poudel, B., Hao, Q., Ma, Y., Lan, Y., Minnich, A., Yu, B., Yan, X., Wang, D.Z., Muto, A. (2008). High-thermoelectric performance of nanostructured bismuth antimony telluride bulk alloys. Science, 320(5876): 634-8. https://doi.org/10.1126/science.1156446
[8] Chen, G., Dresselhaus, M.S., Dresselhaus, G., Fleurial, J.P., Caillat, T. (2003). Recent developments in thermoelectric materials. International Materials Reviews, 48(1): 45-66. https://doi.org/10.1179/095066003225010182
[9] Sootsman, J.R., Chung, D.Y., Kanatzidis, M.G. (2009). New and old concepts in thermoelectric materials. Angewandte Chemie - International Edition, 48(46): 8616-8639. https://doi.org/10.1002/anie.200900598
[10] Alam, H., Ramakrishna, S. (2013). A review on the enhancement of figure of merit from bulk to nano-thermoelectric materials. Nano Energy, 2(2): 190-212. https://doi.org/10.1016/j.nanoen.2012.10.005
[11] Völklein, F., Min, G., Rowe, D.M. (1999). Modelling of a microelectromechanical thermoelectric cooler. Sensors and Actuators, A: Physical, 75(2): 95-101. https://doi.org/10.1016/S0924-4247(99)00002-3
[12] Yu, J., Wang, B. (2009). Enhancing the maximum coefficient of performance of thermoelectric cooling modules using internally cascaded thermoelectric couples. International Journal of Refrigeration, 32(1): 32-39. https://doi.org/10.1016/j.ijrefrig.2008.08.006
[13] Abramzon, B. (2007). Numerical optimization of the thermoelectric cooling devices. Journal of Electronic Packaging, 129(3): 339-347. https://doi.org/10.1115/1.2753959
[14] Pan, Y., Lin, B., Chen, J. (2007). Performance analysis and parametric optimal design of an irreversible multi-couple thermoelectric refrigerator under various operating conditions. Applied Energy, 84(9): 882-892. https://doi.org/10.1016/j.apenergy.2007.02.008
[15] Cheng, Y.H., Lin, W.K. (2005). Geometric optimization of thermoelectric coolers in a confined volume using genetic algorithms. Applied Thermal Engineering, 25(17-18): 2983-2997. https://doi.org/10.1016/j.applthermaleng.2005.03.007
[16] Huang, Y.X., Wang, X.D., Cheng, C.H., Lin, D.T.W. (2013). Geometry optimization of thermoelectric coolers using simplified conjugate-gradient method. Energy, 59: 689-697. https://doi.org/10.1016/j.energy.2013.06.069
[17] Yang, R., Chen, G., Kumar, A.R., Snyder, G.J., Fleurial, J.P. (2005). Transient cooling of thermoelectric coolers and its applications for microdevices. Energy Conversion and Management, 46(9-10): 1407-1421. https://doi.org/10.1016/j.enconman.2004.07.004
[18] Nain, P.K.S., Sharma, S., Giri, J.M. (2010). Non-dimensional multi-objective performance optimization of single stage thermoelectric cooler. Lecture Notes in Computer Science, 404-413. https://doi.org/10.1007/978-3-642-17298-4_44
[19] Shen, L., Zhang, W., Liu, G., Tu, Z., Lu, Q., Chen, H., Huang, J.Q. (2020). Performance enhancement investigation of thermoelectric cooler with segmented configuration. Applied Thermal Engineering, 168: 114852. http://doi.org/10.1016/j.applthermaleng.2019.114852
[20] Nain, P.K.S., Giri, J.M., Sharma, S., Deb, K. (2010). Multi-objective performance optimization of thermo-electric coolers using dimensional structural parameters. Lecture Notes in Computer Science, pp. 607-614. https://doi.org/10.1007/978-3-642-17563-3_71
[21] Jeong, E.S. (2014). A new approach to optimize thermoelectric cooling modules. Cryogenics, 59: 38-43. https://doi.org/10.1016/j.cryogenics.2013.12.003
[22] Lee, H.S. (2013). Optimal design of thermoelectric devices with dimensional analysis. Applied Energy, 106: 79-88. https://doi.org/10.1016/j.apenergy.2013.01.052
[23] Fabián-Mijangos, A., Min, G., Alvarez-Quintana, J. (2017). Enhanced performance thermoelectric module having asymmetrical legs. Energy Conversion and Management, 148: 1372-1381. https://doi.org/10.1016/j.enconman.2017.06.087
[24] Göktun, S. (1996). Optimal performance of a thermoelectric refrigerator. Energy Sources, 18(5): 531-536. https://doi.org/10.1080/00908319608908788
[25] Cheng, Y.H., Shih, C. (2005). Application of genetic algorithm to maximizing the cooling capacity in a thermoelectric cooling system. Proceedings of the IEEE International Conference on Industrial Technology, Hong Kong, China. https://doi.org/10.1109/ICIT.2005.1600648
[26] Thiébaut, E., Goupil, C., Pesty, F., D'Angelo, Y., Guegan, G., Lecoeur, P. (2017). Maximization of the thermoelectric cooling of a graded Peltier device by analytical heat-equation resolution. Physical Review Applied, 8(6). http://doi.org/10.1103/PhysRevApplied.8.064003
[27] Seifert, W., Pluschke, V. (2014). Maximum cooling power of a graded thermoelectric cooler. Physica Status Solidi (B) Basic Research, 251(7): 1416-1425. https://doi.org/10.1002/pssb.201451038
[28] Erturun, U., Erermis, K., Mossi, K. (2014). Effect of various leg geometries on thermo-mechanical and power generation performance of thermoelectric devices. Applied Thermal Engineering, 73(1): 128-141. https://doi.org/10.1016/j.applthermaleng.2014.07.027
[29] Chen, L., Li, J., Sun, F., Wu, C. (2007). Optimum allocation of heat transfer surface area for heating load and COP optimisation of a thermoelectric heat pump. International Journal of Ambient Energy, 28(4): 189-196. https://doi.org/10.1080/01430750.2007.9675043
[30] Fraisse, G., Ramousse, J., Sgorlon, D., Goupil, C. (2013). Comparison of different modeling approaches for thermoelectric elements. Energy Conversion and Management, 65: 351-356. https://doi.org/10.1016/j.enconman.2012.08.022
[31] Deb, K., Agrawal, R.B. (1994). Simulated binary crossover for continuous search space. Complex Systems, 9(2).
[32] Deb, K. (2001). Single-objective GA code in C. https://www.iitk.ac.in/kangal.
Continuous functions on closed, bounded intervals
Let's start off today with a lemma - those useful little building blocks of proper theorems and their proofs. If you took the exam for Analysis 1, Fall 2012, this will look familiar:
Lemma: If $g : A \to \mathbb{R}$ is continuous at $c \in A$ and $g(c) \neq 0$, then there exists a neighborhood of $c$ for which $g(x) \neq 0$ on that neighborhood.
Proof: $g(c) \neq 0$, so let $\epsilon = \frac{|g(c)|}{2}$. Since $g$ is continuous at $x=c$, $\exists \delta > 0$ such that $|x - c| < \delta$ implies $|g(x) - g(c)| < \epsilon$. So $|g(c)| - |g(x)| \leq |g(c) - g(x)| < \frac{|g(c)|}{2}$. Rearranging, we see that $|g(c)| - \frac{|g(c)|}{2} < |g(x)|$, thus $0 < \frac{|g(c)|}{2} < |g(x)|$. So on the $\delta$-neighborhood of $c$, $g(x) \neq 0$.
We can also get the following corollary out of this:
Corollary: If $g : A \to \mathbb{R}$ is continuous at $c \in A$ and $g(c) \neq 0$, then $1 / g(x)$ is continuous at $c$.
Proof: From the proof of the lemma, there exists a $\delta > 0$ such that $|g(x)| > \frac{|g(c)|}{2}$ for all $x$ in the $\delta$-neighborhood of $c$. Thus, $\frac{1}{|g(x)|} < \frac{2}{|g(c)|}$ for those $x$, and hence:

$$ \left|\frac{1}{g(x)} - \frac{1}{g(c)}\right| = \frac{|g(c) - g(x)|}{|g(x)||g(c)|} \leq \frac{2 \cdot |g(x) - g(c)|}{|g(c)|^2} $$
But $g$ is continuous at $c$, so for a given $\epsilon > 0$, there exists a $\delta_1 > 0$ such that if $|x - c| < \delta_1$, then $|g(x) - g(c)| < \frac{|g(c)|^2}{2}\epsilon$. Thus, if we choose a $\delta_2 := \inf\{\delta, \delta_1\}$ and set $|x - c| < \delta_2$, then:
$$ \left|\frac{1}{g(x)} - \frac{1}{g(c)}\right| \leq \frac{2 \cdot |g(x) - g(c)|}{|g(c)|^2} < \epsilon $$
Thus completing our proof.
1. More combinations of continuous functions
Now let's talk about functions that are everywhere continuous. It's easy to see that $f(x) = x$ is a continuous function, and as we've seen, the product of a continuous function with a scalar or with another continuous function is continuous. Additionally, the sum of continuous functions is continuous. Therefore polynomials are continuous everywhere, since you can construct them from $f(x) = x$ and some coefficients. Rational functions are also continuous everywhere on their domain.
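For instance, any particular polynomial decomposes into these building blocks:

$$ p(x) = 3x^2 - 2x + 1 = 3\,(x \cdot x) + (-2)\,x + 1, $$

a sum of scalar multiples of products of the continuous function $f(x) = x$ and constant functions, hence continuous on all of $\mathbb{R}$.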
What about the composition of continuous functions? It turns out these are also continuous, which we'll state and prove as follows:
Theorem: Given $A, B \subseteq \mathbb{R}$, let $f : A \to \mathbb{R}$ and $g : B \to \mathbb{R}$ be functions where $f(A) \subseteq B$. If $f$ is continuous at $c \in A$ and $g$ is continuous at $f(c)$ then $g \circ f : A \to \mathbb{R}$ is continuous at $c$.
Proof: $g$ is continuous at $b = f(c)$, so for a given $\epsilon > 0$ there exists a $\delta > 0$ such that $|y - b| < \delta$ implies $|g(y) - g(b)| < \epsilon$. $f$ is continuous at $c$, so there exists a $\delta_1 > 0$ such that $|x - c| < \delta_1$ implies $|f(x) - f(c)| < \delta$. Combining the two, if $|x - c| < \delta_1$ then $|g(f(x)) - g(f(c))| < \epsilon$, thus completing our proof.
2. Closed, bounded intervals
Now we give a definition that will be useful (and is quite obvious):
Definition: A function $f : A \to \mathbb{R}$ is said to be bounded on $A$ if there exists some $M > 0$ such that $|f(x)| \leq M$ for all $x \in A$.
In general, a continuous function on an interval need not be bounded, for example the function $1/x$ on $(0,1)$. However, a continuous function on a closed, bounded interval is bounded on that interval:
Theorem: Let $I = [a, b]$ be a closed, bounded interval and $f : I \to \mathbb{R}$ be a continuous function on $I$. Then $f$ is bounded on $I$.
Proof: Suppose $f$ is not bounded. Then for all $M \in \mathbb{N}$ there exists some $x_M \in I$ such that $|f(x_M)| > M$. Thus, $(x_M)$ is a sequence in $I$. But $I$ is a closed and bounded interval, so by Bolzano-Weierstrass it has a convergent subsequence $(x_{M_k})$ that converges to some $x_0 \in I$. As $f$ is continuous on $I$, $f(x_{M_k}) \to f(x_0)$, i.e. $(f(x_{M_k}))$ is convergent, and convergent sequences are bounded. But $|f(x_{M_k})| > M_k \to \infty$, so this sequence is unbounded, a contradiction. Hence $f$ is bounded on $I$.
This is an important result to know as it is used often. And now for another definition:
Definition: Let $A \subseteq \mathbb{R}$, $f : A \to \mathbb{R}$. We say $f$ has an absolute maximum on $A$ if there exists $c \in A$ such that $f(c) \geq f(x)$ for all $x \in A$. Similarly, $f$ has an absolute minimum on $A$ if there exists $d \in A$ such that $f(d) \leq f(x)$ for all $x \in A$.
This is a pretty intuitive definition and doesn't require much thought. Now let's make use of this new definition in the following theorem:
Theorem: Let $I = [a, b]$ be a closed, bounded interval, and $f : I \to \mathbb{R}$ be continuous on $I$. Then $f$ achieves both an absolute maximum and an absolute minimum on $I$.
Proof: By a previous theorem, we know there exists $M > 0$ such that $|f(x)| \leq M$, so $f(x) \leq |f(x)| \leq M$ for all $x \in I$. Thus the set $f(I) = \{f(x) : x \in I\}$ is bounded above, and therefore has a supremum $S$. Another way of saying this is that for each $n \in \mathbb{N}$, $S - 1/n$ is not an upper bound, so we can create a sequence $(x_n)$ such that $S - 1/n < f(x_n) \leq S$. By Bolzano-Weierstrass, there is a convergent subsequence $(x_{n_k})$ that converges to some $x_0 \in I$. Since $f$ is continuous, $f(x_{n_k}) \to f(x_0)$, and applying the squeeze theorem to $S - 1/n_k < f(x_{n_k}) \leq S$ we know that $f(x_0) = S$, giving us the absolute maximum. The proof for a minimum follows similarly.
Principally polarized squares of elliptic curves with field of moduli equal to $\mathbb Q$
Alexandre Gélin, Everett W. Howe and Christophe Ritzenthaler
DOI: 10.2140/obs.2019.2.257
We give equations for 13 genus-2 curves over $\overline{\mathbb{Q}}$, with models over $\mathbb{Q}$, whose unpolarized Jacobians are isomorphic to the square of an elliptic curve with complex multiplication by a maximal order. If the generalized Riemann hypothesis is true, there are no further examples of such curves. More generally, we prove under the generalized Riemann hypothesis that there exist exactly 46 genus-2 curves over $\overline{\mathbb{Q}}$ with field of moduli $\mathbb{Q}$ whose Jacobians are isomorphic to the square of an elliptic curve with complex multiplication by a maximal order.
genus-2 curves, abelian varieties, polarizations, fields of moduli, complex multiplication
Primary: 11G15
Secondary: 14H25, 14H45
Revised: 15 September 2018
Alexandre Gélin
Laboratoire de Mathématiques de Versailles
Université de Versailles Saint-Quentin-en-Yvelines
Centre national de la recherche scientifique
Université Paris-Saclay
Everett W. Howe
Center for Communications Research
Institute for Defense Analyses
Christophe Ritzenthaler
Institut de recherche mathématique de Rennes
Université de Rennes 1
Campus de Beaulieu
Characterization Study of Detector Module with Crystal Array for Small Animal PET: Monte Carlo Simulation
Baek, Cheol-Ha (Department of Radiological Science, Dongseo University)
The aim of this study is to perform simulations to design a detector module with a crystal array by Monte Carlo simulation. For this purpose, a small-animal PET scanner employing modules with a 1-8 crystal-array discrimination scheme was designed. The proposed scanner has an inner diameter of 100 mm with detector modules in a crystal array. Each module is composed of a 5.0 mm LSO crystal with a 2.0 × 2.0 mm² sensitive area, a pitch of 2.1 mm, and a thickness of 10.0 mm. The LSO crystals are attached to SiPMs with a dimension of 2.0 × 2.0 mm². The detector module with the crystal array of the designed PET detector was simulated using the Monte Carlo code GATE (Geant4 Application for Tomographic Emission). The detector sufficiently compensates for the loss of data in the sinogram due to gaps between modules. The results showed that high sensitivity was obtained and the missing-data problem was effectively reduced by using the detector module with a 1-crystal array.
Small Animal PET System;Monte Carlo Simulation;GATE Code
Bio-fertilizer Affects Structural Dynamics, Function, and Network Patterns of the Sugarcane Rhizospheric Microbiota
Qiang Liu1,2,
Ziqin Pang1,2,3,4,
Zuli Yang6,
Fallah Nyumah1,2,3,4,
Chaohua Hu1,
Wenxiong Lin3,4 &
Zhaonian Yuan1,2,5
Microbial Ecology (2021)
Fertilizers, and the microbial communities that determine fertilizer efficiency, are key to sustainable agricultural development. Sugarcane is an important sugar cash crop in China, and using bio-fertilizers is important for the sustainable development of China's sugar industry. However, the effects of bio-fertilizers on the sugarcane soil microbiota have rarely been studied. In this study, the effects of bio-fertilizer application on rhizosphere soil physicochemical indicators and on the microbial community composition, function, and network patterns of sugarcane were examined using a high-throughput sequencing approach. The experimental design was as follows: CK: urea application (57 kg/ha); CF: compound fertilizer (450 kg/ha); BF1: bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea); and BF2: bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea). The results showed that the bio-fertilizer was effective in increasing sugarcane yield by 3-12% compared to the CF treatment group, while reducing soil acidification, changing the diversity of fungi and bacteria, and greatly altering the composition and structure of the inter-root microbial community. Variance partitioning analysis (VPA) showed that soil physicochemical variables explained 80.09% and 73.31% of the variation in bacteria and fungi, respectively. Redundancy analysis and a correlation heatmap showed that soil pH, total nitrogen, and available potassium were the main factors influencing bacterial community composition, while total soil phosphorus, available phosphorus, pH, and available nitrogen were the main drivers of fungal communities. Volcano plots showed that using bio-fertilizers contributed to the accumulation of more beneficial bacteria at the sugarcane rhizosphere level and to the decline of pathogenic bacteria (e.g., Leifsonia), which may slow down or suppress the occurrence of diseases. Linear discriminant analysis (LDA) effect size (LEfSe) was used to search for biomarkers under the different fertilizer treatments. Meanwhile, a support vector machine (SVM) assessed the importance of the microbial genera contributing to the variability between fertilizers; of particular interest between CF and BF2, compared to the other genera contributing to the variability, were the bacteria Anaerolineace, Vulgatibacter, and Paenibacillus and the fungi Cochliobolus, Sordariales, and Dothideomycetes. Network analysis (co-occurrence networks) showed that the network structure under bio-fertilizer was closer to the network characteristics of healthy soils, indicating that bio-fertilizers can improve soil health to some extent; therefore, if bio-fertilizers can be used as an alternative to chemical fertilizers in the future, this will be important for achieving green soil development and improving the climate.
Increasing population numbers are putting tremendous pressure on global food demand and land productivity [1, 2]. Soil fertility degradation has been a key agricultural concern [3, 4]. Overuse of chemical fertilizers in some agricultural growing areas, especially over-reliance on nitrogen fertilizers, has led to an imbalance in the nutrient structure of the fertilizer supply and a decrease in fertilizer utilization [5, 6]. Such unreasonable agronomic measures lead to soil nutrient imbalance, gradual decline of crop growth, reduction of soil organic matter content, destruction of the soil aggregate structure, and reduced activity of the soil microorganisms that are closely related to plant growth [7,8,9]. In addition, intensive agricultural practices characterized by high levels of chemical fertilizers and pesticides can alter soil biology by disrupting biological interactions. Such measures may lead to the rapid development of soil-borne diseases through imbalances in the below-ground microbiosphere caused by the proliferation of harmful soil microorganisms, including plant-pathogenic fungi and bacteria. In this context, the development of new bio-fertilizers will bring a fresh turn in agricultural production, and modern agriculture has increasingly focused on using bio-fertilizers as alternatives to chemical fertilizers. Numerous studies have shown that the application of bio-fertilizers can inhibit the development of soil-borne diseases by reshaping the plant rhizosphere microbiota and promoting the secretion of related chemicals such as carbohydrates, amino acids, organic acids, proteins, and enzymes [10, 11]. Indoor cultivation trials by Dong et al. showed that soil and microorganisms under bio-fertilizer treatment were significantly more resistant to pathogenic bacteria than those treated with chemical fertilizers after inoculation with Ralstonia solanacearum [12]. The study by Zhang et al. also showed that using a Trichoderma bio-fertilizer can increase soil antifungal compounds, which was speculated to suppress pathogenic bacteria and to be an important reason for increased grass biomass [13]. It has also been shown that the application of bio-fertilizers improves soil organic matter content, pH, and soil microbial activity and diversity more than the application of chemical fertilizers alone [14]. Most of these studies have focused on model crops or indoor cultivation conditions, and the response of rhizosphere microorganisms to bio-fertilizer under real production and field conditions remains elusive.
Soil is a highly complex ecosystem in which different microorganisms play different roles [15, 16]. Plants are immersed in a sea of microorganisms from the moment they are planted, and evolution has equipped them to recruit partner microorganisms that help them withstand adversity [17]. Plant growth-promoting bacteria (PGPB) and plant growth-promoting fungi (PGPF) can work hand in hand with plants [18]. Meanwhile, soil microbes are sensitive to environmental stresses and play an important role in fertilizer nutrient conversion. The importance of rhizosphere microbes, as neighbours of plant roots, for plant health and growth cannot be overstated [15, 20]. Rhizosphere microbial communities can promote the growth of above-ground plant tissues by enhancing adaptation to environmental stresses, improving nutrient acquisition, and improving plant metabolic functions. Singh et al. demonstrated the defence response of a rhizosphere microbial community consisting of Pseudomonas (PHU094), Trichoderma (THU0816), and Rhizobium (RL091) strains to specific biotic stresses in chickpea [21]. Yi et al. showed that plants can defend themselves against attack through self-protection mechanisms that recruit beneficial plant growth-promoting rhizobacteria/fungi [22]. Furthermore, Solanki et al. reported that, in intercropping systems, abundant beneficial diazotrophs in the plant rhizosphere can promote plant growth and act as effective biological inoculants to sustain sugarcane production, and that exploiting rhizosphere microbes can provide an excellent means of reducing the overuse of chemical fertilizers [5]. Breakthroughs in the study of rhizosphere microbial communities will open the door to microbial regulation of plant growth and metabolism. With the increasing exploration of soil microbial potential and the deepening of the concept of sustainable development, green and healthy bio-fertilizers will become a preferred choice for agricultural production. The objectives of our study were (a) to investigate the relationship between changes in the sugarcane rhizosphere microbial community and different fertilizer application regimes and to reveal the correlation between soil microbial composition and soil chemical properties, (b) to determine the network characteristics of microorganisms under different fertilizers, and (c) to determine the contribution of bio-fertilizer application to sustainable agriculture.
Plant Materials and Fertilizers
The sugarcane variety FN41 was obtained from the sugarcane experiment site of Fujian Agriculture and Forestry University. Chemical fertilizer was purchased from Meishan Xindu Chemical Compound Fertilizer Co., Ltd., with a total nutrient content (N-P2O5-K2O, 15-15-15) of ≥ 45%. The bio-fertilizer is a compound microbial fertilizer provided by Jiangyin Lianye Biology Co., Ltd. and developed by Nanjing Agricultural University. It was produced by inoculating Bacillus amyloliquefaciens T-5 [23] into a mixture of rapeseed meal and chicken manure composts for solid fermentation. The properties of the bio-fertilizer were (N + P2O5 + K2O) = 8%, viable bacteria ≥ 20 million/g, and organic matter ≥ 20%. The amount of fertilizer applied to each experimental plot was determined with a fertilizer application calculation tool (version 1.1).
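The calculation tool itself is not described further, but the underlying arithmetic is simply a conversion from application rate to plot area. A minimal sketch in R, with our own helper names and plot dimensions taken from the experimental description below:

```r
# Convert application rates (kg/ha) to per-plot amounts (kg), assuming each
# plot covers 5 rows x 1.2 m row spacing x 25 m row length = 150 m^2.
plot_area_m2 <- 5 * 1.2 * 25
rate_to_plot_kg <- function(rate_kg_ha, area_m2 = plot_area_m2) {
  rate_kg_ha * area_m2 / 10000  # 1 ha = 10,000 m^2
}
rates <- c(CK_urea = 57, CF = 450, BF1_bio = 1500, BF2_bio = 2250)
round(rate_to_plot_kg(rates), 2)  # kg of product per 150-m^2 plot
```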
Experimental Description and Soil Samples
A field experiment was conducted at the Sugarcane Research Station in Xingbin District, Guangxi, China, from March 7, 2017 to December 20, 2017. The climate is subtropical monsoon, with an annual average temperature of 20–22 °C and annual precipitation of 1300–1350 mm. Pre-test soil samples were collected on March 1, 2017, stored on ice, and transported to the laboratory, where determination of physicochemical properties began immediately. The properties were as follows: pH 4.82, soil organic carbon (SOC) 17.50 g kg−1, total nitrogen (TN) 1.29 g kg−1, available potassium (AK) 54.16 mg kg−1, and available phosphorus (AP) 45.19 mg kg−1. The treatments were as follows: CK, urea application (57 kg/ha); CF, compound fertilizer (450 kg/ha); BF1, bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea); and BF2, bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea). Fertilizer was applied in two periods: the first application at the seedling stage (March 10, 2017), accounting for 40% of the total, and the second at the elongation stage (July 10, 2017), accounting for 60%. The field experiment was laid out in a randomized block design; each plot contained 5 rows with a row spacing of 1.2 m and a row length of 25 m. Sugarcane yield and sugar content were evaluated, and soil samples were collected, during the maturity period. Nine soil cores from one field plot were pooled into one sample [24], giving a total of 12 field plot samples (four fertilization treatments × three replicates). All samples were placed individually in sterile bags, sent to the laboratory, and stored at − 20 °C; after each sample collection, the tools used were disinfected with an alcohol wipe. The samples were sieved through a 2-mm mesh, thoroughly homogenized, and divided into two parts. One portion was stored at 4 °C, from which a sufficient amount of soil was air-dried naturally for the determination of soil physical and chemical properties, while the other portion was stored at − 20 °C for DNA extraction.
Determination of Soil Physicochemical and Sugarcane Yield Indicators
Soil pH was measured with a glass electrode at a soil-to-water ratio of 1:2.5, and soil total nitrogen (TN) was determined with an elemental analyzer (Thermo Scientific™, Waltham, MA, USA). Soil available phosphorus (AP) was extracted with sodium bicarbonate and determined by the molybdenum blue method. Available nitrogen (AN) and available potassium (AK) were determined by the alkaline hydrolysis diffusion method and the flame photometric method, respectively. In addition, soil organic carbon (SOC) was determined using the 0.8 mol/L K2Cr2O7 redox titration method. All soil physicochemical properties were determined according to Bao [25]. Stem height and diameter were measured on 30 randomly selected sugarcane plants per plot using a tape measure and a Vernier caliper. The number of effective stems per unit area was extrapolated from the count of effective stems in a 1.2 × 2.5 m area. Sucrose content was measured with an Extech Portable Sucrose Brix Refractometer (Mid-State Instruments, San Luis Obispo, CA, USA) and calculated using the formula: sucrose (%) = Brix (%) × 1.0825 − 7.703 [26]. The theoretical yield of sugarcane was estimated using the following equations:
$$\text{(a)}\quad \text{single stalk weight (kg)} = \frac{\left(\text{stalk diameter (cm)}\right)^{2} \times \left(\text{stalk height (cm)} - 30\right) \times 1\ \text{g/cm}^{3} \times 0.7854}{1000}$$

$$\text{(b)}\quad \text{theoretical production (kg/hm}^{2}\text{)} = \text{single stalk weight (kg)} \times \text{number of productive stems per hm}^{2}$$
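For concreteness, a minimal sketch of Eqs. (a) and (b) and the Brix conversion in R; the function and argument names are ours, and the input values in the example are made up:

```r
# Eq. (a): stalk volume approximated as a cylinder (pi/4 = 0.7854) with a
# 30-cm deduction in height, density 1 g/cm^3, converted from g to kg.
single_stalk_weight <- function(diameter_cm, height_cm) {
  diameter_cm^2 * (height_cm - 30) * 1 * 0.7854 / 1000
}
# Eq. (b): theoretical production in kg per hm^2.
theoretical_yield <- function(diameter_cm, height_cm, stems_per_hm2) {
  single_stalk_weight(diameter_cm, height_cm) * stems_per_hm2
}
# Refractometer conversion: sucrose (%) = Brix (%) x 1.0825 - 7.703.
brix_to_sucrose <- function(brix_pct) brix_pct * 1.0825 - 7.703

theoretical_yield(diameter_cm = 2.8, height_cm = 300, stems_per_hm2 = 60000)
brix_to_sucrose(20)  # ~13.9 % sucrose
```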
Soil DNA Extraction, PCR Amplification, and Sequencing
DNA was extracted from the experimental soil using the PowerSoil DNA Isolation Kit (MoBio Laboratories Inc., Carlsbad, USA) according to the manufacturer's instructions. The quantity and quality of the DNA extracts were analyzed using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Waltham, MA, USA), and the DNA was stored at − 80 °C until further analysis [12]. 16S rRNA and 18S rRNA gene fragments were amplified using the primer pairs 338F (5′-ACTCCTACGGGAGGCAGCAG-3′)/806R (5′-GGACTACHVGGGTWTCTAAT-3′) [27] and SSU0817F (5′-TTAGCATGGAATAATRRAATAGGA-3′)/SSU1196R (5′-TCTGGACCTGGTGAGTTTCC-3′) [28], respectively. The amplification conditions were 95 °C for 3 min, followed by 35 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s, with a final extension at 72 °C for 10 min (GeneAmp 9700, ABI, CA, USA). PCR reactions were performed in triplicate in a 20-μL mixture containing 2 μL of 2.5 mM deoxyribonucleoside triphosphates (dNTPs), 4 μL of 5 × FastPfu buffer, 0.4 μL of FastPfu polymerase, 0.4 μL of each primer (5 μM), and 10 ng of template DNA [29]. Amplicons were extracted using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) and quantified with QuantiFluor™-ST (Promega, Madison, WI, USA). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 250 bp) on an Illumina MiSeq platform (Majorbio, Shanghai) according to standard protocols. The UPARSE standard pipeline was used to analyze the sequence data [30]. Briefly, short reads (< 250 bp) were filtered out prior to downstream analysis [31]. Sequences with ≥ 97% similarity were clustered into OTUs, and taxonomic assignment was performed against the RDP database (http://rdp.cme.msu.edu/). All sequences were deposited in the NCBI Sequence Read Archive under accession number PRJNA682545.
For subsequent analyses, an equal number of sequences (the minimum sample depth) was extracted at random from each sample before calculating alpha diversity indices. The significance of differences in soil nutrients and sugarcane yield indicators was tested using DPS software with the LSD test (P < 0.05). Box plots of alpha diversity indices, species composition plots, Venn diagrams, and correlation heatmaps (Spearman correlation) were produced in R (3.5.2). Differential abundance analysis (DESeq2) and variance partitioning analysis (VPA) were also calculated and visualized in R [32, 33]. Bray–Curtis distances were calculated with the vegdist function of the vegan package in R (3.4.0). Non-parametric multivariate analysis of variance (PERMANOVA) was performed with the adonis function of the vegan package, based on the Bray–Curtis distances. Support vector machine (SVM) analysis proceeded as follows: relative abundance data were first log-transformed, then corrected within the matrix, and the analysis was completed on the Wekemo Bioinformatics cloud platform (https://bioincloud.tech) [34]. Co-occurrence networks were constructed using R (version 4.0.3) and Cytoscape (3.6.1), and network structure parameters such as mean degree and clustering coefficient were calculated using UCINET (version 6.186) [35]. Bacterial functions were predicted with PICRUSt based on the KEGG functional database, and fungi were annotated using the FUNGuild database [36, 37].
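A minimal sketch of the rarefaction, alpha diversity, and PERMANOVA steps with vegan; the input file name and the construction of the `group` factor are hypothetical stand-ins for the study's data:

```r
library(vegan)

otu <- read.table("otu_table.txt", header = TRUE, row.names = 1)  # samples x OTUs
otu <- rrarefy(otu, min(rowSums(otu)))        # rarefy to the minimum sample depth

shannon  <- diversity(otu, index = "shannon") # Shannon index per sample
richness <- estimateR(otu)                    # Sobs, Chao1, and ACE estimators

bray  <- vegdist(otu, method = "bray")        # Bray-Curtis distances
group <- factor(rep(c("CK", "CF", "BF1", "BF2"), each = 3))
adonis2(bray ~ group, permutations = 999)     # PERMANOVA (Adonis test)
```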
Sugarcane Yield Index and Soil Nutrient Variability
Compared to the CF treatment, the yield per hectare of FN41 sugarcane increased by 3–12% in the bio-fertilizer-amended soils (BF1 and BF2). Furthermore, compared to CK, the BF1, BF2, and CF treatments significantly increased (P < 0.05) plant height, stem weight, and effective stem number. However, sugarcane stem diameter under the CF, BF1, and BF2 treatments showed no significant difference from CK (Table 1). Compared with the CK and CF treatments, soil pH was significantly higher (P < 0.05) in both BF1 and BF2, whereas CF significantly reduced soil pH compared with CK. Soil organic carbon and available phosphorus were not significantly affected by any treatment compared to CK. Compared to CK, soil total nitrogen was significantly higher (P < 0.05) in both BF1 and BF2, whereas soil available nitrogen did not change considerably among treatments. The contents of total nitrogen, available nitrogen, total phosphorus, and available potassium increased by about 13.8–33.8%, 12.6–25.0%, 43.8–56.3%, and 97.4–169.5%, respectively, with the increase in the BF1 treatment group being the most pronounced (Table 2).
Table 1 Effects of different treatments on yield indexes of sugarcane
Table 2 Effects of different treatments on soil nutrient content of sugarcane
Effect of Different Fertilizers on Rhizosphere Microbial Community and Diversity
To assess the effects of the different treatments on microbial alpha diversity in sugarcane rhizosphere soil, we plotted box plots (Fig. 1). The rarefaction curves of observed OTU richness showed that the sequencing depth was sufficient to capture microbial alpha diversity (Fig. S1). Rhizosphere bacterial alpha diversity indices (Shannon, Sobs, Chao, and Ace) were significantly (P ≤ 0.05) affected by fertilizer, although the degree of influence differed between fungi and bacteria (Table S1). For bacteria, treatments BF1 and BF2 produced the highest Shannon indices compared with CK and CF, and the highest Sobs, Ace, and Chao indices were recorded in treatment BF2 (Table S1). For fungi, the Shannon and Chao indices were not significantly affected by fertilizer treatment, while treatment BF2 recorded the highest Sobs and Ace indices among all treatments (Table S1).
Box plots of rhizosphere microbial alpha diversity index under different fertilizer treatments, Tukey method. CK: urea application (57 kg/ha), CF: compound fertilizer (450 kg/ha), BF1: bio-fertilizer (1500 kg/ha of bio-fertilizer + 57 kg/ha of urea), BF2: bio-fertilizer (2250 kg/ha of bio-fertilizer + 57 kg/ha of urea)
The dominant bacterial phyla in all fertilizer treatment soils were Actinobacteria, Proteobacteria, Acidobacteria, Cyanobacteria, Firmicutes, Planctomycetes, Bacteroidetes, Chloroflexi, Gemmatimonadetes, and Nitrospirae (Fig. 2A), and the dominant fungal phyla were Ascomycota, Basidiomycota, Zygomycota, Ciliophora, Ochrophyta, Chytridiomycota, Choanomonada, Glomeromycota, Schizoplasmodiida, and Blastocladiomycota (Fig. 2B). Although the dominant phyla were consistent across all soils, the relative abundances of the dominant taxa changed across treatments (Table S2). For bacteria, soils with BF addition had a lower abundance of Actinobacteria and higher abundances of Acidobacteria and Chloroflexi compared with CK and CF (Fig. 2A); at the OTU level, fertilizer addition reduced the number of bacterial OTUs unique to the soil, with the degree of reduction depending on the fertilizer type (Fig. 2C). Among rhizosphere fungi, Ascomycota was overwhelmingly dominant. Compared to CF, the BF treatments had more Ciliophora, Ochrophyta, and Zygomycota (Fig. 2B). At the OTU level, bio-fertilizer addition increased the number of unique fungal OTUs, whereas CF reduced it (Fig. 2D).
Relative abundance histograms of the top 10 rhizosphere microbial phyla in each sample (A and B). Comparison of bacterial and fungal OTU using Venn diagram among different fertilizer treatments (C and D)
The Spearman correlation heatmap shows the relationships between microbial diversity and soil traits (Fig. 3A), and the correlations between major microbial genera and physicochemical soil variables are illustrated in Fig. 3B. For bacteria, TP significantly affected diversity, showing significant positive correlations with the Shannon, Ace, Sobs, and Chao indices (Fig. 3A). In addition, pH, AK, and TN were significantly correlated with most of the top 30 bacterial genera. Among them, the genera Acidobacteria, Anaerolineaceae, and Nitrospira were significantly positively correlated with soil pH, while Bacillus, Rhizomicrobium, Frankiales, Saccharibacteria, and Bradyrhizobium were significantly negatively correlated with pH. Furthermore, Haliangium, Nitrospira, and Nitrosomonadaceae showed strong positive correlations with TN, whereas Bradyrhizobium showed a significant negative correlation with TN (Fig. 3B). For fungi, TN and AK were significantly positively correlated with Sobs (Fig. 3A). Meanwhile, Fusarium was significantly negatively correlated with AP and AK, and Ascomycota with TP and AK. Notably, Chalazion was significantly positively correlated with SOC and TN; some of these observations were also confirmed by the RDA of the top 10 genera.
Heatmap of Spearman correlations between microbial alpha diversity indices and soil traits (A), and Spearman correlation heatmap of soil environmental variables and the top 30 dominant bacterial and fungal genera; only correlations with coefficients greater than 0.4 are marked for significance (B). * significance at P < 0.05, ** significance at P < 0.01, and *** significance at P < 0.001
Non-metric multidimensional scaling (NMDS) showed a clear distinction in bacterial and fungal community composition among CK, CF, and BF (Fig. 4A and D). Bacterial communities separated from each other mainly along the NMDS1 axis, whereas fungal communities separated among treatments along the NMDS2 axis. Redundancy analysis (RDA) revealed that soil variables (pH, AN, AK, TN, TP, SOC) affected the soil microbial community in the different treatments. The first two canonical axes explained 40.71% and 17.12% of the bacterial and 30.55% and 17.86% of the fungal community variation, respectively. Notably, of all the soil variables investigated, pH (r2 = 0.8070, p = 0.0005) and AK (r2 = 0.7988, p = 0.001) for bacteria, and SOC (r2 = 0.6974, p = 0.0025), TN (r2 = 0.7558, p = 0.0020), pH (r2 = 0.6640, p = 0.0045), and AK (r2 = 0.6303, p = 0.0085) for fungi, were important drivers shaping and controlling the microbial community (Fig. 4C and F; Table S3). Meanwhile, the Adonis test indicated significant differences between the fertilizer treatment groups (Table 3), and VPA showed that soil physicochemical factors explained 80.09% and 73.31% of the variance for bacteria and fungi, respectively, with pH explaining a higher percentage of the variance for fungi (23.88%) than for bacteria (9.91%) (Fig. S2).
Non-metric multidimensional scaling (NMDS) of rhizosphere microbial community composition among different fertilizer treatments (A and D). Redundancy analysis (RDA) illustrating the association between samples and soil properties among treatments (B and E), and between microbial genera (top 10) and environmental variables (C and F). Points with different colors depict sample groups under different fertilizer treatments; gray and black points represent different microbial genera; red arrows represent environmental factors, with arrow length representing the degree of influence on the genera or samples. Bacteria (A-C) and fungi (D-F)
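A sketch of how such an ordination and variance partitioning could be produced with vegan; `otu`, `bray`, and `group` are carried over from the earlier sketch, and `env` is a hypothetical data frame of the soil variables (pH, AN, AK, TN, TP, SOC):

```r
nmds <- metaMDS(bray, k = 2, trymax = 100)    # NMDS on Bray-Curtis distances
plot(nmds, type = "t")

hel <- decostand(otu, method = "hellinger")   # Hellinger transformation before RDA
ord <- rda(hel ~ pH + AN + AK + TN + TP + SOC, data = env)
envfit(ord, env, permutations = 999)          # per-variable r2 and p-values

# Variance partitioning (VPA): community variation explained by pH versus
# the remaining soil variables.
vp <- varpart(hel, ~ pH, ~ AN + AK + TN + TP + SOC, data = env)
plot(vp)
```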
Table 3 Adonis analysis of the bacterial and fungal communities
Differential Microorganisms Under Different Fertilizer Treatments
According to the DESeq2 results, in the bacteria we identified 220 differential genera (98 upregulated and 122 downregulated) in the comparison between CK and BF2, 86 genera (40 up, 46 down) between CK and CF, and 29 genera (19 up, 10 down) between CF and BF2 (Table S4). Latescibacteria, Actinobacteria, Acidobacteria, and Nordella were significantly enriched in the comparison of CF and BF2, whereas Actinospica, Jatrophihabitans, Leifsonia, and Sinomonas were significantly reduced (Fig. 5C). In the fungal community, 4 (CK vs. CF), 29 (CK vs. BF2), and 28 (CF vs. BF2) differential genera were identified in the respective comparisons (Fig. 5D-F). Mrakia, Saccharomycetales, Obertrumia, and Galactomyces were significantly enriched after BF2 treatment compared to the control, whereas Phallus, Ascomycota, and Thysanophora were significantly reduced (Fig. 5E). The identified differential genera are shown in volcano plots (Fig. 5), with p < 0.05 set as the cut-off criterion for significance.
Volcano plots depicting bacterial (A-C) and fungal (D-F) genera. The X coordinate is |log2 (fold change)| and the Y coordinate is − log10 (p adj); P < 0.05, log2 (fold change) > 2. Each point represents a genus. Points in the brown area are genera with significant changes, with dominant genera labelled; other points are genera with non-significant differences
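A sketch of one of these comparisons with DESeq2; the `counts` matrix (genera × samples, raw counts) and the sample ordering are hypothetical, and the CF-vs-BF2 contrast mirrors the comparisons in the text:

```r
library(DESeq2)

coldata <- data.frame(group = factor(rep(c("CF", "BF2"), each = 3)))
dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ group)
dds <- DESeq(dds)
res <- results(dds, contrast = c("group", "BF2", "CF"))
# Genera significantly enriched or depleted under BF2 relative to CF,
# using the cut-offs from the volcano plots (p adj < 0.05, |log2FC| > 2):
subset(as.data.frame(res), padj < 0.05 & abs(log2FoldChange) > 2)
```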
Effects of Fertilizer Treatments on Rhizosphere Microbial Biomarkers and Functions
Linear discriminant analysis effect size (LEfSe) was used to identify microbial taxa significantly associated with each fertilizer treatment. Biomarker bacterial and fungal taxa are depicted in cladograms, using thresholds of linear discriminant analysis (LDA) score ≥ 3.5 for bacteria and ≥ 3 for fungi (Fig. 6A and C). Biomarkers varied across fertilizer treatments: the LDA detected 66 bacterial (CK = 24, CF = 16, BF1 = 26, BF2 = 0) and 98 fungal (CK = 20, CF = 15, BF1 = 21, BF2 = 42) biomarkers (Fig. 6A and C). The highest-scoring bacterial biomarkers of the BF1 treatment belonged to Acidobacteria and Anaerolineaceae, and those of CF to Alphaproteobacteria, Gaiellales, and Frankiales. In fungi, the highest-scoring biomarkers of BF2 belonged to Cystofilobasidiaceae, Mrakia, Pinnularia, and Tremellomycetes, and those of CF to unclassified Dothideomycetes and Tremellales (Fig. 6C). In addition, 44 third-level KEGG pathways differed significantly (LDA > 2.5, P < 0.05, Fig. 6B), including 29 pathways significantly different in BF1, such as genetic information processing, global and overview maps, and energy metabolism, and seven pathways significantly different in CF, such as environmental information processing, lipid metabolism, and xenobiotic biodegradation and metabolism (Fig. S4); the BF1 treatment group thus had the most differential pathways. Meanwhile, 14 fungal FUNGuild categories differed (CK = 4, CF = 6, BF1 = 0, BF2 = 4; LDA > 2.0, P < 0.05, Fig. 6D and Fig. S5), with BF2 mainly characterized by pathotroph and animal pathogen guilds and CF by pathotroph-saprotroph and fungal parasite-undefined saprotroph guilds.
Cladograms illustrating the phylogenetic dynamics of the rhizosphere microorganisms associated with the different fertilizers (A and C). Bacterial biomarkers with LDA scores ≥ 3.5 and fungal biomarkers with LDA scores ≥ 3 in each treatment are listed. Different colors depict different treatments, and circles show phylogenetic levels from phylum to OTU. Differentially abundant KEGG functional pathways in the PICRUSt-predicted metagenome of sugarcane and differences in FUNGuild functional classifications of fungi, identified using LEfSe (B and D). Nodes of different colors represent microbes that play a crucial role in the group shown in that color; yellow nodes denote non-significant taxa
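LEfSe itself was run outside R, but its two core steps (non-parametric screening followed by an LDA effect size) can be roughly approximated as below; this is only a sketch of the idea, assuming a relative-abundance matrix `relab` (samples × taxa) and the `group` factor from the earlier sketch, and omitting LEfSe's bootstrapping:

```r
library(MASS)

# Step 1: Kruskal-Wallis screening of each taxon across treatment groups.
kw_p <- apply(relab, 2, function(x) kruskal.test(x, group)$p.value)
candidates <- relab[, kw_p < 0.05, drop = FALSE]  # taxa passing the screen

# Step 2: LDA on the screened taxa; the absolute scaling coefficients give
# a rough per-taxon effect size (LEfSe additionally bootstraps this step).
fit <- lda(candidates, grouping = group)
lda_scores <- sort(abs(fit$scaling[, 1]), decreasing = TRUE)
head(lda_scores)  # taxa with the largest discriminant loadings
```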
For bacteria, among the top 30 genera identified by the support vector machine (Fig. S3), Woodsholea, norank_Latescibacter, Bauldia, Myxococcales, and Oryzihumus were identified as important variables contributing to the class separation between CK and CF. Anaerolinea, Vicinamibacter, Syntrophobacter, and Anaerolineaceae were the most important genera for the difference between CK and BF2, while norank_Anaerolineaceae, Vulgatibacter, Paenibacillus, Achromobacter, and Roseiarcus deserve particular attention for differentiating CF and BF2 (Fig. 7A). For fungi, Hydnodontaceae, norank_Agaricomyce, Saccharomycetales, Ascomycota, and Glomeromycota (CK vs. CF), Ascomycota, Obertrumia, Salpingoeca, Monosiga, and Discicristoidea (CK vs. BF2), and Cochliobolus, Sordariales, Dothideomycetes, Pleosporales, and Acrospermum (CF vs. BF2) contributed more to the variability between groups than the other genera (Fig. 7B).
A support vector machine (SVM) approach was used to select the bacterial (A) and fungal (B) genera with the highest contribution to the variance between the fertilizer treatment groups. The horizontal coordinate is the average importance and the vertical coordinate the microbial genus; the heatmap shows the relative abundance differences of each genus between the two compared groups. The top 30 bacterial and top 15 fungal genera by importance are shown. Order of comparison: CK vs. CF, CK vs. BF2, and CF vs. BF2
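The SVM step was performed on a cloud platform whose exact pipeline is not public; a sketch of the general idea with e1071 (log-transformed relative abundances, linear kernel, weight-based importance), using `relab` and `group` as in the earlier sketches, might look like this:

```r
library(e1071)

keep <- group %in% c("CF", "BF2")
x <- log10(as.matrix(relab[keep, ]) + 1e-6)  # log-transformed abundances
y <- droplevels(group[keep])

fit <- svm(x, y, kernel = "linear", scale = TRUE)
w <- t(fit$coefs) %*% fit$SV                 # weight vector of the linear SVM
importance <- sort(abs(w[1, ]), decreasing = TRUE)
head(importance, 30)                          # top genera separating CF and BF2
```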
Network Analysis of Soil Microbial Communities (Co-occurrence Network)
Co-occurrence network analysis was used to assess interactions across the dominant populations, and only significant correlations (r2 > 0.4, p < 0.05) are shown in the networks. The results revealed the lowest number of links in the BF2 bacterial network, while among fungi the BF1 network had the fewest links (Table S5). Further inspection of the bacterial and fungal genus networks showed the lowest mean degree, closeness centralization, network centralization, and clustering coefficient values in BF2 compared with the other treatments (Table S6). Some genera, such as norank_Acidobacteria, norank_Anaerolineaceae, Bacillus, and Roseiflexus, had higher relative abundances and clustering coefficients in the BF1 bacterial network. In the CF bacterial network, the genera Candidatus_Solibacter, norank_Nitrosomonadaceae, Nitrospira, and norank_Acidimicrobiales had the largest clustering coefficients compared with the other treatments (Fig. 8C and Table S7). In the fungal networks, Fusarium had the highest clustering coefficient in CF and the lowest in BF2 (Fig. 8D and F, Table S8).
Co-occurrence networks of rhizosphere microbial features. The maps show the bacterial and fungal networks at the genus level, restricted to the top 40 genera. CK: urea application (A and B), CF: compound fertilizer (C and D), BF1: bio-fertilizer + urea (E and F), and BF2: bio-fertilizer + urea (G and H). Lines represent significant Pearson correlations (r2 > 0.4, p < 0.05); light red lines indicate significant positive correlations and blue lines significant negative correlations. Red nodes mark the top 6 node values in each network, and circle size represents the relative abundance of each genus
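A sketch of the network construction and the basic topology metrics, using the thresholds from the figure legend; `relab` is as in the earlier sketches, and the exact filtering used by the authors may differ:

```r
library(Hmisc)
library(igraph)

cc <- rcorr(as.matrix(relab), type = "pearson")   # pairwise correlations
adj <- (cc$r^2 > 0.4) & (cc$P < 0.05)             # keep only significant links
adj[is.na(adj)] <- FALSE
diag(adj) <- FALSE

g <- graph_from_adjacency_matrix(adj, mode = "undirected")
mean(degree(g))                    # mean degree of the network
transitivity(g, type = "global")   # clustering coefficient
```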
Fertilizer application is one of the most common practices used in agricultural production to increase crop yields [38, 39]. Although nutrient use efficiency in China's farming has gradually improved over the past decade [40], large amounts of inorganic fertilizers (nitrogen, phosphorus, and potassium) are still applied to farmland to increase crop yields, causing serious ecological problems such as soil organic matter loss [41], low soil fertility, nutrient inefficiency, and soil quality degradation [43]. In a situation where increasing chemical fertilizer input can no longer raise yields, the development of new fertilizers is an important milestone. At the same time, an in-depth understanding of the activity patterns of rhizosphere soil microorganisms after bio-fertilizer application can play a crucial role in the development and utilization of new fertilizers to improve soil productivity. We therefore conducted this study.
Impact of Different Fertilizers on Sugarcane Yield Index and Soil Nutrients
There is by now ample evidence that soil physicochemical factors such as SOC, TP, TN, AP, AN, and AK are enhanced by different fertilizers, and that some fertilizers can mitigate soil acidification to some extent [41, 43]. However, these studies are based on chemical or other fertilizers, and rhizosphere microbial studies of bio-fertilizers are still relatively scarce. Our findings show that sugarcane sugar content and soil pH varied noticeably among fertilizers, which may be because the microorganisms added with the bio-fertilizer promote sugarcane root secretion, or because the rhizosphere community under bio-fertilizer recruits more functional microbes from the soil that reduce soil acidity and facilitate nutrient uptake by the roots [13]. Although the addition of bio-fertilizer did not produce significant differences in yield indicators compared to the CF treatment group, the yield increase with bio-fertilizer was greater than with chemical fertilizer alone. In addition, the organic matter input from the bio-fertilizer can improve the water-soluble and exchangeable forms of soil micronutrients, further enhancing micronutrient uptake by the sugarcane root system [45].
Effect of Fertilizers on Microbial Species Composition and Diversity
Fertilizer addition significantly affected the diversity and species composition of the sugarcane rhizosphere microbial community. Both compound fertilizer and bio-fertilizer increased bacterial diversity and abundance to different degrees but had no significant effect on the rhizosphere fungal community, similar to the findings of Bello et al. [46]. Non-metric multidimensional scaling (NMDS) and redundancy analysis (RDA) were used to explore changes in the composition of the rhizosphere microbial community and the correlation between environmental factors and the rhizosphere community, respectively. Samples from different treatment groups were clearly separated in the NMDS (Fig. 4A and D) while clustering within groups, and the Adonis test (Table 3) confirmed significant differences between the fertilizer treatments (p < 0.05). Many studies have demonstrated that soil physicochemical factors are important drivers of soil microbial communities [47, 48]. Likewise, our RDA and Spearman correlation heatmap analyses (Fig. 3) revealed that pH, AN, TN, AK, and SOC significantly affected rhizosphere bacterial and fungal structure and diversity, and the VPA showed that soil physicochemical variables explained a large proportion of the microbial variation (Fig. S2). These results support previous findings; for example, Cao et al. reported that soil pH, SOC, TN, and TP were all significantly correlated with bacteria, fungi, and total microorganisms [49]. These observations may reflect the fact that the properties of different fertilizers have specific effects on the rhizosphere environment, and that functional bacteria in bio-fertilizers may increase nutrient availability or promote the secretion of certain chemicals from sugarcane while influencing rhizosphere community interactions, thus affecting the entire root-soil-microbial system. In addition, the bacterial genera showing significant positive correlations with TN, TP, and AK in this study were Acidimicrobiales, Haliangium, Nitrospira, and Nitrosomonadaceae, and the major fungal genera were Pseudallescheria, Mrakia, Chalazion, and Chytridiomycota. These genera are likely to act as coordinators or transformers of soil nutrients [50, 51].
Fertilizer's Effect on Differential Microbes
There was large variability in differential microbial genera among the comparison groups (Fig. 5 and Table S4). The bacterial genera Microbacterium, Leifsonia, and Sinomonas, which were significantly reduced in BF2 compared to CK and CF, have been reported as a group of Gram-positive bacteria that may be associated with disease [52]; in particular, the reduction of Leifsonia is likely to suppress or slow down the occurrence of ratoon stunting (growth-hindering) disease of sugarcane [53]. Meanwhile, the significantly enriched Geobacter, Nitrosomonadaceae, and Pedomicrobium are associated with environmental remediation [54], nitrification, and the utilization of trace elements in the soil [55, 56], and their interactions may have promoted the activity of rhizosphere-related enzymes in sugarcane, thereby facilitating the uptake and utilization of trace elements. In addition, in the fungal volcano plot (Fig. 5B), the Saccharomycetales enriched relative to the CK and CF treatment groups may synthesize active chemical substances that promote root growth and cell division and provide substrates required for the proliferation of other beneficial microorganisms [57]. These phenomena deepen our understanding of the ways in which bio-fertilizers promote soil ecosystems and plant health.
Impact of Fertilizers on Biomarkers and Functions
To further explore the effects of the bio-fertilizer on the rhizosphere community, LEfSe analysis and a machine learning algorithm (support vector machine, SVM) were used to find biomarkers and to rank the contribution of microbial genera to the differences between treatment groups, respectively. According to the LEfSe analysis, microbial indicators differed significantly among fertilizer treatments. This suggests that the different fertilizers accelerated selection of the rhizosphere microbial community by modifying the rhizosphere soil microenvironment and through the release of chemical secretions (recruitment or expulsion) by sugarcane to build a rhizosphere environment suitable for its own growth [58, 59]. Most of the significant biomarkers belonged to Acidobacteria, Actinobacteria, and Proteobacteria in the bacterial community and to Ascomycota and Basidiomycota in the fungal community. These results corroborate the observation of Zhang et al., who reported phylum Ascomycota to be the most pronounced biomarker community under different carbon assimilation conditions [60]. Meanwhile, the SVM evaluated the importance of the microbial genera responsible for the variability between fertilizers; genera of relatively high average importance may influence functional differences in sugarcane under the different fertilizer regimes [61]. Between BF2 and CF, the top-ranked bacterial genera by relative importance were Anaerolineaceae, Vulgatibacter, and Paenibacillus, and the top fungi were Cochliobolus, Sordariales, and Dothideomycetes. Genera of high importance may be associated with the biological processes significantly marked in the LEfSe (Fig. 6B and D). Furthermore, among the LEfSe results for bacterial functional pathways, BF1 had the most tagged pathways, such as genetic information processing, global and overview maps, energy metabolism, translation, and the citrate (TCA) cycle, suggesting that the addition of bio-fertilizer may affect numerous biological processes by altering the community structure and composition of rhizosphere microorganisms. In a previous study, the application of Trichoderma bio-fertilizer reported by Zhang et al. changed the microbial environment of grassland, and Trichoderma abundance became the most important contributor to grassland biomass, indirectly suggesting that bio-fertilizer addition alters a series of biological processes at the rhizosphere level [13]. For fungi, the CF treatment seemed to have a stronger effect on rhizosphere biological processes; this may be due to a contest between the fertilizer effect and the microbial effect, which needs to be explored more deeply [46].
Fertilizer's Effects on Soil Microbial Communities and Network Patterns
Co-occurrence analysis showed that the relative abundances of the bacteria Acidobacteria and Anaerolineaceae were significantly higher with bio-fertilizer addition than in the CK and CF treatment groups (Table S9), and that they played a more important role in the network (Table S7). We hypothesize that this increase in abundance is closely related to the increase in sugarcane rhizosphere soil pH. Soil pH has been reported to be one of the major soil factors determining microbial community structure under different fertilizer regimes [46, 62]. In some microorganisms, intracellular acidification inhibits most enzymatic metabolism, making them sensitive to pH changes [63]. Thus, an increase in soil pH is partly suggestive of a healthier soil environment. We also identified some potentially beneficial bacteria among the microbes with higher relative abundance and network position in the BF1 and BF2 co-occurrence networks; for instance, Nitrosomonadaceae has been reported to be closely associated with nitrification and with the bioremediation of toxic chemicals in soil [64,65,66]. In addition, the network centralization of the bacterial networks differed among fertilizer treatments, with BF2 having the smallest (15.52%) (Table S6); this may be because the functional bacteria added with the fertilizer disrupted the equilibrium of interactions among the original soil microorganisms, making the network more extensive and allowing more key microbes to become central hubs. In the fungal networks, Talaromyces had absolute numerical and positional dominance in every treatment (Tables S8 and S10). However, the addition of the different fertilizers resulted in more negative relationships among genera, with the greatest increase in the rate of negative relationships observed in the BF2 network (Table S5). Meanwhile, the fungal network under bio-fertilizer treatment possessed fewer interactions, similar to the network characteristics of healthy soil proposed by Yun et al. [67]. Interestingly, among the fungal networks, CF possessed the highest network centralization, which may be due to specific effects of chemical fertilizers on fungi.
In this study, we determined the rhizosphere microbial community composition and function in sugarcane, and their response to changes in soil physicochemical parameters, after the application of different fertilizers. The main reason for these changes is likely the combined effect of soil pH, the nutrients in the fertilizers, and the functional bacteria. The VPA showed a high degree of explanation of the microbial community by soil physicochemical factors. Compared with CK and CF, bio-fertilizer use greatly reduced soil acidification and improved soil microbial community composition and structure, thus improving soil quality and productivity. In addition, bio-fertilizer use induced more beneficial microorganisms to accumulate in the sugarcane rhizosphere soil, while the reduction of some pathogenic bacteria such as Leifsonia likely inhibited or slowed the occurrence of ratoon stunting disease of sugarcane, promoting plant health. Among the co-occurrence networks under the different fertilizer regimes, the bio-fertilizer network was closest to the network characteristics of healthy soil, indicating that bio-fertilizer application can improve soil health to some extent and support green, stable, and sustainable development. Overall, this study provides new insights into the future replacement of overused chemical fertilizers by bio-fertilizers and is important for exploring plant-soil-microbial interactions.
Lenaerts B, Collard BCY, Demont M (2019). Review: improving global food security through accelerated plant breeding. Plant Science, 287:110207.
Iizumi T, Kotoku M, Kim W, West PC, Gerber JS, Brown ME (2018). Uncertainties of potentials and recent changes in global yields of major crops resulting from census- and satellite-based yield datasets at multiple resolutions. Plos One, 13:e203809.
Bel J, Legout A, Saint-André L, Hall SJ, Löfgren S, Laclau J, et al. (2020). Conventional analysis methods underestimate the plant-available pools of calcium, magnesium and potassium in forest soils. Scientific Reports, 10.
Gkarmiri K, Finlay RD, Alström S, Thomas E, Cubeta MA, Högberg N (2015). Transcriptomic changes in the plant pathogenic fungus Rhizoctonia solani AG-3 in response to the antagonistic bacteria Serratia proteamaculans and Serratia plymuthica. BMC Genomics, 16.
Solanki MK, Wang F, Wang Z, Li C, Lan T, Singh RK et al (2019) Rhizospheric and endospheric diazotrophs mediated soil fertility intensification in sugarcane-legume intercropping systems. J Soils Sediments 19:1911–1927
Wang J, Xue C, Song Y, Wang L, Huang Q, Shen Q (2016). Wheat and rice growth stages and fertilization regimes alter soil bacterial community structure, but not diversity. Frontiers in Microbiology, 7.
Ramirez KS, Craine JM, Fierer N (2012) Consistent effects of nitrogen amendments on soil microbial communities and processes across biomes. Glob Change Biol 18:1918–1927
Hamza MA, Anderson WK (2005) Soil compaction in cropping systems. Soil and Tillage Research 82:121–145
Guo JH, Liu XJ, Zhang Y, Shen JL, Han WX, Zhang WF et al (2010) Significant acidification in major Chinese croplands. Science 327:1008–1010
Gu Y, Wang X, Yang T, Friman V, Geisen S, Wei Z, et al. (2020). Chemical structure predicts the effect of plant-derived low molecular weight compounds on soil microbiome structure and pathogen suppression. Functional Ecology.
Badri DV, Vivanco JM (2009) Regulation and function of root exudates. Plant, Cell Environ 32:666–681
Dong M, Zhao M, Shen Z, Deng X, Ou Y, Tao C et al (2020) Biofertilizer application triggered microbial assembly in microaggregates associated with tomato bacterial wilt suppression. Biol Fertil Soils 56:551–563
Zhang F, Huo Y, Cobb AB, Luo G, Zhou J, Yang G, et al. (2018). Trichoderma biofertilizer links to altered soil chemistry, altered microbial communities, and improved grassland biomass. Frontiers in Microbiology, 9.
Zhong W, Gu T, Wang W, Zhang B, Lin X, Huang Q et al (2010) The effects of mineral fertilizer and organic manure on soil microbial community and diversity. Plant Soil 326:523
Gunarto L (2000). Rhizosphere microbes: their roles and potential. Jurnal Penelitian Dan Pengembangan Pertanian.
Gyaneshwar P, Kumar GN, Parekh LJ, Poole PS (2002) Role of soil microorganisms in improving P nutrition of plants. System Sciences & Comprehensive Studies in Agriculture 245:133–143
Malik AA, Swenson T, Weihe C, Morrison EW, Martiny JBH, Brodie EL et al (2020) Drought and plant litter chemistry alter microbial gene expression and metabolite production. ISME J 14:2236–2247
Lugtenberg B, Kamilova F (2009) Plant-growth-promoting rhizobacteria. Annu Rev Microbiol 63:541–556
Zhang Q, Zhou W, Liang G, Wang X, Sun J, He P, et al. (2015). Effects of different organic manures on the biochemical and microbial characteristics of albic paddy soil in a short-term experiment. Plos One, 10:e124096.
Pang Z, Dong F, Liu Q, Lin W, Hu C, Yuan Z (2021). Soil metagenomics reveals effects of continuous sugarcane cropping on the structure and functional pathway of rhizospheric microbial community. Frontiers in Microbiology, 12.
Singh A, Sarma BK, Upadhyay RS, Singh HB (2013) Compatible rhizosphere microbes mediated alleviation of biotic stress in chickpea through enhanced antioxidant and phenylpropanoid activities. Microbiol Res 168:33–40
Yi H, Heil M, Adame-Álvarez RM, Ballhorn DJ, Ryu C (2009) Airborne induction and priming of plant defenses against a bacterial pathogen. Plant Physiol 151:2152–2161
Tan S (2013) The effect of organic acids from tomato root exudates on rhizosphere colonization of Bacillus amyloliquefaciens T-5. Appl Soil Ecol 64:15–22
Vestergaard G, Schulz S, Schöler A, Schloter M (2017) Making big data smart—how to use metagenomics to understand soil quality. Biol Fertil Soils 53:1–6
Bao SD (2000). Soil and agricultural chemistry analysis.
Lin W, Wu L, Lin S, Zhang A, Zhou M, Lin R et al (2013) Metaproteomic analysis of ratoon sugarcane rhizospheric soil. BMC Microbiol 13:1–13
Sun L, Han X, Li J, Zhao Z, Liu Y, Xi Q, et al. (2020). Microbial community and its association with physicochemical factors during compost bedding for dairy cows. Frontiers in Microbiology, 11.
Wang W, Yi Y, Yang Y, Zhou Y, Jia W, Zhang S, et al. (2019). Response mechanisms of sediment microbial communities in different habitat types in a shallow lake. Ecosphere, 10.
Pang Z, Tayyab M, Kong C, Hu C, Zhu Z, Wei X et al (2019) Liming positively modulates microbial community composition and function of sugarcane fields. Agronomy 9:808
Edgar RC (2013) UPARSE: highly accurate OTU sequences from microbial amplicon reads. Nat Methods 10:996–998
Caporaso JG, Kuczynski J, Stombaugh J, Bittinger K, Bushman FD, Costello EK, et al. (2010). QIIME allows analysis of high-throughput community sequencing data. Nature Methods.
Ma B, Lv X, Cai Y, Chang SX, Dyck MF (2018) Liming does not counteract the influence of long-term fertilization on soil bacterial community structure and its co-occurrence pattern. Soil Biol Biochem 123:45–53
Love M, Anders S, Huber W (2014). Differential analysis of count data–the deseq2 package.
Suykens J, Vandewalle J (1999) Least squares support vector machine classifiers. Neural Process Lett 9:293–300
Borgatti SP, Everett MG, Freeman LC (2002). UCINET VI for windows: software for social network analysis.
Douglas GM, Maffei VJ, Zaneveld JR, Yurgel SN, Brown JR, Taylor CM, et al. (2020). PICRUSt2 for prediction of metagenome functions. Nature Biotechnology.
Nguyen NH, Song Z, Bates ST, Branco S, Tedersoo L, Menke J et al (2016) FUNGuild: an open annotation tool for parsing fungal community datasets by ecological guild. Fungal Ecol 20:241–248
Mącik M, Gryta A, Sas-Paszt L, Frąc M (2020) The status of soil microbiome as affected by the application of phosphorus biofertilizer: fertilizer enriched with beneficial bacterial strains. Int J Mol Sci 21:8003
Yin H, Zhao W, Li T, Cheng X, Liu Q (2018) Balancing straw returning and chemical fertilizers in China: role of straw nutrient resources. Renew Sustain Energy Rev 81:2695–2702
Huang Y, Huang X, Xie M, Cheng W, Shu Q (2021). A study on the effects of regional differences on agricultural water resource utilization efficiency using super-efficiency SBM model. Scientific Reports, 11.
Qian L, Chen B, Chen M (2016). Novel alleviation mechanisms of aluminum phytotoxicity via released biosilicon from rice straw-derived biochars. Scientific Reports, 6.
Lima Neto AJD, Deus JALD, Rodrigues Filho VA, Natale W, Parent LE (2020). Nutrient diagnosis of fertigated "Prata" and "Cavendish" banana (Musa spp.) at Plot-Scale. Plants, 9:1467.
Sánchez-Montesinos B, Diánez F, Moreno-Gavira A, Gea FJ, Santos M (2019) Plant growth promotion and biocontrol of Pythium ultimum by saline tolerant trichoderma isolates under salinity stress. Int J Environ Res Public Health 16:2053
Wang R, Shi X, Wei Y, Yang X, Uoti J (2006) Yield and quality responses of citrus (Citrus reticulate) and tea (Podocarpus fleuryi Hickel.) to compound fertilizers. Journal of Zhejiang University B Science 7B:696–701
Dhaliwal SS, Naresh RK, Mandal A, Singh R, Dhaliwal MK (2019). Dynamics and transformations of micronutrients in agricultural soils as influenced by organic matter build-up: a review. Environmental and Sustainability Indicators, 1–2:100007.
Bello A, Wang B, Zhao Y, Yang W, Ogundeji A, Deng L, et al. (2021). Composted biochar affects structural dynamics, function and co-occurrence network patterns of fungi community. Science of the Total Environment, 775:145672.
Shao JL, Lai B, Jiang W, Wang JT, Hong YH, Chen FB, et al. (2019). Diversity and co-occurrence patterns of soil bacterial and fungal communities of Chinese cordyceps habitats at Shergyla Mountain, Tibet: implications for the occurrence. Microorganisms, 7.
Jia T, Cao M, Wang R (2018) Effects of restoration time on microbial diversity in rhizosphere and non-rhizosphere soil of Bothriochloa ischaemum. Int J Environ Res Public Health 15:2155
Cao H, Chen R, Wang L, Jiang L, Yang F, Zheng S, et al. (2016). Soil pH, total phosphorus, climate and distance are the major factors influencing microbial activity at a regional spatial scale. Scientific Reports, 6.
LeBlanc N, Kinkel LL, Kistler HC (2015) Soil fungal communities respond to grassland plant community richness and soil edaphics. Microb Ecol 70:188–195
Hatam I, Petticrew EL, French TD, Owens PN, Laval B, Baldwin SA (2019). The bacterial community of Quesnel Lake sediments impacted by a catastrophic mine tailings spill differ in composition from those at undisturbed locations – two years post-spill. Scientific Reports, 9.
Zhou Y, Wei W, Wang X, Lai R (2009) Proposal of Sinomonas flava gen. nov., sp. nov., and description of Sinomonas atrocyanea comb. nov. to accommodate Arthrobacter atrocyaneus. Int J Syst Evol Microbiol 59:259–263
Brumbley SM, Petrasovits LA, Birch RG, Taylor PWJ (2002). Transformation and transposon mutagenesis of Leifsonia xyli subsp.xyli, causal organism of ratoon stunting disease of sugarcane. Molecular Plant-Microbe Interactions®, 15:262–268.
Lovley DR, Ueki T, Zhang T, Malvankar NS, Shrestha PM, Flanagan KA et al (2011) Geobacter: the microbe electric's physiology, ecology, and practical applications. Adv Microb Physiol 59:1
Prosser JI, Head IM, Stein LY (2014) The family Nitrosomonadaceae. Springer, Berlin Heidelberg
Ridge JP, Lin M, Larsen EI, Fegan M, Sly LI (2007) A multicopper oxidase is essential for manganese oxidation and laccase-like activity in Pedomicrobium sp. ACM 3067. Environ Microbiol 9:944–953
Galitskaya P, Biktasheva L, Saveliev A, Grigoryeva T, Boulygina E, Selivanovskaya S (2017). Fungal and bacterial successions in the process of co-composting of organic wastes as revealed by 454 pyrosequencing. Plos One, 12:e186051.
Zhao X, Jiang Y, Liu Q, Yang H, Wang Z, Zhang M (2020). Effects of drought-tolerant Ea-DREB2B transgenic sugarcane on bacterial communities in soil. Frontiers in Microbiology, 11.
Liu Y, Yang H, Liu Q, Zhao X, Xie S, Wang Z, et al. (2021). Effect of two different sugarcane cultivars on rhizosphere bacterial communities of sugarcane and soybean upon intercropping. Frontiers in Microbiology, 11.
Zhang Q, Guo T, Li H, Wang Y, Zhou W (2020). Identification of fungal populations assimilating rice root residue-derived carbon by DNA stable-isotope probing. Applied Soil Ecology, 147:103374.
Ammons MCB, Morrissey K, Tripet BP, Van Leuven JT, Han A, Lazarus GS, et al. (2015). Biochemical association of metabolic profile and microbiome in chronic pressure ulcer wounds. Plos One, 10:e126735.
Zhalnina K, Dias R, de Quadros PD, Davis-Richardson A, Camargo FAO, Clark IM et al (2015) Soil pH determines microbial diversity and composition in the park grass experiment. Microb Ecol 69:395–406
Colla LM, Primaz AL, Benedetti S, Loss RA, de Lima M, Reinehr CO et al (2016) Surface response methodology for the optimization of lipase production under submerged fermentation by filamentous fungi. Braz J Microbiol 47:461–467
Zhang B, Xu X, Zhu L (2018). Activated sludge bacterial communities of typical wastewater treatment plants: distinct genera identification and metabolic potential differential analysis. AMB Express, 8.
Jiang J, Song Z, Yang X, Mao Z, Nie X, Guo H, et al. (2017). Microbial community analysis of apple rhizosphere around Bohai Gulf. Scientific Reports, 7.
Li M, Chen Z, Qian J, Wei F, Zhang G, Wang Y, et al. (2020). Composition and function of rhizosphere microbiome of Panax notoginseng with discrepant yields. Chinese Medicine, 15.
Yuan J, Wen T, Zhang H, Zhao M, Penton CR, Thomashow LS et al (2020) Predicting disease occurrence with high accuracy based on soil macroecological patterns of Fusarium wilt. ISME J 14:2936–2950
We thank the free online Majorbio Cloud platform (www.majorbio.com) for data analysis support.
This research was funded by the Modern Agricultural Industry Technology System of China (CARS-170208), the Natural Science Foundation of Fujian Province (2017J01456), the Special Foundation for Scientific and Technological Innovation of Fujian Agriculture and Forestry University (KFA17172A, KFA17528A), and the Natural Science Foundation of China (31771723), with support from the China Agriculture Research System of MOF and MARA.
Key Laboratory of Sugarcane Biology and Genetic Breeding, Ministry of Agriculture, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Qiang Liu, Ziqin Pang, Fallah Nyumah, Chaohua Hu & Zhaonian Yuan
College of Agriculture, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Qiang Liu, Ziqin Pang, Fallah Nyumah & Zhaonian Yuan
Fujian Provincial Key Laboratory of Agro-Ecological Processing and Safety Monitoring, College of Life Sciences, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Ziqin Pang, Fallah Nyumah & Wenxiong Lin
Key Laboratory of Crop Ecology and Molecular Physiology, Fujian Agriculture and Forestry University, Fuzhou, 350002, China
Province and Ministry Co-Sponsored Collaborative Innovation Center of Sugar Industry, Nanning, 530000, China
Zhaonian Yuan
Guangxi Laibin Xinbin Commercial Crop Technology Extension Station, Laibin, 546100, Guangxi, China
Zuli Yang
Qiang Liu
Ziqin Pang
Fallah Nyumah
Chaohua Hu
Wenxiong Lin
All authors contributed intellectual input and assisted with this study and manuscript preparation: Z.Y. and Z.P. designed the research and conducted the experiments; Q.L. analyzed the data and wrote the manuscript; Fallah N., W.L., and C.H. reviewed the manuscript; Z.Y. supervised the work and approved the manuscript for publication.
Correspondence to Zhaonian Yuan.
Below is the link to the electronic supplementary material.
Supplementary file1 (DOCX 2013 kb)
Liu, Q., Pang, Z., Yang, Z. et al. Bio-fertilizer Affects Structural Dynamics, Function, and Network Patterns of the Sugarcane Rhizospheric Microbiota. Microb Ecol (2021). https://doi.org/10.1007/s00248-021-01932-3
Bio-fertilizer
Physicochemical property
Rhizosphere microbes
Spatial modelling improves genetic evaluation in smallholder breeding programs
Maria L. Selle ORCID: orcid.org/0000-0002-2062-32351,
Ingelin Steinsland1,
Owen Powell2,
John M. Hickey2 &
Gregor Gorjanc2
Breeders and geneticists use statistical models to separate genetic and environmental effects on phenotype. A common way to separate these effects is to model a descriptor of an environment, a contemporary group or herd, and account for genetic relationship between animals across environments. However, separating the genetic and environmental effects in smallholder systems is challenging due to small herd sizes and weak genetic connectedness across herds. We hypothesised that accounting for spatial relationships between nearby herds can improve genetic evaluation in smallholder systems. Furthermore, geographically referenced environmental covariates are increasingly available and could model underlying sources of spatial relationships. The objective of this study was, therefore, to evaluate the potential of spatial modelling to improve genetic evaluation in dairy cattle smallholder systems.
We performed simulations and real dairy cattle data analysis to test our hypothesis. We modelled environmental variation by estimating herd and spatial effects. Herd effects were considered independent, whereas spatial effects had distance-based covariance between herds. We compared these models using pedigree or genomic data.
The results show that in smallholder systems (i) standard models do not separate genetic and environmental effects accurately, (ii) spatial modelling increases the accuracy of genetic evaluation for phenotyped and non-phenotyped animals, (iii) environmental covariates do not substantially improve the accuracy of genetic evaluation beyond simple distance-based relationships between herds, (iv) the benefit of spatial modelling was largest when separating the genetic and environmental effects was challenging, and (v) spatial modelling was beneficial when using either pedigree or genomic data.
We have demonstrated the potential of spatial modelling to improve genetic evaluation in smallholder systems. This improvement is driven by establishing environmental connectedness between herds, which enhances separation of genetic and environmental effects. We suggest routine spatial modelling in genetic evaluations, particularly for smallholder systems. Spatial modelling could also have a major impact in studies of human and wild populations.
This study evaluates the potential of spatial modelling to improve the genetic evaluation of animals in smallholder systems. Over the past century, genetic selection of dairy cattle has significantly increased milk production in developed countries [1]. For example, the average milk production of US Holstein cows almost doubled between 1960 and 2000, and more than half of this gain is due to genetic improvement [2]. However, such improvements have not been achieved in low- to middle-income countries, for example, in East Africa. For instance, Rademaker et al. [3] reported that milk yields on smallholder farms in Kenya are about 5 to 8 L per cow per day, several-fold lower than on large-scale commercial farms around the world. These low milk yields are due to environmental, technological and infrastructural difficulties as well as mixed breed composition [4, 5]. Whereas large-scale commercial farms measure phenotypes accurately and keep records of performance and pedigree, smallholders usually keep few or no records, and the absence of routine phenotyping systems reduces the accuracy of the records that do exist [6, 7].
To perform an accurate genetic evaluation of animals in a breeding program, a sufficient amount of data is needed, and the data should be appropriately structured [7,8,9]. In developed countries, a small number of large-scale commercial farms produce most of the milk, and there is a widespread use of artificial insemination that establishes strong genetic connectedness between herds. However, in many smallholder systems, smallholder farms contribute significantly to milk production, and there is low usage of artificial insemination with consequent weak genetic connectedness between herds. For example, smallholder milk-producing households in Kenya with one to three cows represent the majority of the national dairy population [3, 10]. Furthermore, 87% of surveyed Kenyan farmers used natural mating services rather than artificial insemination, even though 54% reported that they would have preferred artificial insemination [11]. Similar proportions were reported elsewhere [12, 13].
Small herd sizes and weak genetic connectedness between herds challenge accurate genetic evaluation [14,15,16]. When herds are small, it is difficult to accurately separate the genetic and environmental effects on the phenotype. Furthermore, with weak genetic connectedness, low relationships between animals in different herds limit the sharing of information, which additionally limits accurate separation of the genetic and environmental effects. Since most smallholders mate cows with their own or a neighbour's bull, it is reasonable to assume that most farmers within a short distance use the same bulls. This system genetically connects herds that are geographically close, although the overall genetic connectedness across the country is weak.
In the statistical models for genetic evaluations, the genetic effect is modelled using expected or realised genetic relationships between animals, derived from pedigree or genomic data, respectively. A herd effect, or a herd-year-season effect, is often included as the main environmental effect [6, 17,18,19,20]. When herd sizes are small, the herd effects are treated as random to increase sharing of information between herds and increase accuracy compared to treating them as fixed [7, 18, 21, 22]. In the extreme case of a single animal per herd, modelling herds as random is, in fact, the only possible approach [7]. In addition, including other factors and covariates in the statistical models is a way of bringing information into the model that can further enhance the separation of genetic and environmental effects.
Environmental effects can act at the management (herd) level or at a larger scale that is likely shared by herds in close proximity. Examples of management-level effects are the education, age and experience of the farmer, and the use of natural mating or artificial insemination. Some of these effects can be similar for herds in proximity: feed quality is likely similar on nearby farms, and veterinary practices are likely to vary with local, regional or national government policies. Farmers with more education and experience will likely be more skilled, which positively affects the phenotype, and age is usually related to experience. Examples of large-scale environmental effects are climate, proximity to roads, markets and towns, and government policies. Many of these environmental effects can be assumed to be spatially correlated. We refer to the management-level effects as herd effects and to the large-scale environmental effects as spatial effects.
There are multiple spatial models that could be used in an animal breeding context. A prerequisite for this is that data are geographically referenced. Geographical location can be described coarsely with regions or precisely with point coordinates. For an application of region-based models in an animal breeding context, see [23], where veterinary district was modelled as an environmental effect with covariance between neighbouring districts [24, 25]. We focus on coordinate-based models (often referred to as geostatistical models [25,26,27,28]) to account for fine-grained spatial relationships between smallholder farms. The only requirement for a coordinate-based model is that we collect herd coordinates; then all data pertaining to a herd is point-referenced. For a herd i, we define a tuple \({\mathbf {w}}_i\) that typically contains two-dimensional coordinates (latitude and longitude), but note that further extensions are possible [29, 30]. The observation at specific locations and locations themselves can vary continuously over a geographical region. A common model for such continuous spatial processes is a Gaussian random field, where we model observations at a set of locations \((y({\mathbf {w}}_1),...,y({\mathbf {w}}_n))\) with a multivariate normal distribution with mean \({\mu }\) and a distance-based covariance matrix \({\Sigma }\) [25]. The same approach can also be used as a model component in the context of a linear mixed model [25], as is the case with genetic effects, but in the spatial context, we account for relationships between locations. There are multiple possible covariance functions for spatial modelling. Most of them assume stationarity and isotropy, so that \({\mu }({\mathbf {w}}) = {\mu }\) and spatial covariance between locations is a function of the Euclidean distance between locations and model parameters, such as variance. The most commonly used is the Matérn covariance function [31].
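As a minimal illustration of such a continuous spatial process, the following R sketch draws a Gaussian random field at a set of herd locations from a multivariate normal distribution with a distance-based covariance matrix; it assumes an exponential covariance (the Matérn with \(\nu = 0.5\)) and illustrative parameter values, not the settings used later in the simulation study.

set.seed(1)
n <- 200
w <- cbind(runif(n), runif(n))       # herd coordinates in the unit square
D <- as.matrix(dist(w))              # Euclidean distances between locations
sigma2 <- 0.4                        # marginal variance
rho    <- 0.3                        # range parameter
Sigma  <- sigma2 * exp(-D / rho)     # exponential (Matern, nu = 0.5) covariance
R      <- chol(Sigma)                # Cholesky factor, Sigma = t(R) %*% R
s      <- drop(t(R) %*% rnorm(n))    # one realisation of the spatial field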
Modelling with continuously indexed Gaussian random fields is computationally challenging because they give rise to dense precision (covariance inverse) matrices that are numerically expensive to factorise [25], as is the case with genomic models [32, 33]. Gaussian Markov random fields approximate Gaussian random fields by assuming conditional independence, which increases sparsity of the precision matrix and reduces computational complexity. Lindgren et al. [29] showed how to construct an explicit link between some Gaussian random fields and Gaussian Markov random fields via a solution of stochastic partial differential equations. They also proposed use of a finite element method to further reduce computational complexity. This approach allows the implementation of computationally efficient numerical methods for spatial modelling of large-scale point-referenced data. Assuming conditional independence to scale genomic modelling has also been proposed recently [34, 35].
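In R-INLA, the stochastic partial differential equation approach outlined above is set up through a triangulated mesh over the study region. The sketch below shows the typical setup, reusing the herd coordinates w from the previous sketch; the mesh resolution and prior settings are illustrative assumptions, not values from this study.

library(INLA)                               # R-INLA, https://www.r-inla.org
mesh <- inla.mesh.2d(loc = w, max.edge = c(0.05, 0.2))   # mesh over herd locations
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(0.1, 0.5),   # P(range < 0.1) = 0.5
                            prior.sigma = c(1.0, 0.05))  # P(sigma > 1.0) = 0.05
A <- inla.spde.make.A(mesh, loc = w)        # projects mesh nodes to herd locations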
This study aimed at evaluating the potential of spatial modelling in addition to modelling independent herd effects to improve genetic evaluation in smallholder systems, and to determine if the impact depended on the genetic connectedness across the herds, and the use of pedigree or genomic data. In addition, we tested whether adding environmental covariates was beneficial beyond the simple distance-based relationships between herds.
We performed a simulation study that resembled smallholder systems that are commonly observed in East Africa with small herd sizes. We evaluated scenarios with different genetic connectedness across herds, herd distribution and spatial variation. The results showed that spatial modelling improved genetic evaluations, especially with weak genetic connectedness. We also analysed real dairy cattle data and the results indicated that the standard and spatial models separated the genetic and environmental effects in different ways for animals living in areas with larger spatial effects.
We first introduce the data used in the analyses: simulated smallholder dairy cattle data and real dairy cattle data. Then, we present the statistical models used for genetic evaluation and how we fitted and evaluated the models. Scripts for data simulation and model fitting are available in Additional file 1.
We used simulation to evaluate the potential of spatial modelling to improve genetic evaluation. The simulated data resembled the smallholder systems commonly observed in East Africa with small herds clustered in villages and a varying level of genetic connectedness. We simulated phenotype observations \(y_i\) as:
$$\begin{aligned} y_i = \mu + g_i + h_i + s_i + e_i, \end{aligned}$$
where \(\mu\) is population mean, \(g_i\) is the additive genetic effect of individual i, \(h_i \sim {\mathcal {N}}(0, \sigma _{h}^2)\) is the herd effect with \(\sigma _{h}^2 =0.25\), \(s_i\) is the spatial effect, and \(e_i \sim {\mathcal {N}}(0,\sigma _{e}^2)\) is an independent residual with \(\sigma _{e}^2 = 0.25\). Below, we describe the simulation of genetic and spatial effects. In Fig. 1, we show a conceptual illustration of the simulation. The top left panel shows the phenotypes, and the remaining panels show the genetic, herd and spatial effects. Note the most bottom-right village (cluster) with high genetic merit animals, but intermediate phenotypes due to negative spatial effects.
Illustration of the simulation. Each point denotes an animal, their location in a country and colour of the point denotes value of phenotype and underlying genetic, herd and spatial effects (residual not shown)
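The phenotype composition in Eq. (1) can be written directly in R. In the sketch below, g and s stand for the genetic and spatial effects whose simulation is described in the following subsections, and herdOf is an assumed integer index giving each animal's herd; only the herd and residual effects are drawn here, with the variances from the text.

nHerd   <- length(unique(herdOf))
nAnimal <- length(g)
h <- rnorm(nHerd,   0, sqrt(0.25))       # herd effects, sigma_h^2 = 0.25
e <- rnorm(nAnimal, 0, sqrt(0.25))       # residuals,    sigma_e^2 = 0.25
y <- 0 + g + h[herdOf] + s[herdOf] + e   # phenotypes as in Eq. (1), with mu = 0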
We simulated the data under three scenarios of genetic connectedness, from weak genetic connectedness between herds from different villages to strong genetic connectedness across all herds regardless of the village. We generated 60 independent data sets for each scenario of genetic connectedness.
Simulation of founders
First, we simulated a genome consisting of 10 chromosome pairs with cattle genome and demography parameters [36]. To this end, we used the Markovian Coalescent Simulator [37] and AlphaSimR [38, 39] to simulate genome sequences for 5000 founder individuals, which served as the initial parents. For each chromosome, we randomly chose segregating sites in the founders' sequences to serve as 5000 single-nucleotide polymorphisms (SNPs) and 1000 quantitative trait loci (QTL) per chromosome, yielding 50,000 SNPs and 10,000 QTL.
Then, we simulated a single complex trait with additive architecture by sampling QTL allele substitution effects from a standard normal distribution. We multiplied these with individuals' QTL genotypes and summed them to obtain the true breeding value. Then we simulated phenotypes with different heritabilities for cows (\(h^2=0.3\)) and bulls (\(h^2=0.8\)) to reflect different amounts of information per gender. These phenotypes were used for the initial assignment of bulls and their selection throughout the evaluation phase.
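The founder simulation can be sketched with AlphaSimR as below. Parameter values follow the text, but this is a simplified stand-in for the authors' scripts in Additional file 1; for instance, the handling of overlapping SNP and QTL sites and the sex-specific phenotyping may differ.

library(AlphaSimR)
founderPop <- runMacs(nInd = 5000, nChr = 10, segSites = 6000,
                      species = "CATTLE")   # MaCS with cattle demography
SP <- SimParam$new(founderPop)
SP$addTraitA(nQtlPerChr = 1000)             # 10,000 additive QTL in total
SP$addSnpChip(nSnpPerChr = 5000)            # 50,000 SNPs in total
pop   <- newPop(founderPop, simParam = SP)
cows  <- setPheno(pop, h2 = 0.3, simParam = SP)   # cow phenotypes,  h2 = 0.3
bulls <- setPheno(pop, h2 = 0.8, simParam = SP)   # bull phenotypes, h2 = 0.8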
Population simulation
We created 100 villages, each consisting of 20 herds, with herd sizes generated from a zero truncated Poisson distribution with parameter \(\lambda =1.5\). The 110 best males from the founder individuals (based on true genetic values) were assigned as breeding bulls, 100 as natural mating or artificial insemination bulls depending on the scenario, and 10 as artificial insemination bulls. The remaining founders were considered as cows and were randomly placed in the herds. Since the herd sizes were sampled, we did not have the same number of individuals in each replicate. On average, there were 3860 cows in total, and the cows not assigned to a herd were discarded.
We positioned the 100 villages by assuming a square country and sampled village coordinates in the north-south and east-west direction from a uniform distribution on (0, 1). We then positioned the 2000 herds by sampling their coordinates \({\mathbf {w}}\in {\mathbb {R}}^2\) from a bi-variate normal distribution with mean from the corresponding village coordinates and location variance \(3.5\cdot 10^{-4} {\mathbf {I}}_{2\times 2}\). This clustered the herds around village centres. We chose the location variance to achieve reasonable spread and clustering. We tested the sensitivity of results to this simulation parameter.
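A compact way to generate this spatial layout in R is sketched below: village centres are drawn uniformly, herd coordinates are drawn around them from a bivariate normal distribution with the intermediate location variance, and herd sizes come from a zero-truncated Poisson via the inverse-CDF trick.

nVillage <- 100; herdsPerVillage <- 20; lambda <- 1.5
village <- cbind(runif(nVillage), runif(nVillage))   # village centres
herdXY  <- do.call(rbind, lapply(seq_len(nVillage), function(v)
  cbind(rnorm(herdsPerVillage, village[v, 1], sqrt(3.5e-4)),
        rnorm(herdsPerVillage, village[v, 2], sqrt(3.5e-4)))))
# Zero-truncated Poisson herd sizes: sample the CDF above P(X = 0)
rztpois <- function(n, lambda)
  qpois(runif(n, ppois(0, lambda), 1), lambda)
herdSize <- rztpois(nVillage * herdsPerVillage, lambda)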
We tested three levels of genetic connectedness by controlling the breeding strategy. To achieve weak genetic connectedness, each village used their own natural mating bull, meaning that the cows were strongly related within the village and nominally unrelated between villages. However, there was still some base level genetic relationship due to the shared population history. To achieve intermediate genetic connectedness, each village used their own bull for mating in 75% of the herds, while the remaining herds in the village used one of the ten artificial insemination bulls at random, meaning that cows were still strongly related within villages, and somewhat related between villages. To achieve strong genetic connectedness, 100 artificial insemination bulls were randomly mated to cows across all herds and villages, meaning that cows were equally related within and between villages. For this last scenario, we used the 100 artificial insemination bulls instead of the ten artificial insemination bulls in order to maintain a relatively high degree of genetic diversity, and with this, a more challenging situation for separation of environmental and genetic effects.
The three scenarios were then simulated over 12 discrete generations. Within each farm, we replaced the current cows by their newborn female calves. The cows with male calves were not replaced, and their calves were candidates for natural mating if they came from a farm using natural mating, or for artificial insemination if they came from a farm using artificial insemination.
In the 11th generation, we scaled the true breeding values to have mean 0 and variance \(\sigma _g^2=0.1\), and used them as genetic effects in the model for phenotype observation \(y_i\) in Eq. (1), with 3860 records on average. In addition, the female calves in the 12th generation were kept for prediction purposes. To ease the computations with the genome-based model, we predicted breeding values for 200 randomly chosen calves in the 12th generation.
Simulation of spatial effects
We simulated spatial effects from multiple Gaussian random fields to mimic several sources of environmental effects. We imagined that these different sources could be temperature, precipitation, elevation, land size, proximity to markets and towns, availability of extension services, vaccine use, local and regional policies etc. We simulated the effects of eight such processes \({\mathbf {v}}_k, k = 1,\ldots ,8\) at the herd locations from a Gaussian random field with mean 0 and a Matérn covariance function [31]. The Matérn covariance function between locations \({\mathbf {w}}_i, {\mathbf {w}}_j \in {\mathbb {R}}^d\) is:
$$\begin{aligned} Cov({\mathbf {w}}_i, {\mathbf {w}}_j) = \frac{\sigma ^2}{2^{\nu -1} \Gamma (\nu )}\left( \kappa \Vert {\mathbf {w}}_j - {\mathbf {w}}_i \Vert \right) ^{\nu } K_{\nu }\left( \kappa \Vert {\mathbf {w}}_j - {\mathbf {w}}_i \Vert \right) , \end{aligned}$$
where \(K_{\nu }\) is the modified Bessel function of the second kind and the order \(\nu >0\) determines the mean-square differentiability of the field. The parameter \(\kappa\) can be expressed as \(\kappa = \sqrt{8\nu }/\rho\), where \(\rho >0\) is the range parameter describing the distance where correlation between two points is near 0.1, and \(\sigma ^2\) is the marginal variance. We varied these parameters to simulate processes on large and small scales and with different properties. Specifically, we sampled the range parameter \(\rho\) for each of the processes \({\mathbf {v}}_k\) from a uniform distribution on (0.1, 0.5), set the marginal variance \(\sigma ^2\) to either 0.2 or 0.3 with equal probability, and fixed the parameter \(\nu\) to 1.
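Equation (2) translates directly into R using the modified Bessel function of the second kind; the sketch below builds the covariance matrix for the herd coordinates herdXY from the earlier sketch, with the limit at zero distance patched to the marginal variance.

maternCov <- function(D, sigma2, rho, nu = 1) {
  kappa <- sqrt(8 * nu) / rho
  kd    <- kappa * D
  C     <- sigma2 * (2^(1 - nu) / gamma(nu)) * kd^nu * besselK(kd, nu)
  C[D == 0] <- sigma2            # Cov(w, w) = sigma^2 in the limit
  C
}
Sigma <- maternCov(as.matrix(dist(herdXY)), sigma2 = 0.3, rho = 0.3)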
We finally summed the eight processes to obtain the total spatial effect (Fig. 1) for all herd locations \({\mathbf {s}}\), with \({\mathbf {s}}({\mathbf {w}}_i)\) being the total spatial effect at location \({\mathbf {w}}_i\). We differentially emphasised some processes according to:
$$\begin{aligned} {\mathbf {s}} = \sum _{k=1}^3 {\mathbf {v}}_k + \sum _{k=4}^6 {\mathbf {v}}_k (1 + \alpha _k) + \sum _{k=7}^8 {\mathbf {v}}_k (1 + \alpha _k + \beta _k ) \end{aligned}$$
with the weights \(\alpha _k, \beta _k \sim \text {Uniform}(-0.5, 0.5)\). We scaled the spatial effects to have mean 0 and variance \(\sigma _{s}^2=0.4\).
Environmental covariates
We assumed that some spatial processes could be observed as environmental covariates at herd locations, possibly with some noise. We took the eight real processes and sampled two more (with mean 0 and a Matérn covariance function) that did not affect the phenotype.
For the spatial processes \({\mathbf {v}}_1\), \({\mathbf {v}}_2\), and \({\mathbf {v}}_3\), we assumed that we could observe the spatial covariates perfectly without error, which could be reasonable for some covariates, such as temperature and precipitation.
For the spatial processes \({\mathbf {v}}_4\), \({\mathbf {v}}_5\), and \({\mathbf {v}}_6\), we assumed that we could not observe them accurately, so we added normal distributed error with mean 0 and variance equal to 10% of the process marginal variance. This could be reasonable for some covariates that are difficult to measure or that vary with time; it could, for example, be challenging to quantify the amount and quality of feed.
For the spatial processes \({\mathbf {v}}_7\) and \({\mathbf {v}}_8\), we assumed that we could only observe categorical realisations of the continuous effects, for example, distance to markets and towns could be categorised as either a rural or urban area. For the process \({\mathbf{v}}_7\), we created a two-level factor by sampling a threshold from a uniform distribution between one standard deviation from the mean of \({\mathbf {v}}_7\) in both negative and positive directions. Values of \({\mathbf {v}}_7\) above the threshold were assigned one level, and values below were assigned the other level. For the process \({\mathbf {v}}_8\), we created a three-level factor by sampling two thresholds. The lower threshold was sampled from a uniform distribution between two standard deviations below the mean of \({\mathbf {v}}_8\) and the mean of \({\mathbf {v}}_8\). The upper threshold was sampled from a uniform distribution between the mean of \({\mathbf {v}}_8\) and two standard deviations above the mean of \({\mathbf {v}}_8\). The values of \({\mathbf {v}}_8\) were then assigned one of three levels depending on thresholds.
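These categorical covariates can be produced as sketched below, where v7 and v8 are the vectors of simulated process values at the herd locations:

thr <- runif(1, mean(v7) - sd(v7), mean(v7) + sd(v7))   # one threshold for v7
z7  <- factor(ifelse(v7 > thr, "high", "low"))          # two-level factor
lo  <- runif(1, mean(v8) - 2 * sd(v8), mean(v8))        # lower threshold for v8
hi  <- runif(1, mean(v8), mean(v8) + 2 * sd(v8))        # upper threshold for v8
z8  <- cut(v8, breaks = c(-Inf, lo, hi, Inf),           # three-level factor
           labels = c("low", "mid", "high"))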
Changing the proportion of spatial variance and herd clustering
To evaluate how the models performed when there was no or little spatial effect on the phenotype, we created scenarios with different proportions of spatial variance relative to the sum of herd effect variance and spatial variance so that the total variation between herds was constant. We kept \(\sigma _{s}^2 + \sigma _{h}^2 = 0.65\), and let \(\sigma _{s}^2/(\sigma _{s}^2+\sigma _h^2) = \{0, 0.2,0.4,0.6,0.8,1\}\). This was repeated for 30 of the data sets.
We also evaluated the importance of how tightly the herds were clustered around village centres. We varied the location variance of the bi-variate distribution for the herd coordinates \({\mathbf {w}}\in {\mathbb {R}}^2\) from \(1.0\cdot 10^{-4} {\mathbf {I}}_{2\times 2}\) (strong clustering), \(3.5\cdot 10^{-4} {\mathbf {I}}_{2\times 2}\) (intermediate clustering) to \(9.0\cdot 10^{-4} {\mathbf {I}}_{2\times 2}\) (weak clustering). This was repeated for each of the 60 data sets.
Real dairy cattle data
We then analysed phenotypic data for 30,314 Brown-Swiss cows from Slovenia, collected between 2004 and 2019 from 2,012 herds. The data included a body conformation measure, year and scorer, cow's age, stage of lactation, year and month of calving, herd and the farm's coordinates. In addition, the data contained a pedigree for 56,465 animals including the phenotyped cows. We analysed the body conformation, which we standardised by subtracting the phenotypic mean and dividing by the phenotypic standard deviation.
The average herd size was approximately 15 cows per herd, and most cows were in herds with more than five animals. To imitate data typical of smallholder systems, with few individuals per herd, we used a subset of the full data. We sampled 3800 individuals without replacement, with sampling probability equal to the inverse herd size, meaning that larger herds had fewer records in the data subset. The subset contained cows from 1838 herds, and the average herd size was about 2 cows per herd. The herds were spread over most of Slovenia (see Additional file 2: Figure S2).
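The inverse-herd-size subsampling can be done in a few lines of R, as sketched below for an assumed data frame dat with one row per cow and a herd column:

herdSize <- table(dat$herd)                       # cows per herd
p   <- 1 / as.numeric(herdSize[dat$herd])         # weight inversely to herd size
idx <- sample(nrow(dat), size = 3800, prob = p)   # 3800 cows, without replacement
sub <- dat[idx, ]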
The following model was fitted to the observed phenotype \(y_i\) of individual \(i=1,...,n\):
$$\begin{aligned} y_i = {\mathbf {x}}_i {\beta } + a_i + h_i + s_i + e_i, \end{aligned}$$
where \({\beta }\) is a vector containing contemporary group effects, including a common intercept, with known covariate vector \({\mathbf {x}}_i\) and \(\beta \sim {\mathcal {N}}(0,\sigma ^2_{\beta })\), \(a_i\) is the additive genetic effect (breeding value), \(h_i\) is the herd effect with \({\mathbf {h}} \sim {\mathcal {N}}({\mathbf {0}}, {\mathbf {I}}\sigma ^2_h)\), \(s_i\) is the spatial effect for the herd at location \({\mathbf {w}}_i \in {\mathbb {R}}^2\) modelled with a Gaussian Markov random field with \({\mu }={\mathbf {0}}\) and Matérn covariance function as given in Eq. (2), and \(e_i\) is a residual effect with \({\mathbf {e}} \sim {\mathcal {N}}({\mathbf {0}}, {\mathbf {I}}\sigma _e^2)\). Although the data generation model (1) and this statistical model (3) are similar, we note that the statistical model is not "aware" of the 10,000 true QTL effects and the eight true spatial processes.
We modelled the genetic effect (breeding value) using a relationship matrix based either on pedigree or genome data. For the pedigree-based model, we assumed \({\mathbf {a}}\sim {\mathcal {N}}({\mathbf {0}}, {\mathbf {A}}\sigma ^2_a)\), where \({\mathbf {A}}\) is the pedigree relationship matrix [40]. We used pedigree for the phenotyped individuals (11th generation), their offspring (12th generation), and three previous generations (8–10th). For the genome-based model, we assumed \({\mathbf {a}}\sim {\mathcal {N}}({\mathbf {0}}, {\mathbf {G}}\sigma ^2_a)\), where \({\mathbf {G}}\) is the genomic relationship matrix calculated from \({\mathbf {G}} = {{\mathbf {Z}}}{{\mathbf {Z}}}^T/k\), \({\mathbf {Z}}\) was a column-centered SNP matrix, and \(k = 2 \Sigma _l q_l (1-q_l)\) with \(q_l\) being allele frequency of marker l [32].
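The genomic relationship matrix can be computed as sketched below from an assumed SNP dosage matrix M (animals in rows, markers coded 0/1/2), following the definition in the text:

q <- colMeans(M) / 2     # allele frequency q_l per marker
Z <- sweep(M, 2, 2 * q)  # column-centred SNP matrix
k <- 2 * sum(q * (1 - q))
G <- tcrossprod(Z) / k   # G = Z Z' / k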
Prior distributions for hyper-parameters
We used a full Bayesian analysis which requires prior distributions for all model parameters. For the intercept and fixed effects, we assumed \(\sigma ^2_{\beta } = 1000\), and for the remaining variance parameters and the spatial range, we assumed penalised complexity priors [41], which are proper priors that penalise model complexity to avoid over-fitting. The penalised complexity prior for variance parameters can be specified through a quantile u and a probability \(\alpha\) which satisfy Prob\((\sigma > u) = \alpha\), and the penalised complexity prior for the spatial range parameter through a quantile u and a probability \(\alpha\) which satisfy Prob\((\rho < u) = \alpha\). For the variances and spatial range, we assumed penalised complexity prior distributions with quantiles u and probabilities \(\alpha\) (Table 1).
Table 1 Parameters u and \(\alpha\) for the penalised complexity priors of hyper-parameters by fitted models to the simulated and real data (see "Prior distributions for hyper-parameters" section)
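In R-INLA notation, a penalised complexity prior on a variance parameter is specified through the corresponding precision, as sketched below; the quantile and probability shown are placeholders standing in for the entries of Table 1.

pcprec <- list(prec = list(prior = "pc.prec",
                           param = c(1, 0.05)))  # P(sigma > 1) = 0.05
# used inside a model formula, e.g. for an iid herd effect:
#   ... + f(herd, model = "iid", hyper = pcprec) + ...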
Fitted models to the simulation data
We fitted five models to the simulated data: G, GH, GS, GHS and GHSC. All models had an intercept \(\beta _0\), a genetic effect \(a_i\), and a residual effect \(e_i\). Model GH had in addition a herd effect \(h_i\), GS had in addition a spatial effect \(s_i\), GHS had in addition both a herd effect and a spatial effect, and GHSC had in addition a herd effect, a spatial effect and the environmental covariates \(z_i\). The models are summarised as:
$$\begin{aligned} \text {G: } y_i &= \beta _0 + a_i + e_i, \\ \text {GH: } y_i &= \beta _0 + a_i + h_i + e_i, \\ \text {GS: } y_i &= \beta _0 + a_i + s_i + e_i, \\ \text {GHS: } y_i &= \beta _0 + a_i + h_i + s_i + e_i, \\ \text {GHSC: } y_i &= \beta _0 + a_i + h_i + s_i + {\mathbf {z}}_i {\beta }_z + e_i, \end{aligned}$$
where \({\mathbf {z}}_i\) is the vector of environmental covariates for individual i and \({\beta }_z \sim {\mathcal {N}}({\mathbf {0}}, 1000 {\mathbf {I}})\) is a vector of environmental covariate effects. The other effects were assumed distributed as described above for Eq. (3).
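A hedged R-INLA sketch of the GHS model formula is shown below; it assumes the spde object and pcprec prior from the earlier sketches, a spatial index sIndex linking observations to mesh nodes, and a pedigree precision matrix Ainv supplied through the "generic0" model. This illustrates the model structure and is not the authors' fitting code.

formulaGHS <- y ~ 1 +
  f(animal, model = "generic0", Cmatrix = Ainv, hyper = pcprec) +  # genetic effect
  f(herd,   model = "iid",      hyper = pcprec) +                  # herd effect
  f(sIndex, model = spde)                                          # spatial effect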
Model evaluation for simulated data
We will refer to the mean posterior genetic effect for phenotyped individuals as the estimated breeding values, and the mean posterior genetic effect for non-phenotyped individuals as the predicted breeding values. We evaluated the models using three measures: first, with the Pearson correlation (accuracy) between the true and estimated/predicted breeding values for all individuals; second, with the Spearman's rank correlation between the true and estimated/predicted breeding values for the top 100 individuals; and third, with the continuous rank probability score (CRPS) [42], comparing the whole posterior distribution of breeding values to the true breeding values. The CRPS compares both the location and spread of the posterior distribution to the true value. The CRPS is negatively oriented, which means that lower CRPS values indicate more accurate predictions.
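The three measures can be computed as sketched below, assuming vectors of true breeding values (tbv), posterior means (ebv) and posterior standard deviations (ebvSd); the CRPS uses the closed form for a Gaussian posterior.

acc  <- cor(tbv, ebv)                                  # Pearson accuracy
top  <- order(tbv, decreasing = TRUE)[1:100]           # top 100 true individuals
rnk  <- cor(tbv[top], ebv[top], method = "spearman")   # rank correlation
crpsGauss <- function(y, mu, sd) {                     # CRPS of N(mu, sd^2) at y
  z <- (y - mu) / sd
  sd * (z * (2 * pnorm(z) - 1) + 2 * dnorm(z) - 1 / sqrt(pi))
}
crps <- mean(crpsGauss(tbv, ebv, ebvSd))               # lower is better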
Fitted models to the real dairy cattle data
We fitted four models to the real dairy cattle data that were structurally the same as models fitted to the simulated data: G, GH, GS, and GHS. The only difference was in fixed effects that are part of the routine genetic evaluation for the analysed trait and population; an intercept \(\beta _0\), three factors (year and scorer, cow's age and stage of lactation, and year and month of calving). The genetic effect was estimated using the available pedigree. For the variances and spatial range, we assumed penalised complexity prior distributions with quantiles u and probabilities \(\alpha\) shown in Table 1.
We used the deviance information criterion (DIC) [43] to compare the fit of the models. The DIC is widely used to compare model fit between different hierarchical Bayesian models while also assessing the model complexity. Lower values of the DIC indicate a better model fit.
For inference, we used the Bayesian numerical approximation procedure known as the Integrated Nested Laplace Approximations (INLA) introduced by [44], with further developments described in [45, 46] and implementation available in the R-INLA package. INLA is suited for the class of latent/hierarchical Gaussian models, which includes generalised linear (mixed) models, generalised additive (mixed) models, spline smoothing methods, and models used in this study. INLA calculates marginal posterior distributions for all model parameters (fixed and random effects, and hyper-parameters) and linear combinations of effects without sampling-based methods such as Markov chain Monte Carlo (MCMC).
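Putting the pieces together, a model fit with INLA requesting the DIC looks as sketched below; datGHS is an assumed data frame (a full SPDE fit would additionally use inla.stack to map observations to mesh nodes, omitted here for brevity).

fit <- inla(formulaGHS, data = datGHS, family = "gaussian",
            control.compute = list(dic = TRUE))
fit$dic$dic             # deviance information criterion (lower is better)
fit$summary.random      # posterior summaries of the random effects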
In this section, we present the results from fitting the models to the simulated and real data. For simulation, we compare accuracy and CRPS of estimated and predicted breeding values for the tested models. For the real data, we present posterior variances, DIC, estimated spatial effects, and how estimated breeding values differ with and without spatial modelling. All results indicate that spatial modelling improves genetic evaluation.
This section presents the results from the simulation study, where the models G, GH, GS, GHS and GHSC were fitted to data with three different levels of genetic connectedness. Overall, the results showed that in smallholder systems (i) spatial modelling increased the accuracy of estimating and predicting breeding values, (ii) environmental covariates did not improve accuracy substantially beyond the distance-based spatial model, (iii) for the models without spatial effects, the accuracy of separating genetic and environmental effects was low, (iv) the benefit of spatial modelling was largest when genetic and environmental effects were strongly confounded, (v) spatial modelling in addition to the independent random herd effect did not decrease accuracy even when there were no spatial effects, and (vi) when environmental and genetic effects were confounded, the accuracy improved when herds were weakly clustered rather than strongly clustered.
Spatial modelling increases accuracy
Spatial modelling increased the accuracy of estimated and predicted breeding values. Table 2 presents the accuracy for all models and genetic connectedness scenarios. Setting the model GHSC aside for later, we observed the highest accuracy with model GHS across all scenarios. The second best was model GS, third was GH, and the worst was G. As expected, genomic data improved the accuracy compared to using pedigree data, and estimated breeding values were more accurate than predicted ones. With weak genetic connectedness, the accuracy was low and comparable between estimation and prediction, and the pedigree models had an accuracy almost as high as the genomic models.
Table 2 Average accuracy of estimated breeding values (EBV) and predicted breeding values (PBV) by genetic connectedness (weak, intermediate and strong) and model with intermediate clustering of herds
Table 3 presents the average CRPS. The trends in the CRPS were the same as for the accuracy, with model GHS having the lowest (best) CRPS. Again, as expected, genomic data improved the CRPS compared to using pedigree data, and in most cases, the average CRPS was lower for estimation than for prediction, but in some cases the average CRPS for prediction was slightly lower than for estimation. This improved CRPS for prediction was observed for models that did not model environmental variation and had lower accuracy (Table 2), so the lower (better) CRPS indicates that those models underestimated prediction uncertainty.
Table 3 Average CRPS of estimated breeding values (EBV) and predicted breeding values (PBV) by genetic connectedness (weak, intermediate and strong) and model with intermediate clustering of herds
The rank correlations for the top 100 individuals were in line with accuracy (Table 2) and CRPS (Table 3) results for all individuals. We show this in Additional file 3: Table S1. These results show that spatial modelling (models GS, GHS and GHSC) improved accuracy of ranking the top individuals compared to no spatial modelling (models G and GH).
Including environmental covariates
The environmental covariates did not improve the results substantially beyond the simple distance-based relationships between herds. This is shown for accuracy in Table 2 and CRPS in Table 3. The accuracy and CRPS were only marginally better for the GHSC model compared to the GHS model in some cases, and in the remaining cases, they were comparable. Because of this, we focused on the sufficient models and excluded model GHSC in the remaining results. Some additional results with model GHSC are given in Additional file 3.
Separating genetic and spatial (environmental) effects
The models without spatial effects were not able to accurately separate genetic and spatial (environmental) effects. In Table 4, we present the correlations between the estimated breeding values and the true spatial effects by model and genetic connectedness. Models G and GH had a high correlation, which suggests that estimated breeding values captured parts of the spatial effects. Models GS and GHS had correlations closer to zero, which suggests that these models separated genetic and spatial effects more accurately. This, together with the correlation results in Table 2 and CRPS results in Table 3, suggests that the herd effect alone is not sufficient to account for all environmental effects in smallholder systems.
Table 4 Average correlation between estimated breeding values and true spatial effect by genetic connectedness (weak, intermediate and strong) and model
Comparing genetic connectedness scenarios and genetic models
The benefit of spatial modelling was largest when spatial and genetic effects were difficult to separate. In Additional file 2: Figure S1, we show the relative improvement in accuracy and CRPS between models GH and GHS by genetic connectedness. With both the genome and pedigree data, the improvement was largest with weak genetic connectedness (about 50% to 80%), second with intermediate genetic connectedness (about 35% to 65%), and third with strong genetic connectedness (about 20% to 45%). These settings range between strongly confounded genetic and spatial effects, to separable genetic and spatial effects. With weak genetic connectedness, there was little difference in improvement between models using genomic or pedigree data, whereas with intermediate and strong genetic connectedness there was a tendency for the improvement to be largest with the pedigree data.
Changing proportion of spatial variance
Spatial modelling in addition to an independent random herd effect did not decrease the accuracy, even when there were no spatial effects. In Fig. 2, we present the accuracy and CRPS for estimated breeding values when using genomic data under intermediate genetic connectedness. The x-axis goes from all environmental variance covered by herd effects to all covered by spatial effects. For models G and GH, the accuracy and CRPS worsened as the proportion of spatial variance increased, whereas for models GS and GHS the accuracy and CRPS improved. Overall, model GHS had the highest accuracy and lowest (best) CRPS for all spatial variance proportions. It was as good as model GH when there was no spatial variation and as model GS when there was no herd effect variation.
Average accuracy (a) and CRPS (smaller is better) (b) with 95% confidence intervals for estimated breeding values by proportion of spatial variance in the sum of spatial and herd variance in the scenario with intermediate genetic connectedness and using the genomic model
From the results so far, we have seen that model GS had better accuracy and CRPS than model GH. However, this is not always the case. When most of the environmental variation was due to herd effects rather than spatial effects, model GH gave better estimates than model GS.
The same tendencies were seen for the predicted breeding values for both genomic and pedigree-based models, and in other genetic connectedness scenarios, as shown in the tables presented in Additional file 3.
Changing the herd clustering
When spatial and genetic effects were confounded, the accuracy of estimation improved when herds were weakly clustered rather than strongly clustered. When simulating the data, we varied the distribution of herd locations, from strongly clustered to less clustered around each village centre. In Fig. 3, we present the accuracy and CRPS for estimated breeding values using genomic data under weak genetic connectedness for the three clustering levels. Figure 3 shows that as herds were less clustered, the accuracy and CRPS improved across all models. We observed the same trend for predicted breeding values and using pedigree data, but not with intermediate and strong genetic connectedness, where the genetic and spatial effects were less confounded. Tables showing the accuracy and CRPS between true and inferred breeding values and the correlation between inferred breeding values and the true spatial effects for all levels of genetic connectedness and herd clustering are in Additional file 3.
Average accuracy (a) and CRPS (smaller is better) (b) with 95% confidence intervals by model and herd clustering in the scenario with weak genetic connectedness and using the genomic model
In this section, we present the results from fitting the models to the subset of real dairy cattle data. We present the posterior distributions of the hyper-parameters, the DIC, the estimated spatial field from model GHS, and compare the estimated breeding values from models GH and GHS. The corresponding results for the full data set are in Additional file 2 and Additional file 3: Table S15. Overall, the results showed that (i) models GH and GHS explained most of the variation in the data and had the best fit, (ii) the data had a spatially dependent structure captured by models GS and GHS, and (iii) the two models with the best fit, GH and GHS, separated the genetic and environmental effects differently for animals living in areas with relatively large spatial effects.
Explained variation and model fit
Models GH and GHS explained most of the variation in the data and had the best fit according to DIC. In Fig. 4, we show the posterior distributions for the model hyper-parameters. Figure 4 has five panels showing additive genetic variance \(\sigma _a^2\), residual variance \(\sigma _e^2\), herd effect variance \(\sigma _h^2\), spatial variance \(\sigma _s^2\), and spatial range \(\rho\) in km.
Posterior distributions of hyper-parameters from models G, GH, GS and GHS fitted to the real data
The posterior additive genetic variance was similar between models GH and GHS, larger in model GS, and even larger in model G. The same tendency was seen for the posterior residual variance. The posterior herd effect variance was smaller in model GHS than in model GH, which was reasonable since the herd effect in model GH captured the spatial component of the phenotype, which model GHS assigned to the spatial effect. The posterior spatial variance in model GS was larger than in model GHS since model GS captured herd effects. Finally, the posterior spatial range was smaller in model GS than in model GHS, since model GS captured herd effects in the spatial effects, which implies a shorter range of dependency between spatial locations. The mean posterior range from model GHS indicated that herds more than 22 km apart had close to independent (large-scale) environments.
Since model G cannot separate variation due to herd or other environmental effects, it is possible that some of the estimated genetic effects were confounded with other effects, which explains the high estimate found for the additive genetic variance with this model. A similar reasoning could be used for model GS, which assigned variation due to herd effects, either to genetic, spatial or residual effects. From Fig. 4 it seems that the variation from herd effects was distributed to all other effects, which explains why the estimated additive genetic variance and estimated residual variance were larger in model GS than in models GH and GHS, and why the estimated spatial variance was larger than in model GHS. It seems that models GH and GHS distributed variation similarly except for the herd effect, which is expected to be higher in model GH than in model GHS.
Table 5 shows the DIC for each model and indicates that model GHS had the best fit, followed by model GH, then model GS and finally model G. These numbers are in line with the estimated hyper-parameters, which showed that models GHS and GH could explain most of the variation in the phenotype. Although model GS also has the potential to explain much of the variation, it is forced to assign herd effects either to genetic or spatial effects. We saw from the results with the simulated data that model GS had a worse model fit than model GH when most of the environmental variation was due to herd effects, which seems to be the case here considering the small posterior spatial variance. Finally, model G was not able to separate genetic and environmental effects, which leads to a poor model fit. A rule of thumb is that a complex model should be preferred over a less complex model if the DIC is reduced by more than ten units. When it comes to choosing between models GH and GHS, model GHS should be preferred, as its DIC was 36 units smaller.
Table 5 Deviance information criterion (DIC) by model fitted to the real data
The estimated spatial effects
The data had a spatially dependent structure captured by models GS and GHS, and the estimated spatial field from model GHS is shown in Fig. 5. Figure 5 shows the estimated mean (posterior mean), in panel (a), and uncertainty (posterior standard deviation) in panel (b). The axes show coordinates in the Transverse Mercator coordinate system in km using datum WGS84.
Posterior mean (a) and standard deviation (b) of the estimated spatial effect (in units of posterior spatial standard deviation) from model GHS fitted to the real data—the axis units are in km
In the western part of Slovenia, model GHS suggests two environmental regions with a mean different from zero, one with a positive effect, and one with a negative effect. In the central part of Slovenia, there are several smaller regions with either a positive or negative effect. In the northeast part of Slovenia, there were not many observations, so there is only a small region with a positive effect, and zero effects otherwise. These estimates are in line with the natural geographic conditions in Slovenia. The magnitude of these spatial effects ranges from \(-\,2.2\) to 1.7 posterior spatial standard deviations. The uncertainty was lowest where observations were available and was highest where there were no observations.
Comparing breeding values from models GH and GHS
The two models with the best fit, models GH and GHS, separated the genetic and environmental effects differently for animals living in areas with relatively large spatial effects.
The DIC in Table 5 and the estimated hyper-parameters in Fig. 4 indicated that models GH and GHS had the best model fit and a similar decomposition of the genetic and environmental variation. Furthermore, the estimated breeding values from models GH and GHS were highly correlated, with a correlation of about 0.995.
To evaluate how well models separated genetic and environmental effects, we computed the correlation between estimated breeding values from models GH and GHS with estimated spatial effects from model GHS. For model GH, this correlation was about 0.14, whereas for model GHS it was about 0.07. This suggests that there were some effects that were assigned as genetic effects in model GH, but assigned as spatial effects in model GHS.
Figure 6 presents the differences in estimated breeding values between models GH and GHS as boxplots according to the estimated spatial effects from model GHS. This shows that the difference was correlated with the spatial effect from model GHS. When estimated spatial effects were negative, estimated breeding values from model GH were smaller than those from model GHS. When estimated spatial effects were positive, estimated breeding values from model GH were larger than those from model GHS. The magnitude of the difference ranged from \(-\,0.2\) to 0.2 posterior genetic standard deviations, which indicates confounding for animals living in areas with large spatial effects. The figure also shows how many cows were used in each boxplot, which indicates that, for a majority of the cows, the difference in estimated breeding values was not large.
The difference in estimated breeding values (in units of posterior genetic standard deviation) between models GH and GHS by the estimated spatial effect (in units of posterior spatial standard deviation) from model GHS fitted to the real data
The correlation between differences in estimated breeding values and estimated spatial effects from model GHS was about 0.62. This is in line with what we saw from the simulation results, and suggests that although the two models had highly correlated estimated breeding values, there were differences between estimated breeding values for animals in regions with large spatial effects.
We also compared the top 10 and 20 ranked cows and bulls from the models GH and GHS, to see if a difference in estimated breeding value influenced ranking. We found that the difference was not critical for ranking since the top cows and bulls were present in areas with relatively small spatial effects. For the cows, we had an overlap of 7 (18) cows when comparing the top 10 (20) from each model. For the bulls, we had an overlap of 9 (18) bulls when comparing the top 10 (20).
The results show that spatial modelling improves genetic evaluation in smallholder systems. In particular, it increases the accuracy of genetic evaluation under weak genetic connectedness by establishing environmental connectedness, and with this, more accurate separation of genetic and environmental effects. These observations highlight two broad points for discussion: (i) why does spatial modelling improve genetic evaluation and (ii) what are the limitations of this study and future possibilities.
Why spatial modelling improves genetic evaluation
Spatial modelling improves genetic evaluation because it separates environmental variation that is common to nearby herds more accurately from the other effects on the phenotype. Since spatial effects are estimated jointly for all herds and other effects, this induces environmental connectedness and, in turn, enhances separation of environmental and genetic effects. Animal breeders are very aware of the data structure that is required for accurate genetic evaluation [7,8,9] and there are formal methods to assess genetic connectedness between contemporary groups [14, 15, 47,48,49]. An interesting future work would be to extend these methods to account for environmental connectedness. Achieving sufficient genetic connectedness is particularly difficult when contemporary groups are small and there is limited genetic connectedness between them.
A way to increase genetic connectedness is to use genomic data, although this was not sufficient in our case. Using genomic data reveals more genetic connectedness than pedigree data because animals likely share at least some alleles, and this has been shown to increase the accuracy of genetic evaluation [7, 50, 51]. However, our targeted setting consisted of smallholder herds, which are an extreme case of challenging data structure for genetic evaluation. Furthermore, we varied genetic connectedness between herds and villages. We found that across all genetic connectedness scenarios, spatial modelling increased accuracy more than using genomic data instead of pedigree data. Furthermore, with the weakest genetic connectedness, genomic data was not effective at all, while spatial modelling was. This is, in a way, not surprising because our herds were so small that we had strong confounding between genetic and environmental effects, as well as weak genetic connectedness. Genomic data could not separate genetic and environmental effects, since herds were too small for accurate estimation of their effect, even with random effects. In this case, spatial modelling, at least environmentally, connected nearby herds and created effective contemporary groups. These results show that in addition to genomics other tools are also needed to improve smallholder systems [52]. As expected, accuracy was low in this extreme setting, although surprisingly not very low (see the next sub-section on possible reasons). These scenarios might seem too extreme, but they are a reflection of real situations in many countries around the world, e.g. [11].
Spatial modelling has a long tradition and has already been used in animal breeding, e.g. [23, 53]. We have used it in the extreme scenario of small herds, and for this reason we used the geostatistical approach that accounts for the fine-grained herd coordinate information. An alternative approach could be to cluster herds into village groups and possibly further cluster villages into region groups. In this case, we could model the village groups as an independent fixed or random effect to account for small-scale environmental (management) effects, and possibly further model the region groups as a dependent random effect accounting for covariance between neighbouring regions to account for large-scale environmental effects [24, 25]. An issue with this approach is that we lose the ability to model each individual herd, and that administrative regions often do not correctly represent geography and other environmental effects. Given that the clustering approach has trade-offs, that there are efficient geostatistical models that adapt to data, and that efficient and easy-to-use implementations exist, we recommend the use of geostatistical models.
We recommend routine use of spatial modelling in quantitative genetic models. Namely, collected data will always come from some area with likely variation in environmental effects. Our results show that spatial modelling is robust even when there is no spatial variation. The observed gains from this study will likely be smaller in cases with larger herds, but even in those cases, spatial modelling can induce environmental connectedness, and it can also provide estimates of spatial effects. These estimates could be used to target interventions or policies. Importantly, our analysis of simulated and real data indicates that spatial modelling can separate environmental and genetic effects more accurately. Such modelling improvements will also be very useful beyond animal breeding populations; for example, in quantitative genetic analyses of human populations and wild populations. These populations also have similarly challenging data structure with rampant population structure (genetic disconnectedness) [54, 55] and the existence of biases in estimated genetic effects in line with geographic variation has been reported [56].
In line with the potential of spatial modelling to account for spatial variation, we recommend a geographically broad collection of data to train robust models. Genomics is revolutionising breeding in developed and developing countries [6, 7, 32]. To deliver its full potential, breeding organisations should ensure broad geographic coverage when collecting data. This will avoid bias towards a specific region, in particular with genomic prediction. Spatial modelling can account for variation between and within regions, but it needs data from the regions to estimate optimal model parameters.
In relation to data collection guidance, we were surprised to find that environmental covariates did not improve the accuracy of genetic evaluation beyond simple distance-based relationships between herds. Here, we simulated the total spatial effect as a sum of eight spatial processes with a range of model parameters that made the processes quite different and we assumed that we could observe these with some noise. Our hypothesis was that modelling the observed environmental covariates would reveal the underlying spatial processes and increase accuracy in the same way that the use of genomic data reveals the underlying genetic process behind the pedigree expectations [32, 33]. There are at least three possible explanations for this. First, we simulated a small number of spatial processes, and the distance-based relationships were sufficient to model spatial variation. Second, the noise in observations was larger than the signal or our data set was too small to capture the signal. Third, the two-dimensional form of the space constrains the value of environmental covariates for increasing accuracy beyond the distance-based relationships. More studies are needed to address this question.
The limitations of this study and future possibilities
There is a huge number of possible scenarios and parameter combinations that we could have tested. For example, we assumed the absence of non-additive genetic effects, genotype-by-environment interaction, data errors, heterogeneous variances and considered only a single trait and breed. Furthermore, the animals were initially distributed to herds randomly, and the farms using artificial insemination were chosen randomly. Such simplifications are likely to yield higher accuracies than expected in real smallholder systems. However, the analysis of real data corroborates the main conclusions from the simulations. Future studies could, for example, consider non-random distribution of animals among herds as well as the use of artificial insemination and the best bulls. These non-random associations are real since well-resourced farmers are more likely to use artificial insemination and the best bulls [22]. With the real data analysis, we tried to mimic a smallholder setting by using only a subset of the data. However, it should be noted that this data has a much higher level of artificial insemination than most smallholder systems, even in the strong genetic connectedness scenario in our simulation.
Genotype-by-environment interactions have been modelled in several studies [53, 57,58,59,60] and such interactions are likely to be substantial in smallholder systems, in particular when native and exotic breeds are used [6]. We ignored these interactions in our study. Of particular notice regarding these interactions and in relation to our work is the study of [53]. They used geographical location and weather data in addition to herd summaries to describe environmental conditions in genetic evaluations, with and without genotype-by-environment interactions and concluded that the farming environment explained variation in the data, as well as the genotype-by-environment component. Further work is needed to embrace the rich set of tools from the spatial statistics community to address genotype-by-environment interactions [61, 62].
Yet another important source of phenotypic variation that we ignored is heterogeneous variances, which are also likely to be substantial in smallholder systems. There are multiple models and methods used by breeders and geneticists to account for such variation, e.g. [63,64,65]. We note that there is also a rich spatial literature on models that can deal with non-stationarity in dependency and variance, e.g. [29, 30, 66,67,68], which, for example, could enable the modelling of directional dependence based on local anisotropy, e.g. [69]. Using and benefitting from non-stationary models can be challenging due to computational costs and the amount of data needed to fit these models [70]. However, this will become increasingly possible and desired as data sets increase in size with the progression of the digital revolution in agriculture and as more computationally efficient methods become available.
Breeding programmes interested in spatial modelling will have to invest in software modification. This is not a limitation of this study, but interested breeding programmes would either have to use the R-INLA package [44] or implement an extension of their existing software. While the R-INLA package is a mature project, it does not support all animal breeding models, most notably multi-trait models. However, it handles a rich set of likelihoods (Gaussian, Poisson, Bernoulli, Weibull, etc.), link functions, independent or correlated random effects (time-series, regions, points, generic such as pedigree, etc.) and priors. It uses the same key underlying linear algebra routines as standard genetic evaluation software [25, 71,72,73], and enables both full Bayesian analysis with a fast and very accurate approximate algorithm [74] and even faster empirical Bayesian analysis. We have used the R-INLA package extensively for standard quantitative genetic studies [75,76,77], accounting for selection [78], spatial modelling of plant and tree trials [79] and for modelling of phenotypes on phylogeny [80]. While the R-INLA package is fast for models with a sparse structure (time-series, spatial regions or points, and pedigree), it does not fare well for genomic models that have a dense structure [32, 33]. However, use of recently proposed approximate genomic models [34, 35] and sparse-dense libraries would help [81, 82]. A simple alternative for spatial modelling with standard software such as [83, 84] would be to force the setup and inversion of the spatial covariance matrix using a Gaussian model. This would suffice for a few thousand well-dispersed herds, but might lead to numeric issues with nearby herds (near matrix singularity) or with much larger numbers of herds, which will soon become a reality with the digital revolution of agriculture.
Furthermore, since INLA performs a full Bayesian analysis, the user has to set prior distributions for all model parameters. This is not always straightforward, but setting a prior based on knowledge about the process is likely to improve inference substantially, particularly when data are sparse. There are a number of ways to set mildly informative priors. We used penalised complexity priors [41], since these avoid over-fitting and can accommodate prior knowledge about the relative importance of different effects [85, 86].
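For intuition, the penalised complexity prior of [41] for a random-effect standard deviation $\sigma$ reduces to an exponential distribution whose rate is fixed by a single probability statement, $P(\sigma > u) = \alpha$; a small sketch (with arbitrary $u$ and $\alpha$):

```python
import numpy as np

def pc_prior_sd(sigma, u, alpha):
    """PC prior density for a standard deviation, defined via P(sigma > u) = alpha."""
    lam = -np.log(alpha) / u          # rate implied by the probability statement
    return lam * np.exp(-lam * sigma)  # exponential density on sigma

sigma = np.linspace(0.0, 3.0, 301)
density = pc_prior_sd(sigma, u=1.0, alpha=0.05)  # prior belief: P(sigma > 1) = 0.05
```

The single interpretable statement "$\sigma$ exceeds $u$ with probability $\alpha$" is what makes these priors easy to elicit from breeders' prior knowledge.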
The take-home message from this study is that spatial modelling can improve genetic evaluation in smallholder systems by inducing environmental connectedness, and with this can enhance separation of genetic and environmental effects beyond an independent herd effect. We have demonstrated this with simulated data with different levels of genetic connectedness, proportions of spatial to management (herd) variation, herd clustering and pedigree or genomic modelling. These results have to be further corroborated with a range of smallholder datasets for which we also have to account for multiple breeds and their crosses, genotype-by-environment interactions and heterogeneous variances. We expected that environmental covariates would improve spatial modelling following the analogy of genetic modelling with observed genomic versus expected pedigree data, but this was not the case in our simulations. Based on all these results, we suggest routine spatial modelling in genetic evaluations, particularly for smallholder systems. Spatial modelling could also have a major impact in studies of human and wild populations.
The scripts for data simulation and model fitting are available in Additional file 1. The real data are owned by the Slovenian Brown-Swiss breeding programme and were prepared for this study by Jana Obšteter (Agricultural Institute of Slovenia) and Barbara Luštrek (University of Ljubljana).
Weigel KA, VanRaden PM, Norman HD, Grosu H. A 100-year review: methods and impact of genetic selection in dairy cattle-from daughter-dam comparisons to deep learning algorithms. J Dairy Sci. 2017;100:10234–50.
Dekkers JC, Hospital F. Multifactorial genetics: the use of molecular genetics in the improvement of agricultural populations. Nat Rev Genet. 2002;3:22–32.
Rademaker CJ, Bebe BO, van der Lee J, Kilelu C, Tonui C. Sustainable growth of the Kenyan dairy sector: a quick scan of robustness, reliability and resilience. Wageningen University & Research; 2016. https://library.wur.nl/WebQuery/wurpubs/508760. Accessed 16 Aug 2020.
Philipsson J, Zonabend E, Bett RC, Okeyo AM. Global perspectives on animal genetic resources for sustainable agriculture and food production in the tropics. In: Ojango M, Malmfors B, Okeyo AM, editors. Animal genetics training resource, version 3. Nairobi: University of Nairobi; 2011. https://cgspace.cgiar.org/bitstream/handle/10568/3665/Module1.pdf?sequence=5. Accessed 16 Aug 2020.
Majiwa EB, Kavoi MM, Murage H. Smallholder dairying in Kenya: the assessment of the technical efficiency using the stochastic production frontier model. J Agric Sci Technol. 2017;14:3–16.
Ojango JM, Mrode R, Rege JEO, Mujibi D, Strucken EM, Gibson J, et al. Genetic evaluation of test-day milk yields from smallholder dairy production systems in Kenya using genomic relationships. J Dairy Sci. 2019;102:5266–78.
Powell O, Mrode R, Gaynor RC, Johnsson M, Gorjanc G, Hickey JM. Genomic data enables genetic evaluation using data recorded on low-middle income country smallholder dairy farms. bioRxiv. 2019. https://doi.org/10.1101/827956.
Foulley JL, Bouix J, Goffinet B, Elsen JM. Connectedness in genetic evaluation. In: Gianola D, Hammond K, editors. Advances in statistical methods for genetic improvement of livestock. Advanced Series in Agricultural Sciences, vol. 18. Berlin: Springer; 1990. p. 277–308.
Jorjani H, Philipsson J, Mocquot JC. Interbull guidelines for national and international genetic evaluation systems in dairy cattle with focus on production traits. Interbull Bull. 2001;28:1–27.
Chawala AR, Mwai AO, Peters A, Banos G, Chagunda GG. Towards a better understanding of breeding objectives and production performance of dairy cattle in sub-Saharan Africa: a systematic review and meta-analysis. CAB Rev. 2020;15:1–15.
Lawrence F, Mutembei H, Lagat J, Mburu J, Amimo J, Okeyo AM, et al. Constraints to use of breeding services in Kenya. Inter J Vet Sci. 2015;4:211–5.
Bebe BO, Udo HM, Rowlands GJ, Thorpe W. Smallholder dairy systems in the Kenya highlands: breed preferences and breeding practices. Livest Prod Sci. 2003;82:117–27.
Baltenweck I, Ouma R, Anunda F, Okeyo Mwai A, Romney D. Artificial or natural insemination: the demand for breeding services by smallholders. In: Proceedings of the 9th KARI Biennial scientific conference and research week, 8–12 November 2004, Nairobi; 2004.
Kennedy BW, Trus D. Considerations on genetic connectedness between management units under an animal model. J Anim Sci. 1993;71:2341–52.
Laloë D. Precision and information in linear models of genetic evaluation. Genet Sel Evol. 1993;25:557–76.
Laloë D, Phocas F. A proposal of criteria of robustness analysis in genetic evaluation. Livest Prod Sci. 2003;80:241–56.
Henderson CR. Applications of linear models in animal breeding. Guelph: University of Guelph; 1984.
Visscher PM, Goddard ME. Fixed and random contemporary groups. J Dairy Sci. 1993;76:1444–54.
Pereira RJ, Schenkel FS, Ventura RV, Ayres DR, El Faro L, Machado CHC, et al. Contemporary group alternatives for genetic evaluation of milk yield in small populations of dairy cattle. Anim Prod Sci. 2019;59:1022–30.
Mrode RA. Linear models for the prediction of animal breeding values. 3rd ed. Wallingford: CAB International; 2014.
Frey M, Hofer A, Künzi N. Comparison of models with a fixed or a random contemporary group effect for the genetic evaluation for litter size in pigs. Livest Prod Sci. 1997;48:135–41.
Schaeffer LR. Necessary changes to improve animal models. J Anim Breed Genet. 2018;135:124–31.
Sæbø S, Frigessi A. A genetic and spatial Bayesian analysis of mastitis resistance. Genet Sel Evol. 2004;36:527–42.
Besag J. Spatial interaction and the statistical analysis of lattice systems. J R Stat Soc Ser B Stat Methodol. 1974;36:192–236.
Rue H, Held L. Gaussian Markov random fields: theory and applications. 1st ed. Boca Raton: Chapman and Hall/CRC; 2005.
Gelfand AE, Diggle P, Guttorp P, Fuentes M. Handbook of spatial statistics. 1st ed. Boca Raton: CRC Press; 2010.
Cressie NAC. Statistics for spatial data. Revised ed. New York: Wiley; 2015.
Cressie N, Wikle CK. Statistics for spatio-temporal data. 1st ed. New York: Wiley; 2011.
Lindgren F, Rue H, Lindström J. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. J R Stat Soc Ser B Stat Methodol. 2011;73:423–98.
Ingebrigtsen R, Lindgren F, Steinsland I. Spatial models with explanatory variables in the dependence structure. Spat Stat. 2014;8:20–38.
Matérn B. Spatial variation: stochastic models and their application to some problems in forest surveys and other sampling investigations. Meddelanden från Statens Skogsforskningsintitut. 1960;49:1–144.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
Gorjanc G, Whalen A, Hickey JM. Modelling segmental inheritance of complex traits in pedigreed and genotyped populations. In: Proceedings of the 11th world congress on genetics applied to livestock production, 11–16 February 2018, Auckland; 2018.
Misztal I, Legarra A, Aguilar I. Using recursion to compute the inverse of the genomic relationship matrix. J Dairy Sci. 2014;97:3943–52.
Misztal I. Inexpensive computation of the inverse of the genomic relationship matrix in populations with small effective population size. Genetics. 2016;202:401–9.
MacLeod IM, Larkin DM, Lewin HA, Hayes BJ, Goddard ME. Inferring demography from runs of homozygosity in whole-genome sequence, with correction for sequence errors. Mol Biol Evol. 2013;30:2209–23.
Chen GK, Marjoram P, Wall JD. Fast and flexible simulation of DNA sequence data. Genome Res. 2009;19:136–42.
Faux AM, Gorjanc G, Gaynor RC, Battagin M, Edwards SM, Wilson DL, et al. AlphaSim: software for breeding program simulation. Plant Genome. 2016;9:1–14.
Gaynor RC, Gorjanc G, Hickey JM. AlphaSimR: an R-package for breeding program simulations. bioRxiv. 2020. https://doi.org/10.1101/2020.08.10.245167.
Lynch M, Walsh B. Genetics and analysis of quantitative traits. 1st ed. Sunderland: Sinauer Associates Inc.; 1998.
Simpson D, Rue H, Riebler A, Martins TG, Sørbye SH. Penalising model component complexity: a principled, practical approach to constructing priors. Stat Sci. 2017;32:1–28.
Gneiting T, Raftery AE. Strictly proper scoring rules, prediction, and estimation. J Am Stat Assoc. 2007;102:359–78.
Spiegelhalter DJ, Best NG, Carlin BP, van der Linde A. Bayesian measures of model complexity and fit. J R Stat Soc Ser B Stat Methodol. 2002;64:583–639.
Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Ser B Stat Methodol. 2009;71:319–92.
Martins TG, Simpson D, Lindgren F, Rue H. Bayesian computing with INLA: new features. Comput Stat Data Anal. 2013;67:68–83.
Rue H, Riebler A, Sørbye SH, Illian JB, Simpson DP, Lindgren FK. Bayesian computing with INLA: a review. Annu Rev Stat Appl. 2017;4:395–421.
Foulley JL, Hanocq E, Boichard D. A criterion for measuring the degree of connectedness in linear models of genetic evaluation. Genet Sel Evol. 1992;24:315–30.
Laloë D, Phocas F, Menissier F. Considerations on measures of precision and connectedness in mixed linear models of genetic evaluation. Genet Sel Evol. 1996;28:359–78.
Yu H, Morota G. GCA: an R package for genetic connectedness analysis using pedigree and genomic data. bioRxiv. 2019. https://doi.org/10.1101/696419.
Yu H, Spangler ML, Lewis RM, Morota G. Genomic relatedness strengthens genetic connectedness across management units. G3 (Bethesda). 2017;7:3543–56.
Yu H, Spangler ML, Lewis RM, Morota G. Do stronger measures of genomic connectedness enhance prediction accuracies across management units? J Anim Sci. 2018;96:4490–500.
Muchadeyi FC, Ibeagha-Awemu EM, Javaremi AN, Gutierrez Reynoso GA, Mwacharo JM, Rothschild MF, et al. Editorial: why livestock genomics for developing countries offers opportunities for success. Front Genet. 2020;11:626.
Tiezzi F, de Los Campos G, Gaddis KP, Maltecca C. Genotype by environment (climate) interaction improves genomic prediction for production traits in US Holstein cattle. J Dairy Sci. 2017;100:2042–56.
Barton N, Hermisson J, Nordborg M. Why structure matters. Elife. 2019;8:e45380.
Charmantier A, Garant D, Kruuk LE. Quantitative genetics in the wild. 1st ed. Oxford: Oxford University Press; 2014.
Kerminen S, Martin AR, Koskela J, Ruotsalainen SE, Havulinna AS, Surakka I, et al. Geographic variation and bias in the polygenic scores of complex diseases and traits in Finland. Am J Hum Genet. 2019;104:1169–81.
Strandberg E, Brotherstone S, Wall E, Coffey M. Genotype by environment interaction for first-lactation female fertility traits in UK dairy cattle. J Dairy Sci. 2009;92:3437–46.
Hayes BJ, Bowman PJ, Chamberlain AJ, Savin K, Van Tassell CP, Sonstegard TS, et al. A validated genome wide association study to breed cattle adapted to an environment altered by climate change. PLoS One. 2009;4:e6676.
Yao C, De Los Campos G, VandeHaar MJ, Spurlock DM, Armentano LE, Coffey M, et al. Use of genotype × environment interaction model to accommodate genetic heterogeneity for residual feed intake, dry matter intake, net energy in milk, and metabolic body weight in dairy cattle. J Dairy Sci. 2017;100:2007–16.
Schultz NE, Weigel KA. Inclusion of herdmate data improves genomic prediction for milk-production and feed-efficiency traits within North American dairy herds. J Dairy Sci. 2019;102:11081–91.
Heaton MJ, Datta A, Finley AO, Furrer R, Guinness J, Guhaniyogi R, et al. A case study competition among methods for analyzing large spatial data. J Agric Biol Environ Stat. 2019;24:398–425.
van Niekerk J, Bakka H, Rue H, Schenk O. New frontiers in Bayesian modeling using the INLA package in R. 2019. arXiv:1907.10426.
Wiggans GR, VanRaden PM. Method and effect of adjustment for heterogeneous variance. J Dairy Sci. 1991;74:4350–7.
Visscher PM, Hill WG. Heterogeneity of variance and dairy cattle breeding. Anim Sci. 1992;55:321–9.
Meuwissen THE, De Jong G, Engel B. Joint estimation of breeding values and heterogeneous variances of large data files. J Dairy Sci. 1996;79:310–6.
Sampson PD, Guttorp P. Nonparametric estimation of nonstationary spatial covariance structure. J Am Stat Assoc. 1992;87:108–19.
Fuentes M. A high frequency Kriging approach for non-stationary environmental processes. Environmetrics. 2001;12:469–83.
Higdon D. Space and space-time modeling using process convolutions. In: Anderson CW, Barnett V, Chatwin PC, El-Shaarawi AH, editors. Quantitative methods for current environmental issues. London: Springer; 2002. p. 37–56.
Fuglstad GA, Lindgren F, Simpson D, Rue H. Exploring a new class of non-stationary spatial Gaussian random fields with varying local anisotropy. Stat Sin. 2015;25:115–33.
Fuglstad GA, Simpson D, Lindgren F, Rue H. Does non-stationary spatial data always require non-stationary random fields? Spat Stat. 2015;14:505–31.
Takahashi K. Formation of sparse bus impedance matrix and its application to short circuit study. In: Proceedings of the 8th PICA conference, 3–6 June 1973, Minneapolis; 1973.
De Coninck A, De Baets B, Kourounis D, Verbosio F, Schenk O, Maenhout S, et al. Needles: toward large-scale genomic prediction with marker-by-environment interaction. Genetics. 2016;203:543–55.
Verbosio F, De Coninck A, Kourounis D, Schenk O. Enhancing the scalability of selected inversion factorization algorithms in genomic prediction. J Comput Sci. 2017;22:99–108.
Rue H, Martino S. Approximate Bayesian inference for hierarchical Gaussian Markov random field models. J Stat Plan Inference. 2007;137:3177–92.
Holand AM, Steinsland I, Martino S, Jensen H. Animal models and integrated nested Laplace approximations. G3 (Bethesda). 2013;3:1241–51.
Larsen CT, Holand AM, Jensen H, Steinsland I, Roulin A. On estimation and identifiability issues of sex-linked inheritance with a case study of pigmentation in Swiss barn owl (Tyto alba). Ecol Evol. 2014;4:1555–66.
Muff S, Niskanen AK, Saatoglu D, Keller LF, Jensen H. Animal models with group-specific additive genetic variances: extending genetic group models. Genet Sel Evol. 2019;51:7.
Steinsland I, Larsen CT, Roulin A, Jensen H. Quantitative genetic modeling and inference in the presence of nonignorable missing data. Evolution. 2014;68:1735–47.
Selle ML, Steinsland I, Hickey JM, Gorjanc G. Flexible modelling of spatial variation in agricultural field trials with the R package INLA. Theor Appl Genet. 2019;132:3277–93.
Selle ML, Steinsland I, Lindgren F, Brajkovic V, Cubric-Curik V, Gorjanc G. Hierarchical modeling of haplotype effects based on a phylogeny. bioRxiv. 2020. https://doi.org/10.1101/2020.01.31.928390.
Masuda Y, Baba T, Suzuki M. Application of supernodal sparse factorization and inversion to the estimation of (co) variance components by residual maximum likelihood. J Anim Breed Genet. 2014;131:227–36.
Masuda Y, Aguilar I, Tsuruta S, Misztal I. Acceleration of sparse operations for average-information REML analyses with supernodal methods and sparse-storage refinements. J Anim Sci. 2015;93:4670–74.
Misztal I, Tsuruta S, Lourenco DAL, Masuda Y, Aguilar I, Legarra A, et al. Manual for BLUPF90 family programs. 2018. http://nce.ads.uga.edu/wiki/doku.php?id=documentation. Accessed 16 Aug 2020.
Butler D, Cullis BR, Gilmour A, Gogel B. ASReml-R reference manual. Brisbane: The State of Queensland, Department of Primary Industries and Fisheries; 2009.
Fuglstad GA, Hem IG, Knight A, Rue H, Riebler A, et al. Intuitive joint priors for variance parameters. Bayesian Anal. 2020. https://doi.org/10.1214/19-BA1185.
Hem IG, Selle ML, Gorjanc G, Fuglstad GA, Riebler A. Robust genomic modelling using expert knowledge about additive, dominance and epistasis variation. bioRxiv. 2020. https://doi.org/10.1101/2020.04.01.019497.
MLS and IS acknowledge support from The Research Council of Norway, Grant Number: 250362. GG and JMH acknowledge support from the BBSRC to The Roslin Institute (BBS/E/D/30002275) and GG acknowledges support from The University of Edinburgh's Data-Driven Innovation Chancellor's fellowship.
Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway
Maria L. Selle & Ingelin Steinsland
The Roslin Institute and Royal (Dick) School of Veterinary Studies, University of Edinburgh, Edinburgh, UK
Owen Powell, John M. Hickey & Gregor Gorjanc
Maria L. Selle
Ingelin Steinsland
Owen Powell
John M. Hickey
Gregor Gorjanc
GG conceived the study. MLS, JMH and GG designed the study. OP and MLS simulated data. MLS performed the analysis and wrote the manuscript. MLS, GG and IS interpreted the results. GG, JMH, OP and IS refined the manuscript. All authors read and approved the final manuscript.
Correspondence to Maria L. Selle.
Additional file 1.
Simulation code available from https://doi.org/10.6084/m9.figshare.12403898.
Additional figures.
Additional tables.
Selle, M.L., Steinsland, I., Powell, O. et al. Spatial modelling improves genetic evaluation in smallholder breeding programs. Genet Sel Evol 52, 69 (2020). https://doi.org/10.1186/s12711-020-00588-w
Title: Forecasting global temperature with time-series methods Authors: Marco Lippi - Università di Roma La Sapienza (Italy)
Umberto Triacca - University of L'Aquila (Italy)
Alessandro Giovannelli - University of Rome Tor Vergata (Italy) [presenting]
Antonello Pasini - National Research Council (Italy)
Alessandro Attanasio - University of L'Aquila (Italy)
Abstract: The impact of climate change on territories, ecosystems and humans has been dramatic in the last fifty years and is likely to become heavier in the next decades, making the modelling and forecasting of climate indicators, Global Temperature in particular, of the utmost importance. We propose the use of time-series methods, which are only weakly based on physical knowledge but make efficient use of the available data. We study and compare the forecasting performance of two models. The first is a standard Vector AutoRegression (VAR), in which Global Temperature is predicted using past values of (1) Global Temperature itself, (2) the greenhouse-gas radiative forcing, GHG-RF, and (3) the Southern Oscillation Index, SOI. The second is a large-dimensional Dynamic Factor Model (DFM), which uses the information contained in 140 time series of local temperatures, corresponding to a grid with spatial resolution of $2.5 \times 2.5$ degrees. Our main findings are: (a) the cointegrated VAR, including GHG-RF and SOI, performs better than the factor model at all horizons from 1 to 10 years; (b) however, augmenting the data in the VAR with factors (FAVAR) yields a competitive model. Moreover, averaging the forecasts of the FAVAR and the cointegrated VAR gives the best results.
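As a rough sketch of the first model (our illustration, not the authors' code; the column names gt, ghg_rf and soi are placeholders, and the paper's cointegrated specification would call for a VECM rather than a plain VAR):

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical annual series: global temperature, GHG radiative forcing, SOI.
df = pd.read_csv("climate.csv", index_col=0)   # placeholder input file
data = df[["gt", "ghg_rf", "soi"]]

model = VAR(data)
result = model.fit(maxlags=4, ic="aic")        # lag order chosen by AIC

# 1- to 10-year-ahead forecasts, conditioned on the last k_ar observations.
forecast = result.forecast(data.values[-result.k_ar:], steps=10)
```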
January 2014, 13(1): 435-452. doi: 10.3934/cpaa.2014.13.435
Geometric conditions for the existence of a rolling without twisting or slipping
Mauricio Godoy Molina 1, and Erlend Grong 2,
Mathematisches Institut, Georg-August-Universität, Bunsen-str. 3-5, D-37073 Göttingen, Germany
Department of Mathematics, University of Bergen, P.O. Box 7803, Bergen N-5020
Received June 2012; Revised May 2013; Published July 2013.
We give a complete answer to the question of when two curves in two different Riemannian manifolds can be seen as trajectories of rolling one manifold on the other without twisting or slipping. We show that, up to technical hypotheses, a rolling along these curves exists if and only if the geodesic curvatures of each curve coincide. By using the anti-developments of the curves, which we claim can be seen as a generalization of the geodesic curvatures, we are able to extend the result to arbitrary absolutely continuous curves. For a manifold of constant sectional curvature rolling on itself, two such curves can only differ by an isometry. In the case of surfaces, we give conditions for when loops in the manifolds lift to loops in the configuration space of the rolling.
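In symbols, and in our own paraphrase of the abstract (the paper's precise hypotheses and notation may differ), for unit-speed curves $\gamma_1 \subset M_1$ and $\gamma_2 \subset M_2$ the matching condition on geodesic curvatures reads
$$\kappa_g^{M_1}\big(\gamma_1(t)\big) \;=\; \kappa_g^{M_2}\big(\gamma_2(t)\big) \qquad \text{for almost every } t,$$
with the anti-developments playing the role of the geodesic curvatures for arbitrary absolutely continuous curves.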
Keywords: rolling maps, anti-development, geodesic curvatures.
Mathematics Subject Classification: Primary: 37J60, 53A55; Secondary: 53A1.
Citation: Mauricio Godoy Molina, Erlend Grong. Geometric conditions for the existence of a rolling without twisting or slipping. Communications on Pure & Applied Analysis, 2014, 13 (1) : 435-452. doi: 10.3934/cpaa.2014.13.435
topological structure
In computer science, a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge u-v, vertex u comes before vertex v in the ordering. Equivalently, it is a graph traversal in which each node v is visited only after all of its dependencies have been visited, and the result can be returned as an array, vector or list in which each node appears before all the nodes it points to. Topological sorting is possible if and only if the graph is a directed acyclic graph (DAG); any DAG has at least one topological ordering, and there exist techniques for building one in linear time. The first vertex in a topological sorting is always a vertex with in-degree 0 (a vertex with no incoming edges), and there can be more than one valid ordering: for the standard six-node example graph, "5 4 2 3 1 0" and "4 5 2 0 3 1" are both topological sortings. A DFS-based solution uses a stack to store previous elements, because a source vertex must be output after the vertices it points to; a BFS-based solution (Kahn's algorithm) instead maintains an indegrees table with one entry for each node in the graph, so it uses O(V) extra space.

In mathematics, a topological space is a set of points together with a topology defined on them, classically described as a set among whose elements limit relations are defined in some way; such sets may be formed by elements of any kind. More specifically, a topological space is a set whose elements are called points, along with an additional structure called a topology, which can be defined as a set of neighbourhoods for each point satisfying certain axioms; this allows one to define continuous deformation of subspaces and, more generally, all kinds of continuity. Euclidean spaces and, more generally, metric spaces are examples, since any distance or metric defines a topology. The limit relations that make a set X a topological space consist in the following: each subset A of X has a closure [A], which consists of the elements of A and the limit points of A; once a topology has been introduced on X, the members of the defining collections are called the open, respectively closed, sets of the space. General topology (also called set-theoretic or analytic topology) treats concepts such as convergence and continuity, known from classical analysis, in this general setting. One sometimes reads that continuous functions between topological spaces preserve topological structure, in analogy with homomorphisms preserving algebraic structures (the set of homomorphisms between two structures is a closed set in the set of all functions between the domains), although on reflection the preservation runs backwards, since continuity is defined through preimages of open sets. Category-theoretically, since the standard model structure on simplicial sets is a presentation of the (∞,1)-category ∞Grpd of ∞-groupoids realized as Kan complexes, topological spaces are identified with ∞-groupoids in an (∞,1)-categorical sense; notably, every ∞-groupoid is, up to equivalence, the fundamental ∞-groupoid of some topological space. In a related classical vein, the topological structure of a compact Riemann surface C is determined by its genus g: topologically, a Riemann surface of genus g is a sphere with g handles or, equivalently, a torus with g holes.

In condensed-matter physics, topological insulators are a new state of quantum matter with a bulk gap and an odd number of relativistic Dirac fermions on the surface: the bulk is insulating, but the surface conducts electric current with a well-defined spin texture, the spin of the surface electrons being correlated with their direction of motion. The twisting of the band structure is what the phrase "non-trivial topology" refers to (an analogy is the way a Möbius strip is a twisted version of an ordinary strip); when a topological insulator is put in contact with a trivial material, its curled-up band structure must unwind so that the two fit together, and, as a comment of Xiao-Gang Wen makes clear, explaining this properly requires going into deep issues about what topological order is, what a topological insulator is, and how the two notions relate, a distinction often misused in the literature. A topological superconductor (TSC) is a new type of superconductor with non-trivial topology in its bulk electronic structure, with great potential in both fundamental research and application, although despite intensive research efforts worldwide only a few intrinsic (non-heterostructure) materials have been proposed as TSCs to date. Related work includes non-trivial topological phases in thermopower materials such as EuIn2As2, EuSn2As2 and other CaAl2Si2-type compounds; MnBi4Te7, whose several competing magnetic states, combined with topological surface states, could provide a versatile platform for tuning between different topological regimes; skyrmions, a type of magnetic domain structure with special topology that features outstanding magnetic and transport properties and possible applications in data storage and other advanced spintronic devices; and topological structures in ferroic thin films and heterostructures, observed and controlled through epitaxial strain, layer thickness, and electric and magnetic fields. Efficient ways to rapidly explore and optimize band structures and to classify their topological characteristics for arbitrary unit-cell geometries are still missing, and deep learning has been proposed to address this. Topological ideas also reach into photonics and plasma physics: degenerate optical cavities open the door to directly exploring high-dimensional topological physics with synthetic dimensions in a simple system, and tailored optical-vortex lasers can control the topological structure of laser-driven ion beams, producing well-collimated, submicron, tens-of-femtosecond, GeV-level ion beams from thin solid-density foils. In 1989, Rañada showed that in free space Maxwell's equations can be derived from a topological structure represented by a map from S^3, and many research works since have sought conditions for the topological structures of electromagnetic fields.

Topological structures are also effective descriptors of the nonequilibrium dynamics of diverse many-body systems; for example, motile, point-like topological defects capture the salient features of two-dimensional active liquid crystals composed of energy-consuming anisotropic units, and the spherical geometry of ligand-capped nanoparticles, which self-assemble into superlattices of fascinating structure and complexity, imposes constraints on the nature of the associated topological defects. Unsupervised learning methodologies with descriptors based on topological data analysis (TDA) can describe the local structural properties of materials at the atomic scale, based only on atomic positions and without a priori knowledge: in bulk water molecular dynamics simulations, the topological invariants (the different degrees of homology) derived from each simulation frame yield persistence diagrams that allow atomistic water potentials (TIP3P, TIP4P/Ew, SPC/E and OPC) to be compared. In network science, scale-dependent topological descriptions of the structure inherent in many complex systems can be built from "partition decoupled null models", a new class of null models that incorporate the interaction of clustered partitions into a random model and generalize the Gaussian ensemble; minimum spanning trees have been used to identify known events in stock-market networks by observing specific topological properties (Bonanno et al. 2001), although an MST is perhaps a rather drastic approach to understanding the behaviours and structures of a stock-market network from a more macro point of view. Research in complex networks, such as the scale-free networks of Albert et al., has elucidated strong links between the structure of a network and risks within that network. Interaction networks are likewise basic descriptions of ecological communities and are at the core of community dynamics models, yet the relationships between dynamical properties of communities and qualitative descriptors of network structure remain unclear. In protein interaction networks, two characteristic topological structures show different interaction patterns: in a quasiclique, proteins tend to interact with each other, while in a quasibipartite structure, proteins between sets have denser interactions than those within sets; identifying such structures helps represent complicated interactomes. Novkovic et al. elaborated the topological organization of the fibroblastic reticular cell (FRC) network and conduit system in lymph nodes, demonstrating distinct small-world topologies that are maintained in a lymphotoxin-beta-receptor-independent manner, with the FRC network exhibiting higher tolerance and robustness to perturbation than the conduit system.

Topological structure matters in several further applied settings. In solid modelling, interior and non-manifold features can be represented directly in addition to exterior surface features; the Radial Edge structure is a data structure that provides access to all topological adjacencies in a non-manifold boundary representation, and its completeness has been verified. In geography and GIS, surfaces can be analysed and visualised through various data structures, and topological data structures describe a surface as a set of relationships between surface-specific features. Comprehensive Structure Matching (CSM) algorithms measure the similarity of topological structure between complex trajectories by first representing each trajectory as a graph of nodes and edges and then identifying all common structures; similarly, the topological structure of an affinity network associates strongly with cell types, with graph patterns shedding light on the mechanisms of tissues. It should be noted that topological structures in many studies [3,4,5,6,8,9,10,11,12,13] are assumed known, whereas in many practical applications they are unknown or uncertain, so it is important to identify the unknown topological structures of, for example, stochastic multi-group models with multiple dispersals. Hippocampal place-cell replay has been proposed as a fundamental mechanism of learning and memory that might support navigational learning and planning; an important hypothesis about these proposed functions is that the information encoded in replay should reflect the topological structure of experienced environments, that is, which places in the environment are connected with which. The topological structure of electrospun membranes regulates immune response, angiogenesis and bone regeneration, the fate of biomaterials being orchestrated by biocompatibility and bioregulation characteristics that are closely related to topographical structures, and topologically structured dielectric elastomer nanocomposites produce synergistic space-charge enhancement and local electric-field modulation, giving an ultrahigh dielectric permittivity (113.4 at 1 kHz) and excellent electromechanical performance. Topological structures have also been constructed on single-valued neutrosophic hesitant fuzzy sets (SVNHFSs), with concepts such as the interior, closure and exterior of an SVNHFS, and the ideas of a parallel-wound topological structure and the Koch curve have been fused to design the exciting coil of a planar eddy-current probe, with the series-wound topological structure reported separately. Finally, three experiments on tachistoscopic perception of visual stimuli (Chen, Science, 1982) demonstrated that the visual system is sensitive to global topological properties, indicating that extraction of global topological properties is a basic factor in perceptual organization. Selecting an appropriate and adequate topological structure for interconnection networks has likewise become a critical issue, on which many research efforts have been made over the past decade.
Why does combining $\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$ into a $dx\,dy$ integral work?
I was looking here: How to compute the integral $\int_{-\infty}^\infty e^{-x^2}\,dx$? to find out how to evaluate $\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx$, and didn't understand why $$\left(\displaystyle\int\limits_{-\infty}^{\infty} e^{-x^2} dx\right)\left(\int\limits_{-\infty}^{\infty} e^{-y^2} dy\right)=\displaystyle\int\limits_{-\infty}^{\infty}\int\limits_{-\infty}^{\infty} e^{-x^2} e^{-y^2} dx\thinspace dy$$ I know you can use Fubini's Theorem from this: Why does $\left(\int_{-\infty}^{\infty}e^{-t^2} dt \right)^2= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-(x^2 + y^2)}dx\,dy$?, but I'm still confused about how exactly you can just multiply two integrals together like that.
A detailed answer would be very nice.
calculus definite-integrals
D.R.
$\begingroup$ See Fubini or Tonelli: en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli.27s_theorem $\endgroup$ – gobucksmath Jan 10 '17 at 0:50
$\begingroup$ Everyone cites Fubini's theorem, but it's not really needed. If you develop multivariable integration by partitioning the domain into squares, then yes it is needed. But it can also be developed as iterated single variable integration, which is how Rudin does it; this is basically how they initially teach you to integrate multiple variables in calculus. That is, to integrate $\int\int f(x,y)dydx$ hold $x$ constant and integrate along $y$. In this case $f(x,y)=g(x)h(y)$, and since $g(x)$ is being held constant it can be moved outside the integral. $\endgroup$ – juan arroyo Jan 10 '17 at 3:04
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-x^2}e^{-y^2} dx dy$$
Because we treat $y$ constant in the first, inner, integral we can pull it out.
$$=\int_{-\infty}^{\infty} e^{-y^2} \int_{-\infty}^{\infty} e^{-x^2} dx dy$$
Now because $\int_{-\infty}^{\infty} e^{-x^2} dx$ is some constant we can pull it out,
$$=\int_{-\infty}^{\infty} e^{-x^2} dx\int_{-\infty}^{\infty} e^{-y^2} dy$$
The result we got is generalizable, given $g(x,y)=f(x)h(y)$ we have,
$$\int_{a}^{b} \int_{c}^{d} g(x,y) dx dy=\int_{a}^{b} h(y) dy \int_{c}^{d} f(x) dx$$
Provided everything we write down exists.
Ahmed S. Attaalla
$\begingroup$ This is frobenius, right? $\endgroup$ – Tac-Tics Jan 10 '17 at 1:16
$\begingroup$ Sorry, never heard that term before. @Tac-Tics $\endgroup$ – Ahmed S. Attaalla Jan 10 '17 at 1:22
$\begingroup$ Oops, nvm. It's fubini's theorem. $\endgroup$ – Tac-Tics Jan 10 '17 at 7:29
Fubini-Tonelli theorem implies that if $\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy |f(x,y)|\right) < \infty$ then $$\int_{\mathbb{R}^2} f(x,y)dxdy = \int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy f(x,y)\right).$$ First, to establish the premise, note that $f(x,y) = e^{-(x^2+y^2)},$ is positive so $|f(x,y)| = f(x,y).$ So we just need to establish the finiteness of the RHS. The inner integral is $$ \int_{\mathbb{R}} dy e^{-(x^2+y^2)} = e^{-x^2}\int_{\mathbb{R}} dy e^{-y^2}. $$ By comparing the integrand with $e^{-|y|}$ which can be integrated to 2 by elementary means, we see that $$\int_{\mathbb{R}} e^{-y^2}dy = C < \infty$$ so that $$\int_{\mathbb{R}} dx \left(\int_{\mathbb{R}} dy e^{-(x^2+y^2)}\right) = C\int_{\mathbb{R}} dx e^{-x^2} = C^2 < \infty.$$
Thus the theorem gives $$ \int_{\mathbb{R}^2} dxdy e^{-(x^2+y^2)} = C^2 $$ where $$ C = \int_{\mathbb{R}} dx e^{-x^2}.$$
The two-dimensional integral can then be transformed to polar coordinates by the change of variables theorem, which (by Fubini again) is a doable iterated integral that comes out to $\pi.$
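As a quick numerical sanity check (a sketch added for illustration, independent of the proof above), SciPy's quadrature reproduces $C^2 = \pi$:

```python
import numpy as np
from scipy import integrate

# One-dimensional integral C = integral of e^{-x^2} over the real line.
C, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

# Two-dimensional integral of e^{-(x^2+y^2)} as an iterated integral.
I2, _ = integrate.dblquad(lambda y, x: np.exp(-(x**2 + y**2)),
                          -np.inf, np.inf,
                          lambda x: -np.inf, lambda x: np.inf)

print(C**2, I2, np.pi)  # all three agree to quadrature accuracy: ~3.14159
```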
spaceisdarkgreen
The integral $\displaystyle \int_{-\infty}^\infty e^{-y^2} \,dy$ does not change as $x$ goes from $-\infty$ to $\infty$, so you have \begin{align} \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \int_{-\infty}^\infty e^{-y^2} \, dy & = \int_{-\infty}^\infty e^{-x^2} \, dx \cdot \text{constant} \\[10pt] & = \int_{-\infty}^\infty \left( e^{-x^2}\cdot\text{constant} \right)\,dx \\[10pt] & = \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right)\,dx \end{align} The factor $e^{-x^2}$ does not change as $y$ goes from $-\infty$ to $\infty$, so you now have \begin{align} \int_{-\infty}^\infty \left( e^{-x^2} \int_{-\infty}^\infty e^{-y^2}\,dy \right) \, dx &= \int_{-\infty}^\infty \left( \text{constant} \cdot \int_{-\infty}^\infty e^{-y^2}\,dy \right) \,dx \\[10pt] & = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty \text{constant} \cdot e^{-y^2}\,dy \right) \,dx \\[10pt] & = \int_{-\infty}^\infty \left( \int_{-\infty}^\infty e^{-x^2} \cdot e^{-y^2}\,dy \right) \,dx \end{align}
Michael Hardy
The presentation is nice. +1 – Simply Beautiful Art Jan 11 '17 at 21:31
+1 Always enjoy reading a Michael Hardy answer – Ovi Apr 26 '18 at 9:30
@Ovi : Thank you. – Michael Hardy Apr 26 '18 at 17:15
Sorry, this is not a complete answer, but I hope it can help you.
If we integrate with respect to $x$, then $e^{-y^2}$ can be treated as a constant: $$\displaystyle\int\limits_{y=-\infty}^{y=\infty}\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} e^{-y^2} dx\thinspace dy=\Big(\int\limits_{y=-\infty}^{y=\infty} e^{-y^2} dy\Big)\Big(\displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx\Big)=\thinspace\int\limits_{y=-\infty}^{y=\infty} e^{-y^2}dy \displaystyle\int\limits_{x=-\infty}^{x=\infty} e^{-x^2} dx $$
W.R.P.S
Decoherence in quantum systems always produces $\vert0\rangle$
I was recently asked two questions concerning errors in quantum computing:
Is it possible for quantum computers to exhibit behavior similar to flip errors in classical computers, where a state $\vert0\rangle$ becomes $\vert1\rangle$ due to such an error?
Is it possible for a quantum computer to decohere to $\vert1\rangle$, or to some probability distribution between $\vert0\rangle$ and $\vert1\rangle$, based on the state the system is currently in?
My own thoughts are that the first is possible, depending on where on the Bloch sphere your qubits are and the operations you're trying to do. That's why we have quantum error-correction codes.
The second question was trickier. My explanation is that, if I have a density matrix representing the pure state $\begin{bmatrix}1 & 0 & 0 & 1\end{bmatrix} / \sqrt 2$ (i.e., $(\vert00\rangle + \vert11\rangle)/\sqrt{2}$): $$\begin{bmatrix}0.5 & 0 & 0 & 0.5 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0 & 0.5\end{bmatrix}$$
then I can see the probabilities of all given outcomes for the system. However, if I introduce some noise, I might get something like:
$$\begin{bmatrix}0.3 & 0 & 0 & 0.1 \\ 0 & 0.2 & 0 & 0 \\ 0 & 0 & 0.2 & 0 \\ 0.1& 0& 0& 0.3\end{bmatrix}$$
If I try to measure the state corresponding to this density matrix, I get the result 0.08, which is very close to 0. What I glean from this experiment is that, as the system interacts with the environment, information is lost, returning the system to the ground state, which is 0.
Is this interpretation correct? If not, can somebody provide more detail into what's going on?
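For reference, the outcome probabilities of a computational-basis measurement are the diagonal entries of the density matrix. A minimal NumPy sketch for the noisy matrix above:

```python
import numpy as np

# The noisy density matrix from the question.
rho = np.array([[0.3, 0.0, 0.0, 0.1],
                [0.0, 0.2, 0.0, 0.0],
                [0.0, 0.0, 0.2, 0.0],
                [0.1, 0.0, 0.0, 0.3]])

print(np.diag(rho))         # P(00), P(01), P(10), P(11) = 0.3, 0.2, 0.2, 0.3
print(np.trace(rho @ rho))  # purity Tr(rho^2) < 1: the state is mixed
```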
error-correction density-matrix decoherence
Woody1193
Is it possible for quantum computers to exhibit behavior similar to flip errors in classical computers where a state |0⟩ becomes |1⟩ due to this error?
Yes. It would be a very abrupt error if you're talking about errors on physical qubits. Usually, we'd think of an error as being a little bit of an X rotation (for example). However, the effect of performing an error correction step, and measuring the syndrome, divides that into two cases, "no error" and "bit flip", the latter occurring with small probability. Hence, in particular when you're talking about the effects of error correction/fault tolerance on an encoded qubit, the ultimate errors will generally be of this Pauli form.
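A single-qubit toy version of this discretization (a minimal sketch assuming NumPy; the rotation angle is an arbitrary choice, and the plain Z-basis measurement stands in for a genuine syndrome measurement on encoded qubits):

```python
import numpy as np

theta = 0.2  # small coherent over-rotation (arbitrary)

# X-rotation by theta applied to |0>.
Rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
psi = Rx @ np.array([1.0, 0.0], dtype=complex)

# Measurement collapses the continuous error into two discrete cases.
p_flip = abs(psi[1]) ** 2
print(f"P(bit flip) = {p_flip:.4f}")   # sin^2(theta/2), small
print(f"P(no error) = {1 - p_flip:.4f}")
```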
Is it possible for a quantum computer to decohere to |1⟩ or some probability distribution between |0⟩ and |1⟩ based on the state the system is currently in.
That depends very much on your noise model. In the context of error correction, people often describe the errors as the action of Pauli operators (i.e. the depolarising channel on each qubit). The only stationary state of this channel is the maximally mixed state (NB Not $|0\rangle$), so the ultimate state after a long time is independent of the initial state. Perhaps you're thinking about the specific noise model that is relaxation/amplitude damping? In that case, anything in the excited state has a tendency to relax to the ground state, and so the only stationary state is $|0\rangle$.
However, I can easily give you noise models where the final state depends on the initial state. Dephasing noise (effectively, Pauli Z errors on each qubit) acting on several qubits is one such example. To see why, note that $Z$ never changes the number of qubits in the $|1\rangle$ state, so the final state must depend on how many $|1\rangle$s there were in the initial state. (Ultimately, you get a mixed state which is maximally mixed in each 'excitation subspace', but the weight on that subspace is determined by the initial state.)
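To illustrate this noise-model dependence numerically, here is a minimal sketch assuming NumPy; the channel strengths `p` and `g` are arbitrary choices:

```python
import numpy as np

def apply(kraus, rho):
    """One application of a channel given by its Kraus operators."""
    return sum(K @ rho @ K.conj().T for K in kraus)

p, g = 0.1, 0.1  # depolarizing probability and damping rate per step

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
depolarize = [np.sqrt(1 - 3 * p / 4) * I,
              np.sqrt(p / 4) * X, np.sqrt(p / 4) * Y, np.sqrt(p / 4) * Z]
damp = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
        np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

r1 = r2 = np.array([[0, 0], [0, 1]], dtype=complex)  # start in |1><1|
for _ in range(500):  # iterate the channels to (near) stationarity
    r1, r2 = apply(depolarize, r1), apply(damp, r2)

print(np.round(r1.real, 3))  # -> I/2: maximally mixed, input forgotten
print(np.round(r2.real, 3))  # -> |0><0|: relaxation to the ground state
```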
Notre Dame Journal of Formal Logic
Notre Dame J. Formal Logic
Volume 51, Number 4 (2010), 427-442.
A Covering Lemma for HOD of K(ℝ)
Daniel W. Cunningham
Working in ZF+AD alone, we prove that every set of ordinals with cardinality at least Θ can be covered by a set of ordinals in HOD of K(ℝ) of the same cardinality, when there is no inner model with an ℝ-complete measurable cardinal. Here ℝ is the set of reals and Θ is the supremum of the ordinals which are the surjective image of ℝ.
First available in Project Euclid: 29 September 2010
https://projecteuclid.org/euclid.ndjfl/1285765797
doi:10.1215/00294527-2010-027
Primary: 03E15: Descriptive set theory [See also 28A05, 54H05]
Secondary: 03E45: Inner models, including constructibility, ordinal definability, and core models 03E60: Determinacy principles
descriptive set theory; determinacy; fine structure
Cunningham, Daniel W. A Covering Lemma for HOD of K (ℝ). Notre Dame J. Formal Logic 51 (2010), no. 4, 427--442. doi:10.1215/00294527-2010-027. https://projecteuclid.org/euclid.ndjfl/1285765797
Hot answers tagged history
What does the "Lambda" in "Lambda calculus" stand for?
An excerpt from History of Lambda-calculus and Combinatory Logic by F. Cardone and J.R. Hindley (2006): By the way, why did Church choose the notation "$\lambda$"? In [Church, 1964, §2] he stated clearly that it came from the notation "$\hat{x}$" used for class-abstraction by Whitehead and Russell, by first modifying "$\hat{x}$" to "$\wedge x$" to ...
terminology computation-models lambda-calculus history
Anton Trunov
What is the earliest use of the "this" keyword in any programming language?
Simula 67 is generally considered the first object-oriented language and predates Smalltalk by a number of years. It also used the this keyword for the same concept, which can be seen in this book chapter extract: class Linker; begin ref(Linker) Next, Sex, Employment; text ID; procedure Add_to_List(LHead); name LHead; ref(Linker) LHead; ...
programming-languages history object-oriented
answered Mar 7 '20 at 9:02
Brian Tompsett - 汤莱恩
Is there any reason why the modulo operator is denoted as %?
The earliest known use of % for modulo was in B, which was the progenitor of C, which was the ancestor (or at least godparent) of most languages that do the same, hence the operator's ubiquity. Why did Thompson and Ritchie pick %? It had to be a printable ASCII character that wouldn't conflict with B's other features. % was available, and it resembles the / ...
programming-languages notation history
Why do we need assembly language?
Why do we need assembly language? Well, there's actually only one language we will ever need, which is called "machine language" or "machine code". It looks like this: 0010000100100011 This is the only language your computer can speak directly. It is the language a CPU speaks (and technically, different types of CPUs speak different versions). It also ...
programming-languages education history
Why are computable functions also called recursive functions?
The founders of computability theory were mathematicians. They founded what is now called computability theory before there were any computers. What was the way mathematicians defined functions that could be computed? By recursive definitions! So there were recursive functions before there was any other model of computation like Turing machines or lambda ...
computability terminology history
Did 'Eugene Goostman' really pass the Turing test?
There is no "official Turing test" so there's no concept of "officially pass[ing] the test". Turing described a methodology that one might use to evaluate artificial intelligences. The organizers of the event that Eugene Goostman won implemented that methodology in a particular way and the program satisfied the criteria the organizers had chosen. In that ...
reference-request artificial-intelligence history turing-test
Define some basic functions: zero function $$ zero: \mathbb{N} \rightarrow \mathbb{N} : x \mapsto 0 $$ successor function $$ succ: \mathbb{N} \rightarrow \mathbb{N} : x \mapsto x + 1 $$ projection function $$p_i^n: \mathbb{N}^n \rightarrow \mathbb{N} : (x_1, x_2, \dots, x_n) \mapsto x_i $$ From now on I will use $\bar{x_n}$ to denote $(x_1, x_2, \...
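A quick Python rendering of these basic functions, together with a primitive-recursion combinator (the combinator and the derived addition are my illustrative additions; the excerpt is cut off before composition and recursion are defined):

```python
def zero(x):   # zero function: x ↦ 0
    return 0

def succ(x):   # successor function: x ↦ x + 1
    return x + 1

def p(i):      # projection p_i^n: (x_1, ..., x_n) ↦ x_i
    return lambda *xs: xs[i - 1]

def prim_rec(f, g):
    """h(x, 0) = f(x); h(x, y+1) = g(x, y, h(x, y))."""
    def h(x, y):
        acc = f(x)
        for k in range(y):
            acc = g(x, k, acc)
        return acc
    return h

# Addition defined by primitive recursion from the basics:
add = prim_rec(lambda x: x, lambda x, k, acc: succ(acc))
print(add(3, 4))  # 7
```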
Why did RSA encryption become popular for key exchange?
There is no strong technical reason. We could have used Diffie-Hellman (with appropriate signatures) just as well as RSA. So why RSA? As far as I can tell, non-technical historical reasons dominated. RSA was patented and there was a company behind it, marketing and advocating for RSA. Also, there were good libraries, and RSA was easy to understand and ...
cryptography history
What is the earliest use of "trees" in computer science?
Wikipedia says that the first use of tree in mathematics was by Cayley in 1857. Since the use in computer science is taken directly from mathematics, it seems more fundamental to ask when they originated there. Unless computer scientists originally called trees something else, the first computer scientist to use "tree" doesn't seem any more significant than,...
data-structures reference-request trees history
This is very likely a historical development. Looking at this table, we see that C was likely the first language to use % for modulo. Its predecessor BCPL used rem, and older languages such as Fortran, Algol, Lisp, and Cobol did not use %. But that's just my uninformed guess.
Andrej Bauer
According to Donald Knuth's TAOCP, Vol. 1, pg. 459, the following papers might be considered among the first appearances of trees in CS: H. G. Kahrimanian, Analytical Differentiation by a Digital Computer, Symposium on Automatic Programming, 6–14, 1952; K.E. Iverson and L.R. Johnson, IBM Corp. research reports RC-390, RC-603, 1961; A.J. Perlis and C. ...
A.Schulz
What is the origin of λ for empty string?
The German Wikipedia claims that $\lambda$ comes from "leer", which means "empty" in German. That seems plausible, as German used to be one of the major languages in mathematics. Chomsky used $I$ as the empty string (or actually as the identity element for string concatenation) in his early papers. Some people in combinatorics still use $1$ as the empty ...
formal-languages automata reference-request notation history
Jouni Sirén
Why call it 'Time Complexity'?
Perhaps the earliest place in which time complexity appears is On the computational complexity of algorithms by Hartmanis and Stearns. Their goal is to study computation complexity, which they define as follows: The computational complexity of a sequence is to be measured by how fast a multitape Turing machine can print out the terms of the sequence. ...
algorithm-analysis time-complexity terminology runtime-analysis history
What was Robert Floyd's algorithm for inserting brackets?
The seminal paper referred to is "Syntactic Analysis and Operator Precedence" (1963), which describes the operator precedence algorithm still used by many simple expression parsers today. The basic approach described by Floyd was not exactly new. It was described by Edsger Dijkstra in 1961; Dijkstra's procedure was a pragmatic, special-purpose ...
algorithms reference-request parsers history
Why the 127 encodings of ASCII needed to be extended to 256?
ASCII has 128 characters. Many countries had similar encodings for 128 characters. That is all history. Nobody uses ASCII anymore. There was a phase with lots of different encodings for more than 128 characters, some with 256 (Mac Roman and Windows 1252 were quite popular) and some like the Chinese GB with thousands of characters. Nowadays people mostly ...
data-structures history encoding-scheme binary
When did polynomial-time algorithm become of interest?
Since this question was reopened and made more explicit, I would like to convert my comment into an answer. Now the OP wants to understand why and when polynomial algorithms became of interest. I especially focus on the sub-question: When did people realize the role and importance of efficient versus non-efficient algorithms? Because algorithms, in ...
algorithms reference-request efficiency polynomial-time history
hengxin
Diffie–Hellman lacks a crucial feature: authentication. You know you are sharing a secret with someone, but you can't know if it's the recipient or a man in the middle. With RSA, you may have a few trusted parties who store public keys. If you want to connect to your bank, you can ask the trusted party (let's say Verisign) for the bank's public key, as ...
How did 'Isabelle' (the theorem prover) get its name?
A little google-fu (and my own memory) tells me it was apparently named by Larry Paulson after Gerard Huet's daughter. Gerard Huet happens to be one of the people behind the less poetically named Coq theorem prover. Small world!
history automated-theorem-proving isabelle
Here is some other near-firsthand info/angle on this from Church's student Dana Scott, as recently reported by Ghica and documented in a YouTube video. [1] He says that when Church was asked what the meaning of the λ was, he just replied "Eeny, meeny, miny, moe.", which can only mean one thing: it was a random, meaningless choice. Prof. Scott claimed that the ...
Lambda Calculus as a branch of set theory
It's false. The $\lambda$-calculus arose through efforts to understand foundations of mathematics. Nowadays some people mistakenly equate foundations with set theory. The Stanford Encyclopaedia of Philosophy has a very good writeup on the $\lambda$-calculus, as well as its history, I recommend it.
lambda-calculus sets history
I think the prizes you're referring to are the Loebner Prize. According to the Wikipedia page (see prior link), the winner for 2014 is 'Rose' by Bruce Wilcox. That program did not win one of the one-time-only prizes, but did get $4,000 in prize money. 'Eugene Goostman' competed in 2005 and 2008, finishing second both times. The competition 'Eugene Goostman'...
What was the first programming language with loop break?
I sent an email to Martin Richards to try and get some details/context about CPL and he gave me a reply. To partially quote: "CPL did have a break command to cause an exit from a repetitive command such as while or until. As far as I remember it did not have an equivalent of continue, but the equivalent loop command was added to BCPL early in 1967." [...]...
programming-languages history
mlhaufe
What is the origin of dot notation?
In [1] (authored by one of the co-creators of Simula), there is a suggestion that Simula 67 may have been the first to use this dot notation. Given that Simula is widely credited for being the first OO language, it may be tricky to find an earlier example specifically in an OO context. EDIT: On DiscreteLizard's suggestion in comments, I took a peek at the ...
mhum
Why are struct and class essentially the same in C++?
Bjarne Stroustrup writes in his The Design and Evolution of C++ book (item 3.5.1): At this point, the object model becomes real in the sense that an object is more than the simple aggregation of the data members of a class. An object of a C++ class with a virtual function is a fundamentally different beast from a simple C struct. Then why did I not ...
history language-design c++
HEKTO
Why was Alan Turing important? [closed]
He laid down the foundations for understanding "computing" from a mathematical perspective. His paper about what is today called the Turing Machine shows his reflections on a model, in mathematical terms, of how the human brain/thought process works. Based on that, he developed a theory of computation (with the aim of automating math thinking; kind of similar ...
carlosayam
Official Name for the "First" Programming Language Developed by Turing?
Turing machines are not programmable. Each Turing machine computes only one function. Hence they do not use any programming language. What you see as a language is the description of the machine itself, not of a program in some programming language. Thus the name "Turing machine" is the only appropriate terminology. Now it turns out that there are devices ...
turing-machines programming-languages history
Robert Soare wrote an essay about this issue. According to him, the term (general) recursive functions was coined by Gödel, who defined them using some sort of mutual recursion. The name stuck, though later on other equivalent definitions were found. For more information, I recommend Soare's essay.
How did each class of languages receive their name?
Regular Languages: There's some good discussion of this here: https://ell.stackexchange.com/questions/83917/how-did-regex-get-its-name Context-Free vs Context-Sensitive Grammars: For CFGs and CSGs, the "context" part is the idea that certain rules can be extended to apply based on relative positions of symbols rather than on single specific symbols. ...
formal-languages terminology history chomsky-hierarchy
mdxn
Probably the notation originates from the "Finnish school". My copy of 'Formal Languages' by Arto Salomaa (Academic Press, ACM monograph series, 1973) uses $\lambda$ for the empty string. And so does his 1969 book 'Theory of Automata' (Pergamon Press). Moving further back, the classic 'Finite Automata and Their Decision Problems' by M.O. Rabin and D. Scott (April ...
Hendrik Jan
Who are the legislators of Paxos?
This is an educated guess of the transliterated names I could find in the Paxos paper. Most of these are people mentioned in the paper's references. Λ˘ινχ∂: Lynch, N. - Legislator Φισ∂ερ: Fischer, M. J. - Legislator Tωυεγ: Toueg, S. - Legislator Ωκι: Oki, B. M. - Legislator ∆ωλεφ: Dolev, D. - Farmer Σκεεν: Skeen, M. D. - Merchant Στωκµε˘ιρ: Stockmeyer, L. - ...
distributed-systems history
Ultrafast destruction and recovery of the spin density wave order in iron based pnictides: a multi-pulse optical study (1804.05769)
M. Naseska, D. Mihailovic (Complex Matter Dept., Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia; Radboud University, Institute for Molecules and Materials, Nijmegen 6525 AJ, The Netherlands; Department of Physics, Zhejiang University, Hangzhou 310027, People's Republic of China; CENN Nanocenter, Jamova 39, SI-1000 Ljubljana, Slovenia)
April 16, 2018 cond-mat.str-el
We report on a systematic excitation-density-dependent, all-optical femtosecond time-resolved study of the spin-density-wave state in iron-based superconductors. The destruction and recovery dynamics are measured by means of the standard and a multi-pulse pump-probe technique. The experimental data are analyzed and interpreted in the framework of an extended three-temperature model. The analysis suggests that optical-phonon energy relaxation plays an important role in the recovery of the almost exclusively electronically driven spin-density-wave order.
Components of polarization-transfer to a bound proton in a deuteron measured by quasi-elastic electron scattering (1801.01306)
A1 Collaboration: D. Izraeli, P. Achenbach, R. Böhm, I. Friščić, J. Lichtenstadt, M. Mihovilovič, J. Pochodzalla, S. Širca, A. Weber (School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel; Institut für Kernphysik, Johannes Gutenberg-Universität, 55099 Mainz, Germany; Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Department of Physics, University of Zagreb, HR-10002 Zagreb, Croatia; Rutgers, The State University of New Jersey, Piscataway, NJ 08855, USA; Department of Physics, NRCN, P.O. Box 9001, Beer-Sheva 84190, Israel; Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel; Faculty of Mathematics and Physics, University of Ljubljana, 1000 Ljubljana, Slovenia; University of South Carolina, Columbia, South Carolina 29208, USA)
Jan. 18, 2018 nucl-ex
We report the first measurements of the transverse ($P_{x}$ and $P_{y}$) and longitudinal ($P_{z}$) components of the polarization transfer to a bound proton in the deuteron via the $^{2}\mathrm{H}(\vec{e},e'\vec{p})$ reaction, over a wide range of missing momentum. A precise determination of the electron beam polarization reduces the systematic uncertainties on the individual components to a level that enables a detailed comparison to a state-of-the-art calculation of the deuteron that uses free-proton electromagnetic form factors. We observe very good agreement between the measured and the calculated $P_{x}/P_{z}$ ratios, but deviations of the individual components. Our results cannot be explained by medium-modified electromagnetic form factors. They point to an incomplete description of the nuclear reaction mechanism in the calculation.
VBSCan Split 2017 Workshop Summary (1801.04203)
C.F. Anders, B. Biedermann, L.S. Bruni, V. Ciulli, L. Di Ciaccio, P. Ferrari, N. Glover, G. Gonella, E. Gross, T. Herrmann, X. Janssen, M. Klute, J.G.E. Lauwers, K. Lohwasser, F. Maltoni, M.U. Mozer, M. Pellen, M. Pleier, I. Puljak, V. Rothe, E. Sauvan, M. Selvaggi, P. Sommer, J. Strandberg, M. Trott, M. Voutilainen, D. Zeppenfeld (University and INFN Torino; Technische Universitaet Dresden; Niels Bohr International Academy, Discovery Center, Niels Bohr Institute, Copenhagen University; Department of Physics and Astronomy, University College London; Sorbonne Universités, CNRS, LPTHE, Paris; University and INFN, Firenze; LLR, École polytechnique, CNRS/IN2P3, Université Paris-Saclay; LAPP, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS/IN2P3, Annecy; Albert-Ludwigs-Universität Freiburg; University of Wisconsin-Madison; Department of Physics, University of Warwick, Coventry; University of Split, FESB; Institute for Particle Physics Phenomenology, Department of Physics, University of Durham; Massachusetts Institute of Technology, Cambridge; University and INFN, Milano-Bicocca; Deutsches Elektronen-Synchrotron, Hamburg; Weizmann Institute of Science; University College Dublin; University of Tübingen; Institute of Physics, Academy of Sciences of the Czech Republic, Praha; Department of Experimental Particle Physics, Jožef Stefan Institute, and Department of Physics, University of Ljubljana; Aristotle University of Thessaloníki, School of Physics; State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing; University of Sheffield; Centre for Cosmology, Particle Physics and Phenomenology, Université catholique de Louvain; KIT - Karlsruhe Institute of Technology; MOX - Department of Mathematics, Politecnico di Milano; University and INFN, Pavia; Polish Academy of Sciences; KTH Royal Institute of Technology; National Center for Nuclear Research, Warsaw; Università degli Studi di Milano)
Jan. 12, 2018 hep-ph, hep-ex
This document summarises the talks and discussions that took place during the VBSCan Split17 workshop, the first general meeting of the VBSCan COST Action network. This collaboration aims at a consistent and coordinated study of vector-boson scattering from the phenomenological and experimental points of view, for the best exploitation of the data that will be delivered by existing and future particle colliders.
A Strange Metal from Gutzwiller correlations in infinite dimensions (1703.02206)
Wenxin Ding, Edward Perepelitsky (Physics Department, University of California, Santa Cruz, California; Jožef Stefan Institute, Ljubljana, Slovenia; Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia)
Aug. 18, 2017 cond-mat.str-el
Recent progress in extremely correlated Fermi liquid theory (ECFL) and dynamical mean field theory (DMFT) enables us to compute in the $d \to \infty$ limit the resistivity of the $t-J$ model after setting $J\to0$. This is also the $U=\infty$ Hubbard model. We study three densities $n=0.75, 0.8, 0.85$ that correspond to a range between the overdoped and optimally doped Mott insulating state. We delineate four distinct regimes characterized by different behaviors of the resistivity $\rho$. We find at the lowest $T$ a Gutzwiller Correlated Fermi Liquid regime with $\rho \propto T^2$ extending up to an effective Fermi temperature that is dramatically suppressed from the non-interacting value. This is followed by a Gutzwiller Correlated Strange Metal regime with $\rho \propto (T-T_0)$, i.e. a linear resistivity extrapolating back to $\rho=0$ at a positive $T_0$. At a higher $T$ scale, this crosses over into the Bad Metal regime with $\rho \propto (T+T_1)$ extrapolating back to a finite resistivity at $T=0$, and passing through the Ioffe-Regel-Mott value where the mean free path is a few lattice constants. This regime finally gives way to the High $T$ Metal regime, where we find $\rho \propto T$. The present work emphasizes the first two, where the availability of an analytical ECFL theory is of help in identifying the changes in related variables entering the resistivity formula that accompany the onset of linear resistivity, and the numerically exact DMFT helps to validate the results. We also examine thermodynamic variables such as the magnetic susceptibility, compressibility, heat capacity and entropy, and correlate changes in these with the change in resistivity. This exercise casts valuable light on the nature of charge and spin correlations in the strange metal regime, which has features in common with the physically relevant strange metal phase seen in strongly correlated matter.
Evolution of coherent collective modes through consecutive CDW transitions in (PO$_{2}$)$_{4}$(WO$_{3}$)$_{12}$ mono-phosphate tungsten bronze (1704.05245)
L. Stojchevska, J.-P. Pouget (Complex Matter Department, Jozef Stefan Institute; Faculty of Mathematics and Physics; Center of Excellence on Nanoscience and Nanotechnology, Nanocenter)
All-optical femtosecond relaxation dynamics in a single crystal of the mono-phosphate tungsten bronze (PO$_{2}$)$_{4}$(WO$_{3}$)$_{2m}$ with alternate stacking $m=6$ of WO$_{3}$ layers were studied through the three consecutive charge density wave (CDW) transitions. Several transient coherent collective modes associated with the different CDW transitions were observed and analyzed in the framework of the time-dependent Ginzburg-Landau theory. Remarkably, the interference of the modes leads to an apparent rectification effect in the transient reflectivity response. A saturation of the coherent-mode amplitudes with increasing pump fluence well below the CDW destruction threshold fluence indicates a decoupling of the electronic and lattice parts of the order parameter under strong optical drive.
Fluence dependent femtosecond quasi-particle and Eu^{2+} -spin relaxation dynamics in EuFe_{2}(As,P)_{2} (1607.08813)
A. Pogrebna, D. Mihailovic (Complex Matter Dept., Jozef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia; CENN Nanocenter, Jamova 39, SI-1000 Ljubljana, Slovenia; Department of Physics, Zhejiang University, Hangzhou 310027, People's Republic of China)
July 29, 2016 cond-mat.supr-con
We investigated the temperature- and fluence-dependent dynamics of the time-resolved optical reflectivity in undoped spin-density-wave (SDW) and doped superconducting (SC) EuFe$_{2}$(As,P)$_{2}$, with emphasis on the temperature region of ordered Eu$^{2+}$ spins. The data indicate that the SDW order coexists at low temperature with the SC and Eu$^{2+}$-ferromagnetic order. Increasing the excitation fluence leads to a thermal suppression of the Eu$^{2+}$-spin order due to crystal-lattice heating, while the SDW order is suppressed nonthermally at a higher fluence.
Polarization-transfer measurement to a large-virtuality bound proton in the deuteron (1602.06104)
A1 Collaboration: I. Yaron, H. Arenhövel, L. Debenjak, R. Gilman, D. G. Middleton, S. Širca, B. S. Schlimme, A. Tyukin (School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel; Institut für Kernphysik, Johannes Gutenberg-Universität, 55099 Mainz, Germany; Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Department of Physics, University of Zagreb, HR-10002 Zagreb, Croatia; Rutgers, The State University of New Jersey, Piscataway, NJ 08855, USA; Department of Physics, NRCN, P.O. Box 9001, Beer-Sheva 84190, Israel; Department of Physics, University of Ljubljana, 1000 Ljubljana, Slovenia; University of South Carolina, Columbia, South Carolina 29208, USA; Racah Institute of Physics, Hebrew University of Jerusalem, Jerusalem 91904, Israel)
Feb. 19, 2016 nucl-ex
Possible differences between free and bound protons may be observed in the ratio of polarization-transfer components, $P'_x/P'_z$. We report the measurement of $P'_x/P'_z$ in the $^2\textrm{H}(\vec{e},e^{\prime}\vec{p})n$ reaction at low and high missing momenta. The observed increasing deviation of $P'_x/P'_z$ from that of a free proton as a function of the virtuality, similar to that observed in $^4\mathrm{He}$, indicates that the effect in nuclei is due to the virtuality of the knocked-out proton and not to the average nuclear density. The measured differences from calculations assuming free-proton form factors ($\sim10\%$) may indicate in-medium modifications.
Real time measurement of the emergence of superconducting order in a high temperature superconductor (1207.2879)
I. Madan, V. V. Kabanov (Complex Matter Department, Jozef Stefan Institute, Ljubljana, Slovenia; Department of Physics, Faculty of Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Japan)
Jan. 17, 2016 cond-mat.supr-con
Systems which rapidly evolve through symmetry-breaking transitions on timescales comparable to the fluctuation timescale of the single-particle excitations may behave very differently than under controlled near-ergodic conditions. A real-time investigation with high temporal resolution may reveal new insights into the ordering through the transition that are not available in static experiments. We present an investigation of the system trajectory through a normal-to-superconductor transition in a prototype high-temperature superconducting cuprate in which such a situation occurs. Using a multiple pulse femtosecond spectroscopy technique we measure the system trajectory and time-evolution of the single-particle excitations through the transition in La$_{1.9}$Sr$_{0.1}$CuO$_{4}$ and compare the data to a simulation based on time-dependent Ginzburg-Landau theory, using laser excitation fluence as an adjustable parameter controlling the quench conditions in both experiment and theory. The comparison reveals the presence of significant superconducting fluctuations which precede the transition on short timescales. By including superconducting fluctuations as a seed for the growth of superconducting order we can obtain a satisfactory agreement of the theory with the experiment. Remarkably, the pseudogap excitations apparently play no role in this process.
Spectrally-resolved femtosecond reflectivity relaxation dynamics in undoped SDW 122-structure iron based pnictides (1402.5811)
A. Pogrebna, Z. A. Xu (Complex Matter Dept., Jozef Stefan Institute, Jamova, Ljubljana, Slovenia; Institute of Physics, Bijenička, Zagreb, Croatia; Department of Physics, Zhejiang University, Hangzhou, People's Republic of China; Geballe Laboratory for Advanced Materials, Department of Applied Physics, Stanford University, USA)
Feb. 24, 2014 cond-mat.supr-con
We systematically investigate temperature- and spectrally-dependent optical reflectivity dynamics in AFe$_{2}$As$_{2}$ (A = Ba, Sr and Eu), the parent spin-density-wave (SDW) compounds of the iron-based superconductors. Two different relaxation processes are identified. The behavior of the slower process, which is strongly sensitive to the magneto-structural transition, is analyzed in the framework of the relaxation-bottleneck model involving magnons. The results are compared to recent time-resolved angle-resolved photoemission (TR-ARPES) results, and a possible alternative assignment of the slower relaxation to the magneto-structural order-parameter relaxation is discussed.
Incoherent topological defect recombination dynamics in TbTe_3 (1208.1105)
T. Mertelj, D. Mihailovic (Complex Matter Department, Jozef Stefan Institute, Ljubljana, Slovenia; CENN Nanocentre, Ljubljana, Slovenia)
Dec. 6, 2012 cond-mat.other, cond-mat.mes-hall
We study the incoherent recombination of topological defects created during a rapid quench of a charge-density-wave system through the electronic ordering transition. Using a specially devised 3-pulse femtosecond optical spectroscopy technique we follow the evolution of the order parameter over a wide range of timescales. By careful consideration of thermal processes we can clearly identify intrinsic topological-defect annihilation processes on a timescale of ~30 ps, and find a signature of extrinsic defect-dominated relaxation dynamics occurring on longer timescales.
Doping dependence of femtosecond quasi-particle relaxation dynamics in Ba(Fe,Co)_2As_2 single crystals: possible evidence for normal state nematic fluctuations (1107.5934)
L. Stojchevska, Ian R. Fisher (Complex Matter Dept., Jozef Stefan Institute, Ljubljana, Slovenia; Geballe Laboratory for Advanced Materials, Department of Applied Physics, Stanford University, Stanford, USA; Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, Menlo Park, USA)
We systematically investigate the photoexcited (PE) quasi-particle (QP) relaxation and low-energy electronic structure in electron-doped Ba(Fe$_{1-x}$Co$_{x}$)$_{2}$As$_{2}$ single crystals as a function of Co doping, $0\le x\le 0.11$. The evolution of the photoinduced reflectivity transients with $x$ proceeds with no abrupt changes. In the orthorhombic spin-density-wave (SDW) state a bottleneck associated with a partial charge-gap opening is detected, similar to previous results in different SDW iron-pnictides. The relative charge gap magnitude decreases with increasing $x$. In the superconducting (SC) state an additional relaxational component appears due to a partial (or complete) destruction of the SC state proceeding on a sub-0.5-picosecond timescale. From the SC component saturation behavior the optical SC-state destruction energy, $U_p/k_B=0.3$ K/Fe, is determined near the optimal doping. The subsequent relatively slow recovery of the SC state indicates clean SC gaps. The $T$-dependence of the transient reflectivity amplitude in the normal state is consistent with the presence of a pseudogap in the QP density of states. The polarization anisotropy of the transients suggests that the pseudogap-like behavior might be associated with a broken point symmetry resulting from nematic electronic fluctuations persisting up to $T\sim 200$ K at any $x$. The second moment of the Eliashberg function, obtained from the relaxation rate in the metallic state at higher temperatures, indicates a moderate electron-phonon coupling, $\lambda \lesssim 0.3$, that decreases with increasing doping.
Measurements of Cherenkov Photons with Silicon Photomultipliers (0812.0531)
S. Korpar, K. Hara, R. Pestotnik (Department of Chemistry and Chemical Engineering, University of Maribor, Slovenia; Jozef Stefan Institute, Ljubljana, Slovenia; Nagoya University, Japan; Faculty of Mathematics and Physics, University of Ljubljana, Slovenia)
Dec. 2, 2008 physics.ins-det
A novel photon detector, the Silicon Photomultiplier (SiPM), has been tested in proximity focusing Ring Imaging Cherenkov (RICH) counters that were exposed to cosmic-ray particles in Ljubljana, and a 2 GeV electron beam at the KEK research facility. This type of RICH detector is a candidate for the particle identification detector upgrade of the BELLE detector at the KEK B-factory, for which the use of SiPMs, microchannel plate photomultiplier tubes or hybrid avalanche photodetectors, rather than traditional Photomultiplier Tubes (PMTs) is essential due to the presence of high magnetic fields. In both experiments, SiPMs are found to compare favourably with PMTs, with higher photon detection rates per unit area. Through the use of hemispherical and truncated pyramid light guides to concentrate photons onto the active surface area, the light yield increases significantly. An estimate of the contribution to dark noise from false coincidences between SiPMs in an array is also presented.
a-b plane optical conductivity in YBa_2Cu_3O_{7-delta} above and below T* (cond-mat/9801032)
D. Mihailovic (Solid State Physics Department, Jozef Stefan Institute, Ljubljana, Slovenia; Faculty of Mathematics and Physics, Ljubljana, Slovenia; Physics Department, University of Zurich, CH-Zurich, Switzerland)
Jan. 6, 1998 cond-mat.supr-con
Analysis of the a-b plane optical conductivity $\sigma_{ab}$ for both twinned and untwinned YBa$_{2}$Cu$_{3}$O$_{7-\delta}$ as a function of temperature and doping shows that below a well-defined temperature T*, a dip in the spectrum systematically appears, separating the infrared charge excitation spectrum into two components with distinct energy scales. The change from monotonic behaviour in $\sigma_{ab}$ is found to be concurrent with the onset of phonon anomalies in Raman and infrared spectra below T*. The optical data are suggested to be evidence for the appearance of an inhomogeneous distribution of carriers rather than the opening of a simple gap for charge excitations below T*, an interpretation which is consistent with recent angle-resolved photoemission and electronic Raman spectra. We find that the behaviour below T* and the absence of any anomalies at Tc can be interpreted assuming a Bose-Einstein condensation of preformed pairs.
Criniferous entire maps with absorbing Cantor bouquets. February 2022, 42(2): 989-1010. doi: 10.3934/dcds.2021144
Leticia Pardo-Simón
Department of Mathematics, The University of Manchester, Manchester, M13 9PL, UK
Received October 2020; revised July 2021; published February 2022; early access October 2021
It is known that, for many transcendental entire functions in the Eremenko-Lyubich class $ \mathcal{B} $, every escaping point can eventually be connected to infinity by a curve of escaping points. When this is the case, we say that the functions are criniferous. In this paper, we extend this result to a new class of maps in $ \mathcal{B} $. Furthermore, we show that if a map belongs to this class, then its Julia set contains a Cantor bouquet; in other words, it is a subset of $ \mathbb{C} $ ambiently homeomorphic to a straight brush.
Keywords: Transcendental entire function, absorbing Cantor bouquets, dynamic rays, criniferous, Eremenko-Lyubich class.
Mathematics Subject Classification: Primary 37F10; Secondary 54H20, 30D05, 54F15.
Citation: Leticia Pardo-Simón. Criniferous entire maps with absorbing Cantor bouquets. Discrete & Continuous Dynamical Systems, 2022, 42 (2) : 989-1010. doi: 10.3934/dcds.2021144
Figure 1. On the left, hairs of a Cantor bouquet intersecting a circle $\partial \mathbb{D}_R$, some of them multiple times. For each hair, dashes represent points with lower potential than that of the last point that intersects $\partial \mathbb{D}_R$. On the right, the image of the hairs under an ambient homeomorphism $\psi$ to a straight brush. $[-Q, Q]^2$ is a square whose boundary the hairs intersect at most once, and $S_R := \psi^{-1}((-Q, Q)^2)$
Figure 2. Construction of a neighbourhood of $z_n(\eta)$ in Claim 2 by pulling back balls centred at $f^j(z_n)$ for all $1\leq j\leq n$ such that $f^j(z_n) \in \partial S_R$
Figure 3. Proof of Proposition 8 by interpolating the maps $\psi_g$ and $\varphi_f$ using the annulus $\mathcal{R}$ shown in orange
Environmental Engineering Research
Korean Society of Environmental Engineers (대한환경공학회)
2005-968X(eISSN)
The Environmental Engineering Research (EER) is published quarterly by the Korean Society of Environmental Engineers (KSEE). The EER covers a broad spectrum of the science and technology of air, soil, and water management while emphasizing scientific and engineering solutions to environmental issues encountered in industrialization and urbanization. Particularly, interdisciplinary topics and multi-regional/global impacts (including eco-system and human health) of environmental pollution as well as scientific and engineering aspects of novel technologies are considered favorably. The scope of the Journal includes the following areas, but is not limited to:
1. Atmospheric Environment & Climate Change: Global and local climate change, greenhouse gas control, and air quality modeling
2. Renewable Energy & Waste Management: Energy recovery from waste, incineration, landfill, and green energy
3. Environmental Biotechnology & Ecology: Nano-biosensor, environmental genomics, bioenergy, and environmental eco-engineering
4. Physical & Chemical Technology: Membrane technology and advanced oxidation
5. Environmental System Engineering: Seawater desalination, ICA (instrument, control, and automation), and water reuse
6. Environmental Health & Toxicology: Micropollutants, hazardous materials, ecotoxicity, and environmental risk assessment
Online submission: http://submit.eeer.org/. Indexed in KSCI, KCI, SCOPUS, and SCIE.
DEVELOPMENT OF ADSORBENT USING BYPRODUCTS FROM KOREAN MEDICINE FOR REMOVING HEAVY METALS
Kim, S.W.; Lim, J.L.
https://doi.org/10.4491/eer.2007.12.1.001
Most of the herb residue produced by oriental medical clinics (OMC) and hospitals (OMH) in Korea is discarded as waste. To develop an adsorbent for removing heavy metals from wastewater, various pre-treatment methods for the herb residue were evaluated by potentiometric titration, Freundlich isotherm adsorption tests and kinetic adsorption tests. The herb residue was pre-treated to increase its adsorption capacity by cleaning with distilled water, 0.1 N HCl or 0.1 N NaOH, and by heating at $370^{\circ}C$ for 30 min. During potentiometric titration, the pre-treated herb residue showed a typical weak acid-weak base titration curve and a short pH break like commercial activated carbon. The log-log plots in the Freundlich isotherm test were linear for the herb residue pre-treated with NaOH or HCl, as for commercial activated carbon. The adsorption capacity ($q_e$) in the Freundlich isotherm test for $Cr^{6+}$ was 1.5 times higher for the herb residue pre-treated with HCl than for activated carbon. On the other hand, the herb residue pre-treated with NaOH showed good adsorption capacities for $Pb^{2+}$, $Cu^{2+}$ and $Cd^{2+}$, even though these were lower than those of activated carbon. In the kinetic test, most of the heavy metals were removed within the first 10 min of contact, after which the system approached equilibrium with increasing contact time. The removal rate of heavy metals increased with an increase in the amount of adsorbent. Likewise, the removal rates of heavy metals were higher for the herb residue pre-treated with NaOH than for that pre-treated with HCl. The adsorption preference of herb residues pre-treated with NaOH or HCl was $Pb^{2+} > Cu^{2+}$ or $Cd^{2+} > Cr^{6+}$, in that order. In conclusion, the herb residue can be used as an alternative adsorbent for the removal of heavy metals, depending on the pre-treatment method.
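As an aside, the log-log linearization used in such Freundlich isotherm tests, $q_e = K C_e^{1/n}$, is straightforward to fit numerically; a minimal sketch with hypothetical data (the concentrations and uptakes below are illustrative only, not values from the study):

```python
import numpy as np

# Freundlich isotherm fit, q = K * C**(1/n), via log-log linear regression.
C = np.array([5.0, 10.0, 25.0, 50.0, 100.0])  # equilibrium conc. (mg/L)
q = np.array([2.1, 3.4, 6.0, 8.9, 14.2])      # adsorbed amount (mg/g)

slope, intercept = np.polyfit(np.log10(C), np.log10(q), 1)
K = 10**intercept  # Freundlich constant; slope is the exponent 1/n

print(f"K = {K:.2f}, 1/n = {slope:.2f}")  # a linear log-log plot, as reported
                                          # above, indicates Freundlich behavior
```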
PERFORMANCE OF TWO-PHASE UASB REACTOR IN ANAEROBIC TREATMENT OF WASTEWATER WITH SULFATE
Oh, Sae-Eun
Two-phase UASB reactors treating wastewater with sulfate were operated to assess performance, the competition for organics between sulfate-reducing bacteria (SRB) and methane-producing bacteria (MPB), and the change in the characteristics of the microorganisms. The reactors were fed in parallel with a synthetic wastewater of 4,000-5,000 mgCOD/L and a sulfate concentration of $800-1,000\;mgSO_4/L$. In the MPR (methane-producing reactor) and CR (control reactor), COD removal efficiencies were 90% and 60%, respectively, at an organic loading rate (OLR) of 6 gCOD/L, while the amount of biogas and the methane content were 6.5 L/day and 80%, and 3 L/day and 50%, respectively. However, the portion of electron flow used by SRB at the OLR of 6 gCOD/L day in the MPR and CR was 3% and 26%, respectively. This indicated that increasing the OLR of wastewater containing high sulfate, as in the CR, resulted in decreased activity and cell decay of MPB, while SRB adapted immediately to the new environment. The MPB activities in the MPR and CR were 2 and $0.38\;kgCH_4-COD$/gVSS day at the OLR of 6 gCOD/L. This indicated that SRB gradually dominated over MPB during long-term operation with wastewater containing sulfate, as a consequence of SRB outcompeting MPB. In addition, since the solution within the AFR was maintained around pH 5.0, MPB such as Methanothrix spp., which are very important for the formation of granules, were detached from the surface of the granules due to decreased activity caused by the limited transport of substrate into the MPB. Therefore, a significant amount of sludge was washed out from the reactor.
TOXICITY IDENTIFICATION AND CONFIRMATION OF METAL PLATING WASTEWATER
Kim, Hyo-Jin;Jo, Hun-Je;Park, Eun-Joo;Cho, Ki-Jong;Shin, Key-Il;Jung, Jin-Ho
The toxicity of metal plating wastewater was evaluated using acute toxicity tests on Daphnia magna. To identify the toxicants in the metal plating wastewater, several manipulations such as solid phase extraction (SPE), ion exchange, and graduated pH adjustment were used. The SPE test had no significant effect on baseline toxicity, suggesting the absence of toxic non-polar organics in the metal plating wastewater. However, anion exchange largely decreased the baseline toxicity, by 88%, indicating that the causative toxicants were inorganic anions. Considering the high concentration of chromium in the metal plating wastewater, the anion is thought to be a Cr(VI) species. The graduated pH test, which showed that the toxicity was independent of pH change, strongly supports this assumption. However, as revealed by a toxicity confirmation experiment, the initial toxicity of the metal plating wastewater (24-h TU = 435) was not explained by Cr(VI) alone (24-h TU = 725 at $280\;mg\;L^{-1}$). Addition of nickel ($29.5\;mg\;L^{-1}$) and copper ($26.5\;mg\;L^{-1}$) largely decreased the chromium toxicity, down to 417 TU, indicating an antagonistic interaction between the heavy metals. This heavy metal interaction was successfully predicted by the equation 24-h $TU\;=\;3.67\;{\times}\;\ln([Cu]\;+\;[Ni])\;+\;79.44$ at a fixed concentration of chromium.
POTABLE WATER TREATMENT BY POLYACRYLAMIDE BASE FLOCCULANTS, COUPLED WITH AN INORGANIC COAGULANT
Bae, Young-Han;Kim, Hyung-Jun;Lee, Eun-Joo;Sung, Nak-Chang;Lee, Sung-Sik;Kim, Young-Han
For this study, we polymerized polyacrylamide base flocculants (PAA) and tested their properties and settling efficiency as a treatment for potable water. The most common chemicals for potable water treatment in Korea are alum and PAC. However, for various reasons (such as the rainy season or algae), inorganic flocculants cannot be relied on alone to solve all the problems caused by the poor quality of inflow water. When PAA is coupled with coagulants in a potable water purification process, the turbidity removal efficiency increases by a factor of three compared with a single-chemical system using PAC (raw water: 5.21 NTU; treated with PAA+PAC: 0.34 NTU; treated with PAC: 1.04 NTU). It is possible to offset the toxic effect of residual monomers in treated water using PAA, because the concentration of residual acrylamide is less than 400 mg/L in the polymer itself and less than $0.04\;{\mu}g/L$ in the treated water at a dosage of 0.1 mg/L. Therefore, PAA may be a workable and dependable option for treating potable water drawn from highly polluted source water.
TREATMENT OF PHENOL CONTAINED IN WASTE WATER USING THE HETEROGENIZED FENTON SYSTEM
Kim, Seong-Bo
The Fenton system using a homogeneous iron catalyst is very powerful for the degradation of organic compounds, but has the disadvantage that Fe ions must be removed from the water after wastewater treatment. Thus, the iron catalyst was bound to supports such as inorganic and polymer materials. The PVP-supported iron catalyst showed good catalytic performance in the degradation of phenol contained in wastewater, and the iron catalyst supported on ${SO_4}^{2-}$ type PVP (KEX 511) showed the best catalytic performance. A reaction kinetic study was also carried out for this system. Reaction rate constants for the various catalysts were obtained from the pseudo first order equation. The reaction rate constant of the heterogenized $FeCl_2/PVP$ catalyst is about three-fold smaller than that of the $FeCl_2$ catalyst.
Cornerstones
Spearman's Rank Sum Correlation Test
There is a non-parametric test for an association (not necessarily linear) between two variables, called Spearman's Rank Correlation Test, that can be used when the assumptions/requirements of the (parametric) correlation test are not satisfied.
The only requirements of this non-parametric test are that the data are paired, come from a simple random sample, and can be ranked (if they are not ranks already).
Essentially, all this test does is find ranks $x_i$ and $y_i$ for each pair of $X_i$ and $Y_i$ values and then run Pearson's correlation test on these ranks.
Recall that $$r = \frac{s_{xy}}{s_x s_y} = \frac{\sum_i (x_i - \overline{x})(y_i - \overline{y})}{\sqrt{\sum_i (x_i-\overline{x})^2} \sqrt{\sum_i (y_i - \overline{y})^2}}$$
We denote this value as $r_S$ when it is computed from ranks to avoid confusion.
Procedurally, one ranks each sample separately. Then for each pair, one finds the difference of ranks $d_i$.
The test statistic $r_S$, when there are no rank ties, can be simplified to
$$r_S = 1 - \frac{6 \sum d_i^2}{n(n^2-1)}$$
To see this, first note that as there are no ties, the $x_i$'s and $y_i$'s both consist of the integers from $1$ to $n$, inclusive.
Consequently, since both sets of ranks have the same sum of squared deviations (i.e., $\sum_i (x_i-\overline{x})^2 = \sum_i (y_i - \overline{y})^2$), we can rewrite $r_S$ as $$r_S = \frac{\sum_i (x_i - \overline{x})(y_i - \overline{y})}{\sum_i (x_i-\overline{x})^2}$$ Ultimately, the denominator is just a function of $n$: $$\begin{array}{rcl} \displaystyle{\sum_{i=1}^n (x_i-\overline{x})^2} & = & \displaystyle{\sum_{i=1}^n x_i^2 - 2\sum_{i=1}^n x_i\overline{x} + \sum_{i=1}^n \overline{x}^2}\\ & = & \displaystyle{\left[ \sum_{i=1}^n x_i^2 \right] - 2n\overline{x}\left[\frac{\sum_{i=1}^n x_i}{n}\right] + n \overline{x}^2}\\ & = & \displaystyle{\left[ \sum_{i=1}^n i^2 \right] - 2n\overline{x}^2 + n \overline{x}^2}\\ & = & \displaystyle{\left[ \sum_{i=1}^n i^2 \right] - n\overline{x}^2}\\ & = & \displaystyle{\frac{n(n+1)(2n+1)}{6} - n \left( \frac{n+1}{2} \right)^2}\\ & = & \displaystyle{n(n+1) \left( \frac{2n+1}{6} - \frac{n+1}{4} \right)}\\ & = & \displaystyle{n(n+1) \left( \frac{8n+4}{24} - \frac{6n+6}{24} \right)}\\ & = & \displaystyle{n(n+1) \left( \frac{2n-2}{24} \right)}\\ & = & \displaystyle{\frac{n(n+1)(n-1)}{12}}\\ & = & \displaystyle{\frac{n(n^2-1)}{12}}\\ \end{array}$$
As for the numerator...
$$\begin{array}{rcl} \displaystyle{\sum_{i=1}^n (x_i - \overline{x})(y_i - \overline{y})} & = & \displaystyle{\sum_{i=1}^n x_i(y_i-\overline{y}) - \sum_{i=1}^n \overline{x} (y_i - \overline{y})}\\ & = & \displaystyle{\sum_{i=1}^n x_i y_i - \overline{y} \sum_{i=1}^n x_i - \overline{x} \sum_{i=1}^n y_i + n \overline{x}\overline{y}}\\ & = & \displaystyle{\left[ \sum_{i=1}^n x_i y_i \right] - n\overline{x}\overline{y}}\\ & = & \displaystyle{\left[ \sum_{i=1}^n x_i y_i \right] - n \left( \frac{n+1}{2} \right)^2}\\ & = & \displaystyle{\left[ \sum_{i=1}^n x_i y_i \right] - \frac{n(n+1)(2n+1)}{6} + \frac{n(n^2-1)}{12}}\\ & = & \displaystyle{\left[ \sum_{i=1}^n x_i y_i \right] - \sum_{i=1}^n x_i^2 + \frac{n(n^2-1)}{12}}\\ & = & \displaystyle{\frac{2\sum_{i=1}^n x_i y_i}{2} - \frac{\sum_{i=1}^n (x_i^2 + y_i^2)}{2} + \frac{n(n^2-1)}{12}}\\ & = & \displaystyle{\frac{n(n^2-1)}{12} - \frac{\sum_{i=1}^n (x_i^2 - 2x_iy_i + y_i^2)}{2}}\\ & = & \displaystyle{\frac{n(n^2-1)}{12} - \frac{\sum_{i=1}^n (x_i - y_i)^2}{2}}\\ & = & \displaystyle{\frac{n(n^2-1)}{12} - \frac{\sum_{i=1}^n d_i^2}{2}}\\ \end{array}$$
Finally, dividing both numerator and denominator by $n(n^2-1)/12$, we can simplify things to
$$r_s = \frac{\displaystyle{\frac{n(n^2-1)}{12} - \frac{\sum_{i=1}^n d_i^2}{2}}}{\displaystyle{\frac{n(n^2-1)}{12}}} = 1 - \frac{6 \sum d_i^2}{n(n^2-1)}$$
Critical values for small $n$ can be found in standard tables; for the example below, the critical value for $n = 7$ at $\alpha = 0.05$ is $0.786$.
Suppose one wishes to use a non-parametric test to test the claim that there is a correlation between one's age and the number of parties they attend in a two-month period, given the following data:
$$\begin{array}{l|c|c|c|c|c|c|c} \textrm{Age} & 16 & 24 & 18 & 17 & 23 & 27 & 32\\\hline \textrm{Parties} & 3 & 2 & 5 & 4 & 0 & 6 & 1 \end{array}$$
First we rank the $x$'s and $y$'s separately:
$$\begin{array}{l|c|c|c|c|c|c|c} & 1 & 5 & 3 & 2 & 4 & 6 & 7 \\\hline \textrm{Age} & 16 & 24 & 18 & 17 & 23 & 27 & 32\\\hline \textrm{Parties} & 3 & 2 & 5 & 4 & 0 & 6 & 1\\\hline & 4 & 3 & 6 & 5 & 1 & 7 & 2 \end{array}$$
Then, for each pair, we find the difference of the ranks and its square.
$$\begin{array}{l|c|c|c|c|c|c|c} d & -3 & 2 & -3 & -3 & 3 & -1 & 5\\\hline d^2 & 9 & 4 & 9 & 9 & 9 & 1 & 25 \end{array}$$
Now we can calculate the test statistic:
$$r_S = 1 - \frac{6 \sum d_i^2}{n(n^2-1)} = 1 - \frac{(6)(66)}{(7)(49-1)} = -0.1786$$
Since this test statistic is smaller in absolute value than the corresponding critical value at $\alpha = 0.05$ (i.e., $C.V. = 0.786$), we fail to reject the null hypothesis, concluding that there is no evidence of a correlation.
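As a quick check, here is a minimal Python/NumPy sketch (assuming no ties; the helper `ranks` is our own illustrative function) that reproduces the computation above:

```python
import numpy as np

age = np.array([16, 24, 18, 17, 23, 27, 32])
parties = np.array([3, 2, 5, 4, 0, 6, 1])

def ranks(a):
    """Rank the values of a from 1 to n (assumes no ties)."""
    r = np.empty(len(a), dtype=int)
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

d = ranks(age) - ranks(parties)                    # differences of ranks
n = len(age)
r_s = 1 - 6 * np.sum(d ** 2) / (n * (n ** 2 - 1))
print(round(r_s, 4))                               # prints -0.1786
```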
June 2019, 39(6): 3345-3364. doi: 10.3934/dcds.2019138
Hardy-Sobolev type inequality and supercritical extremal problem
José Francisco de Oliveira 1, João Marcos do Ó 2,* and Pedro Ubilla 3
Department of Mathematics, Federal University of Piauí, 64049-550 Teresina, PI, Brazil
Department of Mathematics, University of Brasília, 70910-900, Brasília, DF, Brazil
Departamento de Matematica, Universidad de Santiago de Chile, Casilla 307, Correo 2, Santiago, Chile
* Corresponding author: João Marcos do Ó
Received: July 2018; Revised: December 2018; Published: February 2019.
Fund Project: The third author was supported by FONDECYT grants 1181125, 1161635 and 1171691.
This paper deals with Hardy-Sobolev type inequalities involving variable exponents. Our approach also enables us to prove existence results for a wide class of quasilinear elliptic equations with supercritical power-type nonlinearity with variable exponent.
Keywords: Hardy-type inequality, critical exponents, supercritical growth, extremal problem, Sobolev space.
Mathematics Subject Classification: Primary: 46E35, 26D10, 35J62; Secondary: 35B33.
Citation: José Francisco de Oliveira, João Marcos do Ó, Pedro Ubilla. Hardy-Sobolev type inequality and supercritical extremal problem. Discrete & Continuous Dynamical Systems, 2019, 39 (6) : 3345-3364. doi: 10.3934/dcds.2019138
Genetic programming for iterative numerical methods
Dominik Sobania ORCID: orcid.org/0000-0001-8873-7143 1,
Jonas Schmitt ORCID: orcid.org/0000-0002-8891-0046 2,
Harald Köstler ORCID: orcid.org/0000-0002-6992-2690 2 &
Franz Rothlauf ORCID: orcid.org/0000-0003-3376-427X 1
Genetic Programming and Evolvable Machines (2021)
We introduce GPLS (Genetic Programming for Linear Systems) as a GP system that finds mathematical expressions defining an iteration matrix. Stationary iterative methods use this iteration matrix to solve a system of linear equations numerically. GPLS aims at finding iteration matrices with a low spectral radius and a high sparsity, since these properties ensure a fast error reduction of the numerical solution method and enable the efficient implementation of the methods on parallel computer architectures. We study GPLS for various types of system matrices and find that it easily outperforms classical approaches like the Gauss–Seidel and Jacobi methods. GPLS not only finds iteration matrices for linear systems with a much lower spectral radius, but also iteration matrices for problems where classical approaches fail. Additionally, solutions found by GPLS for small problem instances also show good performance for larger instances of the same problem.
Numerical methods are used in various disciplines to solve problems where an analytical solution does not exist or is difficult to find. In computational science and engineering, for example, one tries to model physical phenomena and then to approximate these usually continuous mathematical models numerically. The computation of a numerical solution often requires solving a system of (non-)linear equations. Since the number of unknowns can be huge in numerous real-world applications, efficient and scalable solvers for such systems are necessary. Unfortunately, the optimal solver method depends on the system of equations itself and therefore it is impossible to formulate a single algorithm for this purpose. However, over the past decades, several numerical solvers have been proposed in the field of applied mathematics, which are usually efficient for certain classes of system matrices (the coefficient matrix of a linear system) [3, 21].
Genetic programming (GP) is an evolutionary computation technique that has been successfully applied to various real-world problems during the last decades [18]. Especially in the field of symbolic regression, where the aim is to find mathematical expressions solving a given problem, GP has been used to approximate even complex problems [12, 27]. This makes GP an interesting approach for finding new iterative numerical methods as it can be used to find the required mathematical expressions to generate iteration matrices based on certain classes of given system matrices.
This paper applies a novel GP approach for linear systems (GPLS), an approach that finds an iteration matrix for a given linear system. To ensure that the resulting iterative numerical method can be executed efficiently on parallel computer architectures, we are interested in a low spectral radius and a high sparsity of the found iteration matrices. We evaluate the found methods on standard test problems and real-world use cases to demonstrate the human competitiveness of GPLS and to compare it with traditional methods.
GPLS uses certain elements (e.g., some functions and terminals) previously proposed in a short paper by Mahmoodabadi et al. [15] in their presentation of a first prototype for solving linear systems with GP, and by Schmitt et al. [22] who focus on special classes of sparse linear systems. The aim of GPLS is to find a good iteration matrix based on an input system matrix in general and not one that only serves special cases. The iteration matrix is the core component of all considered numerical methods. For GPLS, we define an objective function that measures the generated iteration matrices' spectral radius and the sparsity, as well as the method's complexity (size of the generated mathematical term equal to the number of tree nodes). The spectral radius is an indicator for the convergence of the generated method, high sparsity provides performance advantages in the calculation and implementation of the method, and the complexity measure serves as bloat control.
Following this introduction, we present a background to iterative numerical methods and explain the relevant stationary iterative numerical methods in Sect. 2, describe the discretization of partial differential equations to systems of linear equations in Sect. 3, introduce GPLS in detail in Sect. 4, and present our experiments and discuss the results in Sect. 5. Section 6 concludes the paper.
Iterative numerical methods
The most fundamental problem within linear algebra is finding the solution of the linear system \(Ax = b\), where \(A \in {\mathbb {R}}^{m \times n}\) is the coefficient matrix, \(x \in {\mathbb {R}}^n\) the vector of unknowns, and \(b \in {\mathbb {R}}^m\) the right-hand side vector. If A is a square nonsingular (or invertible) matrix, there exists a single unique solution \(x^*\) of the system.
Most linear systems derived from science and engineering phenomena do possess certain special structures. The most eminent property of these systems is sparsity which means that the majority of the entries of the coefficient matrix A are zero while the number of nonzero entries is usually of order n. Sparsity significantly reduces the number of required elementary matrix operations when solving the linear problem. For instance, assuming that matrix A has \(\alpha n\) nonzero entries, a matrix-vector multiplication can be performed in \({\mathcal {O}}(\alpha n)\) operations. Therefore, the design of efficient algorithms for solving these systems relies heavily on exploiting the sparsity of the coefficient matrix A.
In general, methods for solving linear systems can be classified either as direct or as iterative methods. Direct methods require only a finite number of steps. An example is Gaussian elimination, the standard textbook method for solving an arbitrary linear system of equations. The commonality of all direct methods is that they directly manipulate the individual entries of A and thus need to operate on an explicit representation of the matrix. Transformations applied within direct solvers such as Gaussian or Householder triangulation do not preserve the sparse structure of A [25].
In contrast, iterative methods perform successive approximations to a linear system to obtain more accurate solutions. Typically, these approximations only require the calculation of matrix-vector products, where all matrices involved can be derived from the system matrix without destroying its sparsity. As a result, although specialized direct methods for solving sparse systems exist [5], the largest, currently considered systems are solved using iterative methods [2, 25].
The two main classes of iterative methods for solving systems of linear equations are stationary and non-stationary methods whereby most of the latter belong to the subclass of Krylov subspace methods. Stationary methods solve a linear system by repeatedly applying an iteration matrix, derived from the coefficient matrix A, to an initial guess for the vector of unknowns x, to get a series of approximations that converge to the actual solution of the system. Stationary methods have the advantage that they are easier to implement and to analyze than non-stationary methods. This paper focuses exclusively on stationary methods to automatically generate iterative solvers for sparse linear systems.
In the following, we provide a brief overview of the stationary iterative methods. For a more comprehensive overview of the iterative methods, including non-stationary methods, see [3, 7, 21], and [6].
Stationary iterative methods are expressed in the general form
$$\begin{aligned} x^{(k+1)} = G x^{(k)} + f, \end{aligned}$$
where G is the iteration matrix, \(x^{(k)}\) the solution vector in iteration k (i.e. the current iterate), and f a vector that is obtained by transforming the right-hand side b. Neither G nor f depends on the iteration count. The standard iterative methods are Jacobi and Gauss–Seidel.
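To make the scheme concrete, the following minimal Python/NumPy sketch (the function name and the stopping tolerance are our own illustrative choices) applies this fixed-point iteration for a given G and f:

```python
import numpy as np

def stationary_iteration(G, f, x0, tol=1e-10, max_iter=10_000):
    """Sketch of the fixed-point iteration x^{(k+1)} = G x^{(k)} + f.
    Stops once consecutive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = G @ x + f
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```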
The Jacobi method
To derive the Jacobi method we consider the equations of the linear system \(A x = b\) in isolation, which leads to
$$\begin{aligned} \sum ^{n}_{j=1} a_{ij} x_j = b_i. \end{aligned}$$
By solving each of the equations for \(x_i\) we obtain
$$\begin{aligned} x_i = \left( b_i - \sum _{j \ne i} a_{ij}x_j\right) / a_{ii}. \end{aligned}$$
If we assume that all entries except \(x_i\) are fixed in every individual equation, the iterative scheme is defined by
$$\begin{aligned} x_i^{(k+1)} = \left( b_i - \sum _{j \ne i} a_{ij} x_j^{(k)}\right) / a_{ii}. \end{aligned}$$
This iteration is the scalar formulation of the Jacobi method in which \(x_i^{(k)}\) corresponds to the ith component of the solution vector in iteration k and \(a_{ij}\) to the entry of A in row i and column j. Since all equations are treated independently, the order of examination is irrelevant. In fact, \(x_i^{(k+1)}\) can be computed simultaneously for all equations, which makes the Jacobi method easily parallelizable.
To provide a definition of the Jacobi method in matrix form, we first introduce the splitting
$$\begin{aligned} A = D - L - U, \end{aligned}$$
where D is the diagonal, L the strictly lower triangular and U the strictly upper triangular part of A. The term strictly refers to the fact that the diagonal of A is excluded. We assume that all diagonal entries of A are nonzero. Using this splitting, the matrix form of the Jacobi method is obtained by
$$\begin{aligned} x^{(k+1)} = D^{-1} (L + U) x^{(k)} + D^{-1}b. \end{aligned}$$
Note that the inverse of a diagonal matrix is a diagonal matrix with the original diagonal entries inverted. This iteration also corresponds to our basic formulation of stationary iterative methods (see Eq. 2.1) with the iteration matrix
$$\begin{aligned} G&= D^{-1} (L + U) = I - D^{-1} A, \end{aligned}$$
$$\begin{aligned} f&= D^{-1}b. \end{aligned}$$
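For illustration, a short sketch that assembles G and f for the Jacobi method from a given A and b (assuming, as stated above, that all diagonal entries of A are nonzero):

```python
import numpy as np

def jacobi_iteration_matrix(A, b):
    """Sketch: G = I - D^{-1} A and f = D^{-1} b for the Jacobi method."""
    D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part of A
    G = np.eye(A.shape[0]) - D_inv @ A
    f = D_inv @ b
    return G, f
```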
The Gauss–Seidel method
The Jacobi method can simultaneously compute the new iterate for all components of the solution vector. In contrast, the Gauss–Seidel method examines them in sequence, such that already computed components are taken into account:
$$\begin{aligned} x_i^{(k+1)} = \left( b_i - \sum _{j < i} a_{ij}x_j^{(k+1)} - \sum _{j > i}a_{ij}x_j^{(k)}\right) / a_{ii}. \end{aligned}$$
The computation of each new component depends on the previously computed components and cannot be performed simultaneously. While this implies a serialization of the computation, the order can be varied. Different orders will inevitably lead to different values of the new iterate \(x^{(k+1)}\) affecting the overall convergence of the method. Therefore, the Gauss–Seidel method's serial nature in general prohibits a parallel computation. However, when the matrix A is sparse, not all components of the new iterate \(x^{(k+1)}\) depend on the values of all components of the old iterate. Then, it is possible to define a partitioning of x such that there are no dependencies between the components in the same partition and consequently the Gauss–Seidel method can be applied to each partition in parallel. For a more detailed discussion of the parallelization of the Gauss–Seidel method, see [21]. The matrix formulation of the Gauss–Seidel method is defined by
$$\begin{aligned} x^{(k+1)} = (D - L)^{-1} U x^{(k)} + (D - L)^{-1} b. \end{aligned}$$
Note that the iteration contains the computation of the inverse of \((D - L)\), which is a lower triangular matrix. A multiplication with the inverse of this matrix corresponds to solving a linear system via forward substitution and therefore does not require the explicit computation of the inverse. The Gauss–Seidel method can also be formulated as a stationary iterative method (see Eq. 2.1) with the iteration matrix
$$\begin{aligned} G= (D - L)^{-1} U = I - (D - L)^{-1} A, \end{aligned}$$
$$\begin{aligned} f= (D - L)^{-1} b. \end{aligned}$$
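A corresponding sketch of one Gauss–Seidel step; note that np.tril(A) equals \(D - L\) in the splitting \(A = D - L - U\), and that a production implementation would use forward substitution instead of a general solve:

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """Sketch of one Gauss-Seidel step: x^{(k+1)} = (D - L)^{-1} (U x^{(k)} + b)."""
    M = np.tril(A)           # lower triangular part incl. diagonal, i.e. D - L
    U = -np.triu(A, k=1)     # strictly upper part with the sign of the splitting
    return np.linalg.solve(M, U @ x + b)
```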
Successive over-relaxation
The successive over-relaxation (SOR) method [29] extends the Gauss–Seidel method by taking a weighted average of the previous iterate and the newly computed Gauss–Seidel iterate. Denoting the Gauss–Seidel update by \({\hat{x}}_i^{(k+1)}\), it is defined by
$$\begin{aligned} x_i^{(k+1)} = \omega {\hat{x}}_i^{(k+1)} + (1 - \omega ) x_i^{(k)}. \end{aligned}$$
The idea is to choose \(\omega \) in a way that accelerates the convergence of the method to the actual solution. Note that if \(\omega = 1\), SOR corresponds to the Gauss–Seidel method. SOR only converges for values of \(\omega \in (0, 2)\) [11]. In general it is not possible to estimate the optimal value of \(\omega \) a priori and therefore a heuristic is usually employed to choose an \(\omega \). Like the Jacobi and Gauss–Seidel methods, the SOR method can also be defined in terms of matrices and vectors by the iteration
$$\begin{aligned} x^{(k+1)} = (D - \omega L)^{-1} (\omega U + (1 - \omega )D) x^{(k)} + \omega (D - \omega L)^{-1} b. \end{aligned}$$
Similar to Gauss–Seidel, all inverted matrices are lower triangular and the respective matrix-vector products can be obtained via forward substitution without explicitly computing the inverse. For the complete derivation of SOR, see [29].
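In the same spirit, a sketch of one SOR step following the matrix formulation above (setting \(\omega = 1\) recovers the Gauss–Seidel step):

```python
import numpy as np

def sor_step(A, b, x, omega):
    """Sketch of one SOR step:
    x^{(k+1)} = (D - omega L)^{-1} ((omega U + (1 - omega) D) x^{(k)} + omega b)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, k=-1)    # strictly lower part with the sign of the splitting
    U = -np.triu(A, k=1)     # strictly upper part with the sign of the splitting
    M = D - omega * L
    return np.linalg.solve(M, (omega * U + (1 - omega) * D) @ x + omega * b)
```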
Convergence of stationary methods
Both the Jacobi method and the Gauss–Seidel method define a sequence of approximations of the basic form given in Eq. 2.1. If convergence is reached, the limit x of this iteration satisfies
$$\begin{aligned} x = G x + f. \end{aligned}$$
Essential for the convergence of stationary iterative methods is the spectral radius \(\rho \) of the iteration matrix G defined by
$$\begin{aligned} \rho (G) = \max _{1 \le j \le n}|\lambda _j(G)|, \end{aligned}$$
where \(\lambda _j(G)\) are the eigenvalues of G.
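Since the spectral radius is central to everything that follows (Sect. 4 later computes it with Python's NumPy), a one-line sketch suffices:

```python
import numpy as np

def spectral_radius(G):
    """rho(G) = max_j |lambda_j(G)|, computed from the eigenvalues of G."""
    return np.max(np.abs(np.linalg.eigvals(G)))
```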
Theorem 1
Let G be a square matrix with spectral radius \(\rho (G)\), then
$$\begin{aligned} \lim _{k \rightarrow \infty } G^{k} = 0 \end{aligned}$$
if and only if \(\rho (G) < 1\).
For a proof, see [21]. Equation 2.2 can be reformulated into the following system of linear equations:
$$\begin{aligned} x - G x= f, \end{aligned}$$
$$\begin{aligned} (I - G) x= f. \end{aligned}$$
Equation 2.5 has a unique solution \(x^*\) if and only if \((I - G)\) is a non-singular square matrix. Subtracting Eq. 2.2 from the basic iteration scheme presented in Eq. 2.1 leads to
$$\begin{aligned} x^{(k+1)} - x^{*} = G(x^{(k)} - x^{*}) = \ldots = G^{k+1}(x_0 - x^{*}). \end{aligned}$$
From Theorem 1 it follows that, if \(\rho (G) < 1\), the sequence \(x^{(k+1)} - x^{*} = G^{k+1}(x_0 - x^{*})\) converges to zero. Therefore, we can conclude the following about the convergence of an arbitrary stationary iterative method:
Theorem 2

Let G be a square matrix with \(\rho (G) < 1\), then \(I - G\) is non-singular and the iteration presented in Eq. 2.1 converges for any f and \(x_0\). Conversely, if the iteration presented in Eq. 2.1 converges for any f and \(x_0\), then \(\rho (G) < 1\).
As Theorem 2 states, the convergence of any stationary iterative method solely depends on finding an iteration matrix with a spectral radius smaller than one. Furthermore, the general convergence factor of an iterative method is equal to the spectral radius of its iteration matrix [21]. Therefore, the design of an efficient iterative method is equivalent to finding an iteration matrix with a low (or even minimal) spectral radius.
Since the computation of the spectral radius is expensive, it is often not practical to compute it directly. For Jacobi, Gauss–Seidel and other well-studied methods there exist certain criteria for the system matrix A that ensure the convergence of these methods [7]. One possibility to estimate the spectral radius for arbitrary stationary iterative methods is the use of Local Fourier Analysis (LFA) [20, 28].
Discretization of partial differential equations
Many problems in science and engineering can be mathematically modeled in the form of a partial differential equation (PDE). A classic example is the Navier–Stokes equation that describes the motion of a viscous fluid [24] and can be used to model a wide range of phenomena, with applications ranging from weather forecasting to aircraft design. Although there exists a rich theory about PDEs, only a few cases of analytical solutions for these equations are known. As a remedy, numerical methods can be applied to approximate the solution of a PDE at a finite number of points what transforms the problem of solving a PDE into a problem of solving a system of linear equations. This transformation is usually referred to as discretization. The most widely used methods of discretizing a PDE are the finite difference method (FDM), the finite volume method (FVM), and the finite element method (FEM). To provide a brief introduction to the discretization of PDEs, we focus on FDM. For more information on the numerical solutions of PDEs, see [1, 10, 17, 23], and [26].
One of the most basic but also quite common PDE is Poisson's equation which is defined as
$$\begin{aligned} \varDelta u = f, \end{aligned}$$
where \(\varDelta \) is the Laplace operator, u and f are real or complex-valued functions. Typically, f is given and one wants to solve the equation for u. For \(u, f \in {\mathbb {R}}^2\) it takes the form
$$\begin{aligned} \left( \frac{\partial ^2}{\partial x^2} + \frac{\partial ^2}{\partial y^2}\right) u(x,y) = f(x,y). \end{aligned}$$
If u is sufficiently differentiable at a point (x, y), we can form the Taylor expansions
$$\begin{aligned} u(x+h, y)= u(x,y) + h u_x(x,y) + \frac{h^2}{2} u_{xx}(x,y) + \frac{h^3}{6} u_{xxx}(x,y) + {\mathcal {O}}(h^4) \end{aligned}$$
$$\begin{aligned} u(x, y+h)= u(x,y) + h u_y(x,y) + \frac{h^2}{2} u_{yy}(x,y) + \frac{h^3}{6} u_{yyy}(x,y) + {\mathcal {O}}(h^4), \end{aligned}$$
in which \(u_x\) and \(u_y\) denote the first order partial derivatives of u with respect to x and y, respectively.
$$\begin{aligned} u_x(x,y) = \frac{\partial }{\partial x} u(x,y),\;u_{xx}(x,y) = \frac{\partial ^2}{\partial x^2} u(x,y),\ldots \end{aligned}$$
When stopping the Taylor expansion after the cubic term, as above, the resulting approximation error is of order \({\mathcal {O}}(h^4)\). Similarly, we can define
$$\begin{aligned} u(x-h, y)= u(x,y) - h u_x(x,y) + \frac{h^2}{2} u_{xx}(x,y) - \frac{h^3}{6} u_{xxx}(x,y) + {\mathcal {O}}(h^4) \end{aligned}$$
$$\begin{aligned} u(x, y-h)= u(x,y) - h u_y(x,y) + \frac{h^2}{2} u_{yy}(x,y) - \frac{h^3}{6} u_{yyy}(x,y) + {\mathcal {O}}(h^4). \end{aligned}$$
Adding Eq. 3.2 to Eq. 3.4 and Eq. 3.3 to Eq. 3.5 and dividing by \(h^2\) yield the following approximation for the second order partial derivative of u:
$$\begin{aligned} \frac{\partial ^2}{\partial x^2} u(x,y)= \frac{u(x + h, y) + u(x - h, y) - 2 u(x,y)}{h^2} + {\mathcal {O}}(h^2) \end{aligned}$$
$$\begin{aligned} \frac{\partial ^2}{\partial y^2} u(x,y)= \frac{u(x, y+h) + u(x, y-h) - 2 u(x,y)}{h^2} + {\mathcal {O}}(h^2) \end{aligned}$$
In both cases, the approximation error is of order \({\mathcal {O}}(h^2)\). Although not covered here, the approximation of first or higher-order derivatives can be defined in a similar fashion. Equations 3.6 and 3.7 can now be used to define an approximation for the Laplace operator. This results in the discrete version of Poisson's equation (see Eq. 3.1)
$$\begin{aligned} \frac{1}{h^2}(u(x + h, y) + u(x - h, y) + u(x, y+h) + u(x, y-h) - 4 u(x,y)) = f. \end{aligned}$$
Consequently, to compute the solution of Poisson's equation on an arbitrary two dimensional domain using a finite difference approximation, a system of linear equations must be solved, whereby the solution at each point is represented by an equation of the form of the discrete version of Poisson's equation. A common decision is to choose a uniform h for the whole domain, such that the individual equations are independent of its value.
In order to solve the system, an additional set of equations must be defined that defines how the system behaves at the boundaries of the domain. These boundary conditions usually come in three types:
Dirichlet condition: \(u(x) = \phi (x)\)
Neumann condition: \(\frac{\partial }{\partial n} u(x) = 0\)
Cauchy condition: \(\frac{\partial }{\partial n} u(x) + \alpha (x) u(x) = \gamma (x)\)
The vector n refers to a unit vector normal to the boundary of the domain and directed outwards. In many cases, boundary conditions are of mixed type, which means that different conditions are defined on different parts of the boundary. As we do not explicitly treat boundary conditions in this work, we assume that the boundary conditions are contained in the right-hand side vector b. Thus, our derivation results in a linear system of the form
$$\begin{aligned} A x = b. \end{aligned}$$
We present an example and assume a \(5\times 5\) grid with 9 interior grid points, Dirichlet conditions at all boundaries, and a natural ordering of the grid points. This problem can, for instance, result in a linear system with the matrix A and right-hand side b:
$$\begin{aligned} A = \begin{bmatrix} 4 &\quad -1 &\quad 0 &\quad -1 &\quad 0 &\quad 0 &\quad 0 &\quad 0 &\quad 0 \\ -1 &\quad 4 & \quad -1 &\quad 0 &\quad -1 &\quad 0 &\quad 0 &\quad 0 & \quad 0 \\ 0 &\quad -1 &\quad 4 &\quad 0 & \quad 0 &\quad -1 &\quad 0 &\quad 0 &\quad 0 \\ -1 &\quad 0 & \quad 0 &\quad 4 &\quad -1 &\quad 0 &\quad -1 &\quad 0 &\quad 0 \\ 0 & \quad -1 &\quad 0 &\quad -1 &\quad 4 &\quad -1 & \quad 0 &\quad -1 &\quad 0 \\ 0 &\quad 0 &\quad -1 &\quad 0 &\quad -1 & \quad 4 &\quad 0 &\quad 0 &\quad -1 \\ 0 & \quad 0 &\quad 0 &\quad -1 &\quad 0 &\quad 0 &\quad 4 &\quad -1 &\quad 0 \\ 0 &\quad 0 &\quad 0 &\quad 0 & \quad -1 &\quad 0 &\quad -1 &\quad 4 &\quad -1 \\ 0 &\quad 0 &\quad 0 &\quad 0 &\quad 0 &\quad -1 &\quad 0 &\quad -1 & \quad 4 \end{bmatrix},\quad b=\left[ {\begin{array}{c}-h^2f_{{22}}+u_{{12}}+u_{{21}}\\ -h^2f_{{32}}+u_{{31}}\\ -h^2f_{{42}}+u_{{52}}+u_{{41}}\\ -h^2f_{{23}}+u_{{13}}\\ -h^2f_{{33}}\\ -h^2f_{{43}}+u_{{53}}\\ -h^2f_{{24}}+u_{{14}}+u_{{25}}\\ -h^2f_{{34}}+u_{{35}}\\ -h^2f_{{44}}+u_{{54}}+u_{{45}}\end{array}}\right] \end{aligned}$$
\(f_{ij}\) and \(u_{ij}\) denote the values of f and u at position (ih, jh) within the domain. Note that A is a sparse band matrix. Therefore, we expect an iterative solver to preserve this property when computing an approximate solution for the system. Approximating Poisson's equation with finite differences in one or three dimensions can be performed in a similar manner, which also results in linear systems of the form \(A x = b\) with a band matrix A.
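One convenient way to assemble this matrix programmatically is as a Kronecker sum of the 1D three-point stencil; the following sketch reproduces the \(9\times 9\) matrix A shown above:

```python
import numpy as np

n = 3  # interior grid points per dimension on the 5x5 grid -> 9 unknowns
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D three-point stencil
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))     # 2D five-point stencil
print(A[0])  # first row: [ 4. -1.  0. -1.  0.  0.  0.  0.  0.]
```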
GPLS: genetic programming for linear systems
GPLS is a standard GP approach which aims to find mathematical expressions that define an iteration matrix. It uses the system matrix A of a linear system as input. In contrast to classical regression approaches [13, 14], the solutions of GPLS are iteration matrices G, and no training points are given. Our aim is to find iteration matrices G with a low spectral radius \(\rho (G)\). When we obtain such an iteration matrix G, we can use it with iterative methods to find an approximate solution for a linear system.
Representation of iteration matrices
We use a tree-based representation to describe an iteration matrix by a mathematical term. The result of this term is the iteration matrix. We use a function and terminal set that allow the application of well-known iterative numerical methods [3]. Accordingly, the function set is defined as
$$\begin{aligned} \{+, -, *\}. \end{aligned}$$
The terminal set contains multiple variations of the system matrix A, which can be calculated offline before the start of a GP run. For our experiments, we use
$$\begin{aligned} \{A, D, D^{-1}, (A-D), (L+D), (L+D)^{-1}, U\}, \end{aligned}$$
where A is the system matrix, D is the diagonal matrix of A, L is the strictly lower triangular part of A, and U is the strictly upper triangular part of A.
An example tree representation for a candidate iteration matrix described by the term \((A - D) + D^{-1}(D^{-1}+U)\)
We do not use the right hand side vector b in the terminal set because it is not necessary for the convergence of the used iterative method as defined in Theorem 2 [21]. Furthermore, the omission of the vector b allows us to work exclusively on square matrices, so it is not required to use a strongly typed GP (STGP) [16] with custom addition and subtraction operations for matrices and vectors.
Figure 1 shows an example tree representation for a candidate iteration matrix described by the term \((A - D) + D^{-1}(D^{-1}+U)\). The leaves contain the terminals which are the system matrix A and pre-calculated variations of A (the tree's lowest level in Fig. 1 depicts this for an example system matrix).
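As a sketch of how the terminal set can be pre-computed once per system matrix (following the conventions above, where L and U denote the strictly lower and upper triangular parts of A; the dictionary keys and the assumption that the inverses exist are our own):

```python
import numpy as np

def build_terminal_set(A):
    """Sketch: pre-compute the terminal matrices derived from A."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)   # strictly lower triangular part of A
    U = np.triu(A, k=1)    # strictly upper triangular part of A
    return {
        "A": A,
        "D": D,
        "D^-1": np.diag(1.0 / np.diag(A)),
        "A-D": A - D,
        "L+D": L + D,
        "(L+D)^-1": np.linalg.inv(L + D),
        "U": U,
    }
```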
Objective function
The spectral radius of the iteration matrix G determines the convergence behavior of stationary methods (Eq. 2.3). Iterative methods converge if \(0< \rho (G) < 1\) holds and G is not a diagonal matrix [21], which makes the spectral radius the most important part of the objective function. In addition, we want to increase the sparsity of the iteration matrix. Therefore, our objective function rewards a high number of zeros in the iteration matrix. Finally, we also want to keep the complexity of the mathematical term under control. We combine these three goals into a single objective function and obtain
$$\begin{aligned} f(s,c,z) = {\left\{ \begin{array}{ll} w_s s/s_{max} + w_z z/z_{max} + w_c c/c_{max} & \text{ if } \rho (G) > 0 \wedge G \notin diag({\mathbb {R}}^{N \times N})\\ w_s + w_z z/z_{max} + w_c c/c_{max} & \text{ else }\end{array}\right. }, \end{aligned}$$
where s is the spectral radius \(\rho \) of the candidate iteration matrix, z is the number of non-zero entries in the considered candidate iteration matrix, and c is the number of nodes in the tree representing the candidate iteration matrix. \(s_{max}, z_{max}\) and \(c_{max}\) are the largest observed values. \(w_c c/c_{max}\) measures the number of nodes (complexity) of the expression's parse tree and serves as bloat control. This kind of bloat control is a variant of the well-known parsimony pressure [4, 19]. We assume a minimization problem. The coefficients \(w_s\), \(w_z\), and \(w_c\) are real-valued weights in the interval [0, 1] such that \(w_s + w_z + w_c = 1\). As mentioned before, for the convergence of the iterative methods, we must, in addition to a low spectral radius, ensure that \(\rho (G) \ne 0\) and G is not a diagonal matrix. Individuals that violate these conditions are penalized by setting \(s/s_{max} = 1\).
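A sketch of this objective function; the normalization constants \(s_{max}\), \(z_{max}\), and \(c_{max}\) are assumed to be tracked by the surrounding GP run, and the default weights are the values reported in Sect. 5:

```python
import numpy as np

def fitness(G, c, s_max, z_max, c_max, w_s=0.8997, w_z=0.1, w_c=0.0003):
    """Sketch of the minimization objective f(s, c, z); c is the number
    of nodes in the parse tree of the candidate expression."""
    s = np.max(np.abs(np.linalg.eigvals(G)))   # spectral radius rho(G)
    z = np.count_nonzero(G)                    # non-zero entries of G
    is_diagonal = np.count_nonzero(G - np.diag(np.diag(G))) == 0
    if s > 0 and not is_diagonal:
        return w_s * s / s_max + w_z * z / z_max + w_c * c / c_max
    return w_s + w_z * z / z_max + w_c * c / c_max   # penalty: s/s_max set to 1
```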
Box-plots of the time in milliseconds required to calculate the spectral radius \(\rho \) over problem size n
As the calculation of the spectral radius \(\rho \) is the component that determines the computational effort of the fitness function and consequently also of an entire GP run, we measure the run-time in milliseconds required for the calculation of the spectral radius \(\rho \) for random square matrices for increasing problem sizes n. Figure 2 shows box-plots of the time in milliseconds required to calculate the spectral radius \(\rho \) over problem size n. We use Python's Numpy module to calculate the spectral radius \(\rho \) and measure the run-times using an AMD Ryzen Threadripper 3990X (4.3 GHz maximum boost clock) with 64 cores. For every problem size n, we report 100 time measurements. The plots show that the required run-time to calculate the spectral radius \(\rho \) increases notably with the problem size n. However, even for matrices of size \(1000\times 1000\), the median run-time is still lower than one second.
Experiments and results
We study the performance of GPLS and the resulting iterative methods on various standard test problems with randomly generated dense system matrices, different types of randomly generated band matrices, and real-world applications.
For comparability, we use the same settings for our GP approach in all experiments. The individuals in the first generation (\(i = 0\)) are generated by the ramped-half-and-half method. As variation operators, we use standard subtree crossover with a crossover probability of \(p_c = 0.8\) and standard subtree mutation with a mutation probability of \(p_m = 0.05\). For selection, we use tournament selection of size 3. The population size is set to 1,500 and we stop a GP run after 30 generations.
The weights for the objective function were determined based on some manual parameter tuning. We set the weight for the spectral radius to \(w_s = 0.8997\), the weight for the non-zero values to \(w_z = 0.1\), and method's complexity weight (nodes in the parse tree) to \(w_c = 0.0003\). The largest value is assigned to \(w_s\) because we require \(\rho (G) < 1\) to guarantee convergence of the iterative method. To favor small solutions, we assign a very low value to \(w_c\).
Performance of GPLS for random system matrices
Spectral radius over iterations for a \(100\times 100\) matrix
The main application of the iterative methods found by GPLS is the solution of linear systems discretized from PDEs. However, for a first analysis of GP performance, we use randomly generated system matrices as input. We study the change of the three components of the objective function—spectral radius, non-zero values, and the number of nodes in the parse tree—for the best individual over time in a GP run on randomly generated system matrices of increasing size. To generate a random system matrix of a given size, we fill the elements with uniformly distributed integer values ranging from \(-10\) to 10.
Non-zero values over iterations for a \(100\times 100\) matrix
Complexity over iterations for a \(100\times 100\) matrix
Figure 3 shows the median spectral radius \({\tilde{\rho }}\) of 100 GP runs over the number of generations. As input we use a randomly generated \(100\times 100\) dense system matrix. We find that the average median of the spectral radius decreases from about 11 (at the beginning of the runs) to about 3e−14 at the end of the runs.
For the same randomly generated \(100\times 100\) system matrix used as input, Figs. 4 and 5 show the median non-zero entries \({\tilde{z}}\) and the median number of nodes \({\tilde{c}}\), respectively. We find that the number of non-zero entries decreases over the run, with a strong reduction between generations 10 and 15. The median number of nodes increases slightly over a run, starting with about seven tree nodes and increasing to 13 nodes. Due to the low weight \(w_c\) for parsimony pressure, the number of nodes slightly increases while still allowing GPLS to improve on the spectral radius and sparsity. This is reflected by the choice of the weights in the objective function, where spectral radius and sparsity are more important than the size of the resulting iterative numerical method (\(w_s, w_z>w_c\)).
Table 1 Median and interquartile range of spectral radius, number of non-zero entries, and number of nodes in the parse tree (method's complexity) in first (\(i=0\)) and last (\(i=29\)) generation. We present results for different problem sizes
Table 1 extends the analysis and presents results for the spectral radius, the number of non-zero entries, and the number of nodes in the parse tree for random problems of size \(10\times 10\) to \(100\times 100\). For each problem size, we perform 100 runs with a random system matrix. We show the median as well as the interquartile range (IQR; in parentheses) of the best solution in the initial (\(i = 0\)) and last generation (\(i = 29\)). We use the IQR as a proxy for the variance of the results. It is defined as the difference between the 75th and the 25th percentile. Best median results of a run are printed in bold. All differences between the first and last generations were tested for significance with a Wilcoxon rank-sum test (\(p < 0.001\)).
We find that GPLS reliably finds solutions with a low spectral radius (median spectral radius \({\tilde{\rho }} < 1.0\) for all studied problem instances). For some problem sizes, we observe a quite large IQR because the search space is complex and GPLS does not always find a successful solution (where \(\rho < 1.0\)). However, this is not a problem for the practical use of GPLS, since we can simply check the found solution for its suitability (calculate the spectral radius) and, if necessary, restart the GPLS run. In addition to the spectral radius, the GP approach also improves the sparsity of the found iteration matrices for all problem sizes. Only the number of nodes increases during a GP run. This is expected, as the weight \(w_c\) is chosen very low to work only as slight bloat control, and a median size of 15 nodes is acceptable (comparable to the Gauss–Seidel and the Jacobi methods).
Generalization of iteration matrices found by GPLS
A direct comparison of GPLS and classical stationary iterative methods is difficult as GPLS' main effort comes from the search for a suitable term that builds an iteration matrix from a system matrix. This effort is high, especially if the considered linear systems are very large. In contrast, classical stationary iterative methods like Gauss–Seidel do not require any search process but are directly applicable.
A relevant question is whether GPLS finds iteration matrices that are general and can (analogously to classical stationary iterative methods) be applied to a wide range of different problems. When searching for such generalizable expressions, we can utilize the fact that linear systems discretized from PDEs often have similar structures and characteristics independently of their degree of detail and size. We can take advantage of this and evolve iteration matrices with GPLS for small linear systems and subsequently use the found solutions on larger systems with a similar structure, based on the assumption that the found solutions for the small systems also yield satisfactory results for the larger systems.
We study the generalization of the found solutions with a set of diagonal \(n\times n\) band matrices used as system matrices, which are also relevant for real-world problems (see tridiagonal Toeplitz matrices [8]). A band matrix is a sparse matrix with a main diagonal and additional diagonals on both sides of the main diagonal containing non-zero values [7]. We use diagonal matrices in 1D and 2D with additional diagonals on the upper side and on the lower side of the main diagonal [9]. The structure of these matrices is independent of the node size n because, for each matrix, we use consistent values for the diagonals.
In our experiments, we randomly generate 100 system matrices of small size (\(n=5\) and \(n=9\)). For each of these problems, GPLS determines an iteration matrix. In the next step, for each of the 100 system matrices (for each considered problem type), we generate corresponding system matrices with larger n. The larger system matrices are also diagonal matrices. We apply the solution that has been found by GPLS for the small value of n to the larger system matrices and evaluate the corresponding spectral radii \(\rho \) of the iteration matrices. Our hope is that the solutions found for small n are general and also work well for larger n.
Box-plots of the spectral radius \(\rho \) of diagonal 1D matrices over problem size n (starting with \(n=5\))
Figure 6 shows box-plots of the spectral radius \(\rho \) over the problem size n of the \(n\times n\) system matrices. Each box-plot contains the spectral radius of 100 iteration matrices. The dashed line shows a spectral radius of 1.0. In this experiment, GPLS was only applied to diagonally dominant and diagonal system matrices in 1D of size \(5\times 5\). Thus, only the spectral radii of the iteration matrices in the first box-plot are a direct result of GPLS, and for this first box-plot we considered only iteration matrices with a spectral radius \(\rho < 1.0\). For the larger system matrices, we did not apply GPLS anew but re-used the iterative methods evolved for the small system matrices (\(n=5\)).
As expected, the spectral radii become larger with increasing n. Nevertheless, the median spectral radius is always lower than 1.0 for the analyzed matrix sizes. For \(n=5\), GPLS finds solutions with a median spectral radius \(\rho = 9.23e-6\). Applying these solutions to a problem with \(n=1000\) still yields a median spectral radius \({\tilde{\rho }} < 1.0\).
Figure 7 shows the same analysis, but this time we start from \(9\times 9\) diagonal system matrices in 2D. Again, the median spectral radius is always lower than 1.0. However, with an increasing problem size n, we see an increase of the number of outliers with a spectral radius \(\rho > 1.0\).
In summary, on the analyzed problems, the iterative methods found by GPLS for small system matrices are generalizable and can be re-used for larger n, if the basic structure of the problem stays the same.
GPLS overcomes limitations of existing stationary iterative methods
The well-known Gauss–Seidel method converges if the system matrix A is either symmetric positive definite or strictly diagonally dominant. If this is not the case, there is no guarantee that the Gauss–Seidel method will find an appropriate iteration matrix G [7, 21]. Addressing such cases is a good challenge for GPLS, because GP can search the whole space of potential methods and may come up with solutions for problems where the Gauss–Seidel method fails.
Consequently, we generate typical random system matrices where the Gauss–Seidel method cannot find an appropriate iteration matrix and study the properties of iteration matrices generated by GPLS. We use heat maps for the visual inspection of system and iteration matrices, which are graphical representations of the numerical elements in a matrix. Heat maps make it easier to see structural characteristics like diagonals and the sparsity of a matrix, as each entry/value is represented by a specific color.
Randomly generated dense system matrix
Corresponding iteration matrix found by GPLS
Figure 8 shows a randomly generated dense system matrix of size \(25\times 25\). For this example, we filled the matrix with equally distributed integer values ranging from \(-10\) to \(10\). The Gauss–Seidel method only finds an iteration matrix with a spectral radius of around 28,000. Hence, the Gauss–Seidel method cannot be used. In contrast, GPLS finds a solution for this example described by the term \((((A D)+((U+D)+(L+D)))-(((D^{-1}+U)+(((L+D)^{-1}-U)-(D^{-1}+(L+D)^{-1})))+((A D)+((U+D)+(L+D)))))\). Figure 9 shows the resulting iteration matrix. The matrix has a spectral radius of \(2.22\times 10^{-16}\) as well as high sparsity. The few non-zero values are concentrated in the upper triangular area because the found term is dominated by the terminals \(L+D\) and \(U\).
Randomly generated not diagonally dominant band matrix as system matrix
A second example is a randomly generated tridiagonal band matrix of size \(25\times 25\) as system matrix. For each diagonal, we used an equally distributed random integer value from the interval \([-10, 10]\). Thus, the band matrix is not diagonally dominant. Figure 10 shows the heat map for this system matrix. The spectral radius of the iteration matrix found by the Gauss–Seidel method is 6.0. Thus, the Gauss–Seidel method is not usable in this case.
In contrast, GPLS again finds an expression that is able to solve the problem. The term found by GPLS is \(U + D^{-1}\). The resulting iteration matrix (see Fig. 11) has a spectral radius of 0.2 and is similar to the system matrix but has one diagonal less.
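The following sketch (NumPy; the diagonal values are made up, only the term \(U + D^{-1}\) is taken from the text) illustrates how such an evolved term is evaluated from the splitting \(A = L + D + U\):

```python
# Sketch: evaluate a GPLS-style term over the terminal set derived from
# the splitting A = L + D + U, here for the evolved term U + D^{-1}.
import numpy as np

def terminals(A):
    Lo = np.tril(A, -1)
    D = np.diag(np.diag(A))
    U = np.triu(A, 1)
    return {"A": A, "U": U, "L+D": Lo + D,
            "D^-1": np.linalg.inv(D),
            "(L+D)^-1": np.linalg.inv(Lo + D)}

# Tridiagonal, not diagonally dominant (one fixed value per diagonal).
n = 25
A = (np.diag(np.full(n - 1, 7.0), -1) + np.diag(np.full(n, 3.0))
     + np.diag(np.full(n - 1, -9.0), 1))

t = terminals(A)
G = t["U"] + t["D^-1"]                       # the evolved term
print(np.max(np.abs(np.linalg.eigvals(G)))) # its spectral radius
```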
Convergence analysis of iteration matrices found by GPLS
This section studies the convergence speed of the iterative numerical methods found by GPLS for two types of band matrices. We compare the solutions found by GPLS with those of the Jacobi, Gauss–Seidel, and SOR methods. For this purpose, we consider linear systems of the form
$$\begin{aligned} A x = 0. \end{aligned}$$
Sparse diagonally dominant band matrices
In a first set of experiments, we study the convergence behavior for linear equations that arise from the discretization of PDEs. In particular, we consider Poisson's equation in 1D, 2D, and 3D with the following boundary condition (Dirichlet):
$$\begin{aligned} u(x) = 0. \end{aligned}$$
We transform the PDEs into a system of linear equations (compare Sect. 3) using FDM, which leads to a system of the form of Eq. 5.1. In all three cases (1D, 2D, and 3D), the resulting system matrices are sparse diagonally dominant band matrices, for which, e.g., Jacobi and Gauss–Seidel are guaranteed to converge. GPLS evolved the following terms to calculate the iteration matrix G:
1D: \((D^{-1})^{13} ((L+D)^{-1})^2 U^6 (D^{-1} + U)^3\)
2D: \((D^{-1})^4 (U - D^{-1})\)
3D: \(U+D^{-1}\)
Table 2 Spectral radii of the iteration matrices for the discretized Poisson equations
Table 2 compares the spectral radii of the iteration matrices of the Jacobi, Gauss–Seidel, SOR, and GPLS methods for all three cases of the discretized Poisson equation. For SOR, we set the relaxation parameter \(\omega = 0.8\) [we tested values from the interval (0, 2) with step size 0.1]. As expected, \(\rho \) is lowest for the iteration matrices found by GPLS. The spectral radii of the iteration matrices constructed by the Jacobi or Gauss–Seidel method are only slightly lower than one.
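The \(\omega\)-sweep is simple to reproduce; a sketch (NumPy assumed, with the 1D Poisson matrix as an example) could look as follows:

```python
# Sketch: choose the SOR relaxation parameter by sweeping omega in (0, 2)
# and keeping the value that minimizes the spectral radius of
# G = (D + w L)^{-1} ((1 - w) D - w U).
import numpy as np

def sor_iteration_matrix(A, w):
    Lo = np.tril(A, -1)
    D = np.diag(np.diag(A))
    U = np.triu(A, 1)
    return np.linalg.solve(D + w * Lo, (1.0 - w) * D - w * U)

def best_omega(A):
    omegas = np.arange(0.1, 2.0, 0.1)
    radii = [np.max(np.abs(np.linalg.eigvals(sor_iteration_matrix(A, w))))
             for w in omegas]
    i = int(np.argmin(radii))
    return omegas[i], radii[i]

# 1D Poisson system matrix from FDM (tridiagonal [-1, 2, -1]).
n = 50
A = (2.0 * np.eye(n) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
print(best_omega(A))
```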
To study the convergence behavior of the resulting iterative methods more closely, we employ the iteration scheme
$$\begin{aligned} x^{(i+1)} = G x^{(i)}, \end{aligned}$$
where \(x^{(i)}\) is the current solution and G the iteration matrix. As initial guess \(x^{(0)}\) for the solution of the system we use
$$\begin{aligned} x_{j}^{(0)} = 1 \quad \forall j = 1, \dots n, \end{aligned}$$
with n as the number of discretization points. As we know that the solution of the system defined in Eq. 5.1 is 0, the absolute error \(\epsilon \) is equal to the current approximation \(x^{(i)}\) during each iteration i:
$$\begin{aligned} \epsilon = x^{(i)} - 0 = x^{(i)}. \end{aligned}$$
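This experiment is easy to reproduce; a minimal sketch (NumPy assumed) of the iteration loop and error tracking:

```python
# Sketch: iterate x <- G x from x^(0) = 1 and record the L2 norm of the
# error, which equals ||x|| here since the exact solution of A x = 0 is 0.
import numpy as np

def error_history(G, n_iters=100):
    x = np.ones(G.shape[0])
    errors = []
    for _ in range(n_iters):
        x = G @ x
        errors.append(np.linalg.norm(x))
    return errors

# 'G' can be any iteration matrix (Jacobi, Gauss-Seidel, SOR or GPLS).
```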
Error over iterations for 1D Poisson
Figures 12, 13, and 14 plot the \(L^2\)-norm of the error \(\epsilon \) over the number of iterations for the Jacobi, Gauss–Seidel, SOR, and GPLS-evolved iteration methods for the solution of Poisson's equation in 1D, 2D, and 3D, respectively. As expected, all iteration schemes converge to the solution of the system, although the schemes evolved by GPLS converge much faster than Gauss–Seidel and Jacobi, as reflected in the lower spectral radii of their iteration matrices. For example, in the 1D and 2D cases, GPLS achieves convergence in only a few iterations. In the 3D case, the error increases during the first few iterations and then decreases rapidly. In all three instances, the convergence speed of SOR is similar to that of GPLS. However, the convergence speed of SOR strongly depends on the choice of the right relaxation parameter \(\omega \).
Being surprised by the extremely fast convergence of the iterative numerical methods evolved by GPLS (especially for the 1D case of Poisson's equation), we study whether GPLS has found as iteration matrix G the inverse of the system matrix A or a matrix that is very similar. If this were the case, the fast convergence behavior would be inevitable. Consequently, Fig. 15 shows the heat map of the product of A and the iteration matrix G found by GPLS. If the product were the identity matrix I, then GPLS would have found \(A^{-1}\). However, the figure shows that \(A G \ne I\), because we have four diagonals in the upper triangular part of the matrix and no main diagonal.
Product of system matrix A and iteration matrix G for 1D Poisson
Non-diagonally dominant band matrices
As a second and more challenging test case, we consider the class of non-diagonally dominant band matrices. For this class of matrices, e.g., the Jacobi and Gauss–Seidel methods are not guaranteed to converge in the general case. Thus, it is uncertain if a stationary iterative method that converges to the solution of an arbitrary linear system with a non-diagonally dominant system matrix can be evolved. To generate a suitable instance of this class of matrices, we randomly generate a tridiagonal matrix of the form
$$\begin{aligned} A_{1} = \begin{bmatrix} a & b & & & \\ c & a & b & & \\ & c & \ddots & \ddots & \\ & & \ddots & \ddots & b \\ & & & c & a \end{bmatrix}, \end{aligned}$$
that satisfies \(|a| < |b| + |c|\). As a test case, we randomly choose the values \(a = 4\), \(b = 8\), and \(c = 2\). We assume that this matrix corresponds to a one-dimensional problem. Thus, we can generate higher-dimensional problems of the same instance by computing the Kronecker sum of the matrix with itself:
$$\begin{aligned} A_{2} &= A_{1} \oplus A_{1},\\ A_{3} &= A_{2} \oplus A_{1}. \end{aligned}$$
The resulting system matrices are also non-diagonally dominant.
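A small sketch of this construction (NumPy assumed; n is kept small here so the 3D instance stays manageable):

```python
# Sketch: build 2D/3D instances via the Kronecker sum
# A (+) B = A kron I + I kron B.
import numpy as np

def kron_sum(A, B):
    return (np.kron(A, np.eye(B.shape[0]))
            + np.kron(np.eye(A.shape[0]), B))

n = 5  # small n for illustration; A3 already has size n^3 x n^3
A1 = (4.0 * np.eye(n) + np.diag(np.full(n - 1, 8.0), 1)
      + np.diag(np.full(n - 1, 2.0), -1))   # |a| < |b| + |c|
A2 = kron_sum(A1, A1)   # 2D instance
A3 = kron_sum(A2, A1)   # 3D instance
```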
Table 3 shows the spectral radii of the resulting Jacobi, Gauss–Seidel, and SOR iteration matrices, as well as of the iteration matrices evolved by GPLS. For SOR, we set the relaxation parameter \(\omega = 0.6\) [again, we tested values from the interval (0, 2) with step size 0.1]. The spectral radii of the iteration matrices generated by Jacobi and Gauss–Seidel are all larger than one. Thus, convergence cannot be guaranteed. In contrast, SOR and GPLS provide iteration matrices with a spectral radius smaller than one. For the band matrices in 1D, 2D, and 3D, GPLS evolved the following terms to calculate the iteration matrix G:
1D: \((D^{-1})^2 (D^{-1} U (D - A) + D)\)
2D: \(D^{-1}+U\)
Table 3 Spectral radii of the iteration matrices for a non-diagonally dominant band matrix
Analogous to the Poisson case, we study the convergence of the resulting iterative methods by solving the system defined in Eq. 5.1, using the same initial guess \(x^{(0)} = 1\). Again, we measure the \(L^2\) norm of the error \(\epsilon \) compared to the exact solution 0 during each iteration.
Error over iterations for a non-diagonally dominant band matrix in 1D
Figures 16, 17, and 18 plot the error over the number of iterations. As expected, the Jacobi and Gauss–Seidel methods do not converge in any of the three cases; instead, the error grows with each iteration. In contrast, GPLS was able to evolve an iteration matrix that leads to convergence in all three cases. The convergence speed is on a level similar to the SOR method in all three studied instances.
If we compare the convergence behavior of GPLS for non-diagonally dominant band matrices to the Poisson case (see Figs. 12, 13, and 14), we find that the evolved schemes on average require more iterations and that convergence is only achieved after an initial stagnation or even an increase of the error. Nevertheless, the evolved iteration matrices always lead to low errors in less than 100 iterations. The initial error increase can be explained by the fact that within a stationary iterative method, not all error components can be eliminated simultaneously. Consequently, the reduction of certain error components can cause an increase in the remaining ones and, thus, lead to the observed overall growth of the approximation error. However, after this initial error increase, the total error quickly decreases (with GPLS and SOR), which means that after particular error components are eliminated within the first few iterations, the remaining ones are efficiently reducible.
Numerical methods are used to solve problems for which analytical solutions do not exist or are difficult to find. In many real-world applications, the number of unknowns is huge, which makes efficient and scalable solvers for such systems necessary. As GP is known for finding human-competitive results for many real-world problems [18], its combination with domain knowledge from classical numerical methods allows us to come up with iteration matrices that beat existing iterative numerical methods.
This paper proposed GPLS, a GP-based approach that searches for mathematical expressions that define iteration matrices for given linear systems. The found iteration matrices are used by stationary iterative methods to numerically solve the system of linear equations. GPLS makes use of the elements of existing methods, like variations of the system matrix, to find iteration matrices that lead to a fast and reliable convergence of iterative numerical methods. Additionally, GPLS finds iteration matrices that are sparse in structure, such that the resulting iterative numerical methods can be executed efficiently on parallel computer architectures.
The results show that GPLS finds iteration matrices with a low spectral radius for both dense and sparse diagonal system matrices. Furthermore, the found iteration matrices are of high sparsity, and the mathematical terms describing these matrices are often of low complexity (small parse trees). The found solutions are often generalizable to larger dimensions, in the sense that solutions found for small problems also work well for larger system matrices. We showed this for two classes of band matrices, as the terms found by GPLS for small system matrices (\(n \le 9\)) can often be used to compute high-quality iteration matrices with a spectral radius \(\rho < 1.0\), even for larger problem instances (up to \(n=1000\)).
We also found that GPLS can find solutions where the classical iterative methods (the Gauss–Seidel and the Jacobi methods) fail to find appropriate iteration matrices. Furthermore, the iterative methods found by GPLS converge much faster compared to the Gauss–Seidel and Jacobi methods on the studied test problems and perform like the SOR method but without the need of an additional relaxation parameter.
In this work, we demonstrated that GPLS can evolve effective stationary iterative methods for solving different sparse linear systems. Another direction is the use of these methods for the preconditioning of Krylov subspace methods [21]. In this case, the goal is not to directly solve a sparse linear system but, instead, to use a stationary iterative method to compute an approximation for the inverse of a preconditioning matrix P. This approximation is then applied to the original system A, for instance, to obtain a right-preconditioned system \(A P^{-1} u = b\), which is easier to solve than the original system \(A x = b\).
In future work, we will study the ability of GPLS to evolve optimal stationary iterative methods to solve systems of the form \(P x = u\), where the solution x represents an approximation for \(P^{-1} u\). The evolved method can then easily be integrated into an existing solver for the resulting preconditioned system, such as a Krylov subspace method, to evaluate its effectiveness on different test cases.
Additionally, we will further analyze the scalability/generalizability of solutions found by GPLS and study ways to approximate the spectral radius—as a quality indicator for iteration matrices—and find other problem representations to enable an even faster computation.
W.F. Ames, Numerical Methods for Partial Differential Equations (Academic Press, 2014)
A. Amritkar et al., Recycling Krylov subspaces for CFD applications and a new hybrid recycling solver. J. Comput. Phys. 303, 222–237 (2015)
R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, Vol. 43 (SIAM, 1994)
D.S. Burke et al., Putting more genetics into genetic algorithms. Evolutionary Computation 6(4), 387–410 (1998)
T.A. Davis, Direct Methods for Sparse Linear Systems, Vol. 2 (Society of Industrial and Applied Mathematics, 2006)
J.W. Demmel, Applied Numerical Linear Algebra, Vol. 56 (Society of Industrial and Applied Mathematics, 1997)
G.H. Golub, C.F. Van Loan, Matrix Computations, Vol. 3 (Johns Hopkins University Press, 2012)
R.M. Gray, Toeplitz and Circulant Matrices: A Review (2006)
R.A. Horn, C.R. Johnson, Matrix Analysis, 2nd edn. (Cambridge University Press, Cambridge, 2012)
C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method (Courier Corporation, 2012)
W.M. Kahan, Gauss–Seidel Methods of Solving Large Systems of Linear Equations (2002)
M. Keijzer, Improving symbolic regression with interval arithmetic and linear scaling, in European Conference on Genetic Programming (Springer, 2003), pp. 70–82
J.R. Koza, Genetic Programming II: Automatic Discovery of Reusable Programs (MIT Press, 1994)
J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection (MIT Press, 1992)
R.G. Mahmoodabadi, H. Köstler, Genetic Programming Meets Linear Algebra: How Genetic Programming Can Be Used to Find Improved Iterative Numerical Methods, in Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '17. Berlin, Germany: ACM (2017), pp. 1403–1406
D.J. Montana, Strongly typed genetic programming. Evolutionary Computation 3(2), 199–230 (1995)
K.W. Morton, D.F. Mayers, Numerical Solution of Partial Differential Equations: An Introduction (Cambridge University Press, 2005)
R. Poli, W.B. Langdon, N.F. McPhee, A Field Guide to Genetic Programming (with contributions by J.R. Koza). Published via http://lulu.com (2008)
R. Poli, N.F. McPhee, Parsimony Pressure Made Easy, in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. GECCO '08. Atlanta, GA, USA: ACM (2008), pp. 1267–1274
H. Rittich, Extending and Automating Fourier Analysis for Multigrid Methods. Ph.D. thesis, University of Wuppertal, June (2017)
Y. Saad, Iterative Methods for Sparse Linear Systems, Vol. 82 (Society of Industrial and Applied Mathematics, 2003)
J. Schmitt, S. Kuckuk, H. Köstler, Constructing Efficient Multigrid Solvers with Genetic Programming, in Proceedings of the 2020 Genetic and Evolutionary Computation Conference. GECCO '20. Cancún, Mexico: ACM (2020), pp. 1012–1020
G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods (Oxford University Press, 1985)
R. Temam, Navier–Stokes Equations: Theory and Numerical Analysis, Vol. 343 (American Mathematical Soc., 2001)
L.N. Trefethen, D. Bau III, Numerical Linear Algebra, Vol. 50 (Society of Industrial and Applied Mathematics, 1997)
H.K. Versteeg, W. Malalasekera, An Introduction to Computational Fluid Dynamics: The Finite Volume Method (Pearson Education, 2007)
E.J. Vladislavleva, G.F. Smits, D.D. Hertog, Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic programming, in IEEE Transactions on Evolutionary Computation 13.2 (2008), pp. 333–349
R. Wienands, W. Joppich, Practical Fourier Analysis for Multigrid Methods (CRC Press, 2004)
D.M. Young, Iterative Solution of Large Linear Systems (Elsevier, 2014)
Open Access funding enabled and organized by Projekt DEAL.
Johannes Gutenberg University Mainz, Mainz, Germany
Dominik Sobania & Franz Rothlauf
Friedrich–Alexander University Erlangen–Nürnberg, Erlangen, Germany
Jonas Schmitt & Harald Köstler
Correspondence to Dominik Sobania.
Sobania, D., Schmitt, J., Köstler, H. et al. Genetic programming for iterative numerical methods. Genet Program Evolvable Mach (2021). https://doi.org/10.1007/s10710-021-09425-5
Revised: 15 September 2021
Genetic programming
Sparse linear algebra
Scales, Maps and Ratios
A scale compares the distance on a map to the actual distance on the ground. Scales can be depicted in a few different ways:
Graphic scale
A graphic scale represents a scale by using a small line with markings similar to a ruler. One side of the line represents the distance on the map, while the other side represents the true distances of objects in real life. By measuring the distance between two points on a map and then referring to the graphic scale, we can calculate the actual distance between those points.
In this picture, you can see that one centimetre on the scale represents 250 kilometres in real life.
Verbal scale
A verbal scale uses words to describe the ratio between the map's scale and the real world. For example, we could say "One centimetre equals fifteen kilometres" or we could write it as 1 cm = 15 km. This means that one centimetre on the map is equivalent to 15 kilometres in the real world.
Scale Ratio
Some maps use a representative fraction to describe the ratio between the map and the real world. If you need a refresher about ratios, see Looking at Relationships between Different Groups. We write scale ratios for maps just like other ratios, with the colon in the middle. For example, 1:100000.
Convert the following description to a proper scale ratio: 5 cm on the map = 25 m in real life.
Remember we need to have our two quantities in the same unit of measurement in a ratio. I'm going to convert everything to centimetres.
25 m = 25 × 100 cm = 2500 cm
Once we have equivalent quantities, we can write it as a scale ratio.
5:2500 = 1:500
Question: Given that the scale on a map is 1:50000, find the actual distance between two points that are 8 cm apart on the map.
Think: This means that 1 cm on the map represents 50000 cm (or 500 m) in real life.
Do: So to work out how far 8 cm represents, we need to multiply 8 by 50000. Then convert to km.
8 × 50000 = 400000 cm = 4000 m = 4 km
Now let's look at how we can do this process in reverse.
Question: Given that the scale on a map of a garden is 1:2000, how far apart should two fountains be drawn on the map if the actual distance between the fountains is 100 m?
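If you like to check such conversions with a short program, here is a small Python sketch (not part of the lesson; the function names are made up) covering both directions, including the garden question above:

```python
# Scale-ratio arithmetic for a scale written as 1:x.
def map_to_actual_km(map_cm, x):
    """Actual distance in km for a map distance in cm."""
    return map_cm * x / 100 / 1000      # cm -> m -> km

def actual_to_map_cm(actual_m, x):
    """Map distance in cm for an actual distance in metres."""
    return actual_m * 100 / x           # m -> cm, then shrink by the scale

print(map_to_actual_km(8, 50000))   # 4.0 km, as in the example above
print(actual_to_map_cm(100, 2000))  # 5.0 cm between the two fountains
```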
GM5-8
Interpret points and lines on co-ordinate planes, including scales and bearings on maps
2013, 3(4): 627-641. doi: 10.3934/naco.2013.3.627
Error bounds for symmetric cone complementarity problems
Xin-He Miao 1, and Jein-Shan Chen 2,
Department of Mathematics, School of Science, Tianjin University, Tianjin 300072, China
Department of Mathematics, National Taiwan Normal University, Taipei 11677
Received May 2013 Revised August 2013 Published October 2013
In this paper, we investigate the issue of error bounds for symmetric cone complementarity problems (SCCPs). In particular, we show that the distance between an arbitrary point in Euclidean Jordan algebra and the solution set of the symmetric cone complementarity problem can be bounded above by some merit functions such as Fischer-Burmeister merit function, the natural residual function and the implicit Lagrangian function. The so-called $R_0$-type conditions, which are new and weaker than existing ones in the literature, are assumed to guarantee that such merit functions can provide local and global error bounds for SCCPs. Moreover, when SCCPs reduce to linear cases, we demonstrate such merit functions cannot serve as global error bounds under general monotone condition, which implicitly indicates that the proposed $R_0$-type conditions cannot be replaced by $P$-type conditions which include monotone condition as special cases.
Keywords: Error bounds, $R_0$-type functions, symmetric cone complementarity problem, merit function.
Mathematics Subject Classification: Primary: 65K10; Secondary: 90C33.
Citation: Xin-He Miao, Jein-Shan Chen. Error bounds for symmetric cone complementarity problems. Numerical Algebra, Control & Optimization, 2013, 3 (4) : 627-641. doi: 10.3934/naco.2013.3.627
What is meaning of the term "language"?
I don't have much formal background, and I could not find a suitable explanation for this after searching on Google/Wikipedia.
What is the meaning of the term "language" as used in cryptographic protocols?
Sample sentence from a recent paper:
Every language in BQP admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical(and quantum) polynomial-time verifiers.
terminology complexity
$\begingroup$ en.wikipedia.org/wiki/Formal_language $\endgroup$ – Squeamish Ossifrage Feb 18 '19 at 17:17
$\begingroup$ @SqueamishOssifrage thank you! I guess I was missing the word formal and hence always ended up in web searches about programming languages $\endgroup$ – user1936752 Feb 18 '19 at 17:24
$\begingroup$ This doesn't really have anything to do with crypto and would be more at home in the cs stackexchange. $\endgroup$ – Maeher Feb 18 '19 at 18:11
$\begingroup$ @Maeher I agree at least for the example paper, as it seems a rather theoretical definition of a formal language that expresses a program that can be represented by a (non-deterministic) Turing machine. And I guess that is the definition for most crypto papers - but I would like to see some more consent before I'd post it as an answer myself. $\endgroup$ – Maarten Bodewes♦ Feb 18 '19 at 18:24
$\begingroup$ I would be very grateful if you could point out how that defintion of language ties in with the case in this sample paper where they talk about the language "admitting an argument system". As I understand it, it isn't too clear to me how it connects the notion of the language and what the language admits to the actual problem they discuss (which seems to be a cryptographic one of giving a zero knowledge proof). $\endgroup$ – user1936752 Feb 18 '19 at 18:33
The concept of a language has been formalized in the theory of computation; an accessible introduction to formal languages will make you familiar with it. In the article you are reading, the term has the following meaning:
Wikipedia BQP:
A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits $\{Q_n:n \in \mathbb{N}\}$, such that
For all $n \in \mathbb{N}$, $Q_n$ takes $n$ qubits as input and outputs 1 bit
For all $x$ in $L$, $\mathrm{Pr}(Q_{|x|}(x)=1)\geq \tfrac{2}{3}$
For all $x$ not in $L$, $\mathrm{Pr}(Q_{|x|}(x)=0)\geq \tfrac{2}{3}$
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
kelalaka
simhumileco
$\begingroup$ Hey-hey, welcome to crypto.SE, simhumileco. Now that's a nice answer to start your contributions! $\endgroup$ – Maarten Bodewes♦ Feb 18 '19 at 19:05
$\begingroup$ Thank you for the warm welcome @MaartenBodewes :) $\endgroup$ – simhumileco Feb 18 '19 at 19:06
This question is concerned with the definition of the term "language" as it appears in the definition of a complexity class; thus we can look for the answer in a reference work on complexity theory. One such work is the book of Arora and Barak, where one can read (the set $\{0,1\}^*$ having been previously defined as the set of all finite binary strings)
An important special case of functions mapping strings to strings is the case of Boolean functions, whose output is a single bit. We identify such a function $f$ with the subset $L_f = \{x : f(x) = 1\}$ of $\{0,1\}^*$ and call such sets languages or decision problems (we use these terms interchangeably).
Thus a language in this context is simply a subset of $\{0,1\}^*$, i.e., a set of finite binary strings. It is equivalent to the (perhaps more intuitive) notion of a decision problem, which is basically a question to which the answer is "yes" or "no". To such a problem we can associate the language consisting of the strings for which the answer is "yes"; for example the language associated to the question "Given an integer, is it prime?" is the language whose elements are precisely (the binary representations of) the prime integers.
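As a toy illustration (my own sketch, not from the book or the paper): the decision problem "is this integer prime?" corresponds to the language of binary strings encoding primes, and membership in the language is exactly the yes/no answer:

```python
# A language is a set of binary strings; here, PRIMES = binary encodings
# of prime integers. Deciding the problem = testing membership.
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def in_primes_language(x):          # x is a binary string, e.g. "101"
    return is_prime(int(x, 2))

print(in_primes_language("101"))    # True: 5 is prime
print(in_primes_language("100"))    # False: 4 is not
```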
fkraiem
$\begingroup$ Thank you, I think that's a very good point about the decision problem being represented as a language $\endgroup$ – user1936752 Feb 19 '19 at 14:07
Subspace-based self-interference cancellation for full-duplex MIMO transceivers
Ahmed Masmoudi & Tho Le-Ngoc
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 55 (2017)
This paper addresses the self-interference (SI) cancellation at baseband for full-duplex MIMO communication systems in consideration of practical transmitter imperfections. In particular, we develop a subspace-based algorithm to jointly estimate the SI and intended channels and the nonlinear distortions. By exploiting the covariance and pseudo-covariance of the received signal, we can increase the dimension of the received signal subspace while keeping the dimension of the signal subspace constant, and hence, the proposed algorithm can be applied to most of full-duplex MIMO configurations with arbitrary numbers of transmit and receive antennas. The channel coefficients are estimated, up to an ambiguity term, without any knowledge of the intended signal. A joint detection and ambiguity identification scheme is proposed. Simulation results show that the proposed algorithm can properly estimate the channel with only one pilot symbol and offers superior SI cancellation performance.
Half-duplex transmission is commonly used in the current communication systems by transmitting and receiving over orthogonal channels. Full-duplex communication represents an attractive alternative to save channel resources or to increase the transmission efficiency. The main deterrent to employ full-duplex is the large self-interference (SI) from the simultaneous transmission and reception over the same frequency band. The SI is usually several orders of magnitude higher than the intended signal received from the other transmitter, because the later travels a longer distance than the former signal. Recent works have shown that, using different cancellation stages, the SI can be sufficiently suppressed to properly detect the intended signal [1, 2].
The SI is first cancelled at the radio-frequency (RF) level, prior to the low-noise amplifier (LNA) and the analog-to-digital converter (ADC), to avoid overloading/saturation of these devices [1–3]. In other words, the SI should be sufficiently suppressed at RF to maintain the receiver's limited dynamic range. Then, further SI suppression can be done after the ADC at the baseband [4, 5]. In the following, we assume that a cancellation stage at RF is available and we concentrate on the SI cancellation in the baseband.
To further reduce the SI, channel state information of the interference link should be available. Therefore, estimating the SI channel is a critical issue in full-duplex systems. In [6], the SI channel estimation is performed in the frequency domain using a least square (LS) technique. LS and minimum mean square error (MMSE) channel estimations are proposed in [7] to estimate the SI channel in the relay station. However, these approaches ignore the intended signal coming from the other transceiver and treat it as additive noise. An adaptive least mean square algorithm to estimate the SI channel is proposed in [8] where the large SI compared to the intended signal and additive noise is exploited to obtain an estimate of the SI channel. A more elaborate LS-based estimator was presented in [9] where a first estimate of the SI channel is obtained by considering the intended signal as additive noise. Then an iterative detection of the intended signal and channel estimation is performed to obtain a better estimate of the channel. On the other hand, spatial domain cancellation attempts to reduce the SI by precoding at the transmit chain and decoding at the receive chain. Spatial domain cancellation is formulated in the frequency domain [10–12]. An alternative time domain formulation was presented in [13] by precoding the transmitted SI to coincide with the null space of the SI channel. These techniques are based on the knowledge of both the SI and intended channels at the two transceivers, which further motivates the development of channel estimators for full-duplex systems. A novel cancellation method is proposed in [14] by adding a cancelling signal to the original signal.
In addition to the SI channel information for SI cancellation, intended channel knowledge is an important prerequisite for signal detection. Motivated by this fact, channel estimation has been the subject of intense research. In the case of data-aided transmissions, training-based techniques can be applied [15, 16]. However, the amount of training increases dramatically with the number of antennas and channel order. Blind approaches have been proposed as more bandwidth efficient techniques [17, 18] where subspace methods, initially presented in [19], have a great potential. By decomposing the covariance matrix of the received signal, subspace methods exploit the orthogonality between the noise and the signal subspaces in the observation space to express the channel coefficients as a linear combination of a basis of the signal subspace. Although previous researches have shown the potential of this procedure to give an accurate estimate of the channel, it remains of limited practical interest. Actually, considering that the noise subspace needs to be nondegenerated, it is legitimate to wonder how we can satisfy this condition. Previous works rely on oversampling of the received signal or using more receive antennas than transmit antennas [20, 21]. However, such solutions increase the receiver cost and need additional hardware. Moreover, they may result in correlated noise which makes the subspace technique inappropriate. A maximum likelihood estimator was presented in [22] by exploiting the pilots in the intended signal.
In the full-duplex context, the transmitter impairments, including power amplifier (PA) nonlinearity and IQ mixer imbalance, become limiting factors and need to be reduced to properly detect the intended signal. In practice, the inband image resulting from the IQ mixer in mobile user is about 28 dB lower than the direct signal [23]. In the presence of strong SI of about 50 dB higher than the intended signal, this IQ image represents additional interference for the intended signal. The effects of transceiver impairments are illustrated in detail in [3, 24]. Due to the importance of the nonlinearities, a digital cancellation procedure has been proposed to reduce the effects of the PA in [25] by estimating the nonlinear coefficients of the PA and another algorithm has been proposed to deal with the IQ mixer imbalance [26]. However, there is no discussion about the intended signal in the existing literature, which limits the estimation performance if it is considered as additive noise.
In this work, we incorporate the intended signal in the estimation process. We also take into account the transmitter impairments when modelling the SI signal. For realistic multipath propagation channels, we need to estimate the SI channel, the intended channel and the distorted SI. Since the intended signal is unknown, we propose a novel subspace method to efficiently estimate the different parameters. Because the received signal consists of the SI and intended signals, the dimension of the signal subspace in full-duplex operation is at least twice that in traditional half-duplex operation [5, 27]. Thus, an essential shortcoming of the existing subspace-based technique is that it can be applied only when the number of receive antennas is larger than the number of transmit antennas. In the following, we circumvent this condition and develop a subspace-based algorithm suitable for MIMO full-duplex systems with larger or equal numbers of transmit and receive antennas. We exploit both the covariance and pseudo-covariance matrices of the received signal to effectively increase the dimension of the observation space while keeping the dimension of the signal subspace unchanged. The joint processing of the received signal and its complex conjugate has been used in many works to improve the detection performance of various systems [28, 29]. Also, in an entirely different context, the improper property of the received signal was first exploited for channel identification in [30] to obtain a virtual SIMO model from a SISO one. Preliminary results can be found in [31] for real-valued symbols, which enable the application of widely linear processing techniques but entail a loss in spectral efficiency compared to complex-valued symbols. In this paper, we propose a method to apply widely linear processing to complex symbols by forcing the transmit signal to be improper. We justify the advocated time domain approach, compare its performance to a frequency domain approach, and generalize the PA model to any nonlinearity order. In practice, we cannot blindly recover the channel coefficients, since an ambiguity term always appears in the final estimate [5]. This ambiguity is resolved using a sequence of pilot symbols considerably shorter than that needed in training-based techniques. In the following, we propose a joint data detection and estimation of the ambiguity term to considerably reduce the length of the pilot sequence. We show through simulation that just one pilot symbol is sufficient to perfectly estimate the channel.
The paper is organized as follows. In Section 2, the full-duplex system model is presented. The subspace-based channel estimation is described in Section 3. In Section 4, we describe the joint decoding and ambiguity removal procedure. Illustrative simulation results are given in Section 5 and Section 6 presents the conclusion.
Notations commonly used in this paper are presented. Superscripts \((\cdot)^*\), \((\cdot)^T\), and \((\cdot)^H\) refer to the conjugate, transpose and conjugate transpose for matrices or vectors, respectively. For a given vector \(\boldsymbol{x}\), \(\text{diag}(\boldsymbol{x})\) returns a diagonal matrix whose diagonal elements are the entries of \(\boldsymbol{x}\). \(\text{rank}(\boldsymbol{M})\) returns the rank of a given matrix \(\boldsymbol{M}\), \(\det(\boldsymbol{M})\) returns the determinant of \(\boldsymbol{M}\), and \(\text{vect}(\boldsymbol{M})\) stacks the columns of \(\boldsymbol{M}\) into one vector. The operator \(\otimes\) refers to the Kronecker product of two matrices. \(\Re(\cdot)\) and \(\Im(\cdot)\) return the real and imaginary parts of complex numbers. \(E(\cdot)\) denotes the mathematical expectation. \(||\cdot||_2\) returns the Euclidean norm of a vector. \(\boldsymbol{I}_p\) refers to the \(p\times p\) identity matrix and \(\boldsymbol{1}_p\) the \(p\times 1\) vector with 1 at all elements. A term accented by a hat, \(\widehat x\), means an estimate of \(x\).
Full-duplex MIMO system model
Consider two transceivers communicating in a full-duplex fashion. The simultaneous transmission and reception creates self-interference (SI) to be cancelled before the demodulation process. The SI signal is first suppressed at RF, prior to the low-noise amplifier (LNA) and analog-to-digital converter (ADC), to avoid overloading/saturation of these components [2, 3, 32]. In [5], we proposed an efficient compressed-sensing (CS)-based algorithm for the RF SI cancellation stage. In this work, we concentrate on the development of a subspace-based algorithm to jointly estimate the SI and intended channels and the nonlinear distortions for the baseband SI cancellation stage of a full-duplex MIMO transceiver with arbitrary numbers of transmit and receive antennas. The output signal of the RF SI cancellation stage consists of the residual SI, the intended signal received from the other transceiver and the additive thermal noise. Figure 1 shows a simplified block diagram of a MIMO transceiver. The residual SI can be further suppressed at the baseband after the ADC using digital signal processing (DSP). The advantage of working in the digital domain, as compared to RF, is that sophisticated DSP methods can be handled. Both transceivers are equipped with \(N_t\) transmitting antennas and \(N_r\) receiving antennas. At transmitting antenna \(q\), a group of \(N\) data symbols \(\boldsymbol{X}_q = [X_q(0),\dots,X_q(N-1)]^T\) is first modulated by the IFFT matrix to form an OFDM block; then, the time domain vector \(\boldsymbol{x}_q = [x_q(0),\dots,x_q(N-1)]^T\) is extended by the cyclic prefix of length \(N_{cp}\), and the resulting vector is sent sequentially. In transmit stream \(q\), the complex signal \(x_q(t)\), after digital-to-analog conversion (DAC), is passed through an imbalanced IQ mixer whose output is as follows:
$$ x_{q}^{IQ}(t) = k_{1,q} x_{q}(t) + k_{2,q} x_{q}^{*}(t), $$
Simplified block diagram of the full-duplex transceiver with RF and baseband SI cancellation stages
where \(k_{1,q}\) and \(k_{2,q}\) are the responses of the IQ mixer at antenna \(q\) to the direct signal and the image, respectively. Then, the signal is amplified with a nonlinear PA. In the following, we model the PA response with a Hammerstein model whose response is:
$$\begin{array}{@{}rcl@{}} x^{PA}_{q}(t) = \left(\sum_{p=0}^{P}\alpha_{2p+1,q} x_{q}^{IQ}(t)|x_{q}^{IQ}(t)|^{2p} \right) \star f(t), \end{array} $$
where \(\alpha_{2p+1,q}\), for \(p=0,\dots,P\), are the nonlinearity coefficients of the PA at transmit antenna \(q\), \(P\) is the nonlinearity order, and \(f(t)\) is the memory of the PA. In (2), \(\star\) denotes the convolution operator. The transmitted signal is coupled to produce SI in the receiver. Considering multipath channels, the received signal at antenna \(r\) is as follows:
$$ y_{r}^{ant}(t) \! = \! \sum_{q=1}^{N_{t}} h^{c}_{r,q}(t) \star x_{q}^{PA}(t) \! + \! \sum_{q=1}^{N_{t}} h^{s}_{r,q}(t) \star s_{q}(t) \! + \! w_{th,r}(t), $$
where \(s_q(t)\) is the transmitted signal from the \(q\)th antenna of the other intended transceiver, \(h_{r,q}^{c}(t)\) is the response of the SI channel from transmitting antenna \(q\) to receiving antenna \(r\) of the same transceiver, \(h_{r,q}^{s}(t)\) is the response of the intended channel from transmitting antenna \(q\) of the other intended transceiver to receiving antenna \(r\), and \(w_{th,r}(t)\) is the additive thermal noise in Rx stream \(r\). To reduce the SI before the LNA and ADC, the RF cancellation stage is performed as follows:
$$\begin{array}{@{}rcl@{}} y_{r}^{RF}(t) = y_{r}^{ant}(t) - \sum_{q=1}^{N_{t}} \widehat h^{c}_{r,q}(t) \star x_{q}^{PA}(t), \end{array} $$
where \(\widehat h^{c}_{r,q}(t)\) is a first estimate of the SI channel [1, 6]. \(\widehat h^{c}_{r,q}(t)\) is used to adjust the phase, amplitude and delay of the SI to the main propagation path. To include the transmitter distortion in the RF cancellation process, the reference signal is taken from the output of the PA. This RF SI cancellation can attenuate the SI by 30 dB, as reported in practical experiments [6, 33]. Then, the received signal passes through the LNA:
$$ y_{r}^{LNA}(t) = k_{LNA} y_{r}^{RF}(t) + w_{LNA}(t), $$
where \(w_{LNA}(t)\) is the additive noise caused by the LNA and \(k_{LNA}\) is the gain of the LNA. Finally, the received signal is adjusted by the variable gain amplifier (VGA) to match the dynamic range of the ADC. For simplicity, we suppose that the linear gains \(k_{1,q}\) and \(\alpha_{1,q}\) of the IQ mixer and PA are equal to 1. Combining (2), (3) and (5), the received samples are given by
$$ \begin{aligned} y_{r}(n) &= \sum_{q=1}^{N_{t}} \sum_{l=0}^{L} h^{(i)}_{r,q}(l)x_{q}^{IQ}(n-l) + \sum_{p=1}^{P} \alpha_{2p+1,q} h^{(i)}_{r,q}(l)x_{q,ip,p}(n-l)\\ &\quad+ h_{r,q}^{(s)}(l) s_{q}(n-l) + w_{r}(n), \end{aligned} $$
where \(x_{q,ip,p}(n)=x_{q}^{IQ}(n)|x_{q}^{IQ}(n)|^{2p}\) results from the cascade of the IQ mismatch and the \((2p+1)\)th-order PA nonlinearity, and \(w_r(n)\) collects the thermal noise, the LNA noise and the quantization noise. In (6), the global channel responses are given by
$$\begin{array}{@{}rcl@{}} h^{(i)}_{r,q}(l) & = & k_{LNA} (h^{c}_{r,q}(l) \star f(l) -\widehat h_{r,q}^{c}(l)),\\ h^{(s)}_{r,q}(l) & = & k_{LNA} h_{r,q}^{s}(l). \end{array} $$
To have a homogeneous notation, all channels are supposed to have the same order \(L\); channels of order lower than \(L\) are zero-padded, and \(L\) still satisfies \(L<N_{cp}\). The received vector \(\boldsymbol{y}(n)=[y_{1}(n),\dots,y_{N_{r}}(n)]^{T}\) over the \(N_r\) antennas is given by
$$ \begin{aligned} \boldsymbol{y}(n) &= \sum_{q=1}^{N_{t}} \sum_{l=0}^{L} \boldsymbol{h}_{q}^{(i)}(l) x_{q}^{IQ}(n-l) + \sum_{p=1}^{P} \alpha_{2p+1,q} \boldsymbol{h}_{q}^{(i)}(l) x_{q,ip,p}(n-l)\\ &\quad+ \boldsymbol{h}_{q}^{(s)}(l) s_{q}(n-l) + \boldsymbol{w}(n), \end{aligned} $$
$$\begin{array}{@{}rcl@{}} \boldsymbol{h}^{(i)}_{q}(l) & = & [\!h^{(i)}_{1,q}(l),~ h^{(i)}_{2,q}(l),\dots,~ h^{(i)}_{N_{r},q}(l)]^{T},\\ \boldsymbol{h}^{(s)}_{q}(l) & = & [\!h^{(s)}_{1,q}(l),~ h^{(s)}_{2,q}(l),\dots,~ h^{(s)}_{N_{r},q}(l)]^{T}, \end{array} $$
for \(l=0,1,\dots,L\), and \(\boldsymbol{w}(n) = [w_{1}(n),~w_{2}(n),\dots,~w_{N_{r}}(n)]^{T}\). For a more compact representation, we gather the transmitted signals from the \(N_t\) antennas to obtain
$$ \boldsymbol{y}(n) = \sum_{l=0}^{L} \boldsymbol{H}^{(i)}(l) \boldsymbol{x}(n-l) + \boldsymbol{H}^{(s)}(l) \boldsymbol{s}(n-l) + \boldsymbol{w}(n), $$
where the N r ×N t matrices H (i)(l) and H (s)(l) are given by
$$\begin{array}{@{}rcl@{}} \boldsymbol{H}^{(i)}(l) = [\!\boldsymbol{h}^{(i)}_{1}(l),~\boldsymbol{h}^{(i)}_{2}(l),\dots,~\boldsymbol{h}^{(i)}_{N_{t}}(l)], \\ \boldsymbol{H}^{(s)}(l) = [\!\boldsymbol{h}^{(s)}_{1}(l),~\boldsymbol{h}^{(s)}_{2}(l),\dots,~\boldsymbol{h}^{(s)}_{N_{t}}(l)], \end{array} $$
for \(l=0,\dots,L\), and
$$ \begin{aligned} \boldsymbol{x}_{i}(n) & =\ [x_{1}(n),~x_{2}(n),\dots,~x_{N_{t}}(n)]^{T}, \\ \boldsymbol{x}_{dist}(n) & = \left[k_{2,1}x_{1}^{*}(n)+\sum_{p=1}^{P} \alpha_{2p+1,1}x_{1,ip,p}(n),\dots,~k_{2,N_{t}}x_{N_{t}}^{*}(n)\right.\\&\left.\qquad+\sum_{p=1}^{P} \alpha_{2p+1,N_{t}}x_{N_{t},ip,p}(n)\right]^{T}, \\ \boldsymbol{x}(n) & = \boldsymbol{x}_{i}(n) + \boldsymbol{x}_{dist}(n),\\ \boldsymbol{s}(n) & = [\!s_{1}(n),~s_{2}(n),\dots,~s_{N_{t}}(n)]^{T}. \end{aligned} $$
We then group the channel matrices \(\boldsymbol{H}^{(i)}(l)\) and \(\boldsymbol{H}^{(s)}(l)\) into one \(N_r\times 2N_t\) matrix \(\boldsymbol{H}(l)=[\boldsymbol{H}^{(i)}(l),\ \boldsymbol{H}^{(s)}(l)]\) and gather all the channel coefficients in the following \(N_r M\times 2N_t N\) block Toeplitz matrix:
$$\begin{array}{@{}rcl@{}} \boldsymbol{H} = \left(\begin{array}{lllll} \boldsymbol{H}(0) & \boldsymbol{0}~\dots & \boldsymbol{0}~\boldsymbol{H}(L) & \dots & \boldsymbol{H}(1) \\ \boldsymbol{H}(1) & \boldsymbol{H}(0) & & \ddots & \vdots \\ \vdots & \boldsymbol{H}(1) & \ddots & & \boldsymbol{H}(L) \\ \boldsymbol{H}(L) & \vdots & & \ddots & \boldsymbol{0} \\ & \boldsymbol{H}(L) & & & \boldsymbol{H}(0) \\ \boldsymbol{0} & & \ddots & & \boldsymbol{H}(1) \\ \vdots & & & \ddots & \vdots \\ \boldsymbol{0} & \dots & \boldsymbol{0} & & \boldsymbol{H}(L) \end{array}\right). \end{array} $$
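As an illustration (a Python/NumPy sketch of our own, not code from the paper), this matrix can be assembled from the per-tap matrices \(\boldsymbol{H}(l)\); the wrap-around in the first block rows comes from the cyclic prefix:

```python
# Sketch: assemble the Nr*M x 2*Nt*N block Toeplitz matrix H of (13)
# from the L+1 per-tap matrices H(l) = [H_i(l), H_s(l)].
import numpy as np

def build_block_toeplitz(H_taps, N):
    """H_taps: list of L+1 arrays, each of shape (Nr, 2*Nt)."""
    L = len(H_taps) - 1
    Nr, n_cols = H_taps[0].shape
    M = N + L
    H = np.zeros((Nr * M, n_cols * N), dtype=complex)
    for n in range(M):               # block row = output sample index
        for l in range(L + 1):       # channel tap
            k = (n - l) % N if n < N else n - l   # cyclic-prefix wrap-around
            if 0 <= k < N:
                H[n*Nr:(n+1)*Nr, k*n_cols:(k+1)*n_cols] = H_taps[l]
    return H
```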
The received OFDM block on the N r antennas is:
$$ \boldsymbol{y} =\ [\!\boldsymbol{y}^{T}(0),~\boldsymbol{y}^{T}(1),\dots,~\boldsymbol{y}^{T}(M-1)]^{T} = \boldsymbol{H} \boldsymbol{u} + \boldsymbol{w}, $$
where \(M=N+L\), and the \(2N_t N\times 1\) data vector \(\boldsymbol{u}\) is given by
$$ \boldsymbol{u} =\ [\!\boldsymbol{x}^{T}(0),~\boldsymbol{s}^{T}(0),\dots,~\boldsymbol{x}^{T}(N-1),~\boldsymbol{s}^{T}(N-1)]^{T}, $$
$$ \boldsymbol{w} =\ [\!\boldsymbol{w}^{T}(0),~\boldsymbol{w}^{T}(1),\dots,~\boldsymbol{w}^{T}(M-1)]^{T}. $$
For multi-block transmission, the received vector in (14) is indexed by the block number \(t\), i.e., \(\boldsymbol{y}_t\). For convenience, we omit this indexation, and we will later consider a given number of transmitted blocks to compute the covariance matrix of the received vector.
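To make the transmit-chain model in (1) and (2) concrete, the following is a minimal sketch (NumPy assumed; the imbalance and nonlinearity coefficients are illustrative values, not measurements, and the PA memory \(f(t)\) is omitted for brevity):

```python
# Sketch: IQ imbalance (Eq. (1)) followed by a memoryless Hammerstein-type
# PA nonlinearity (Eq. (2) without the filter f), applied to one OFDM block.
import numpy as np

def iq_imbalance(x, k1=1.0, k2=0.04 * np.exp(1j * 0.1)):
    return k1 * x + k2 * np.conj(x)

def pa_nonlinearity(x_iq, alphas=(1.0, -0.05, 0.002)):
    """alphas = (alpha_1, alpha_3, alpha_5): odd-order coefficients."""
    y = np.zeros_like(x_iq)
    for p, a in enumerate(alphas):   # p = 0, 1, 2 -> orders 1, 3, 5
        y += a * x_iq * np.abs(x_iq) ** (2 * p)
    return y

N = 64                               # QPSK symbols -> IFFT -> impairments
X = (np.sign(np.random.randn(N)) + 1j * np.sign(np.random.randn(N))) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)
x_pa = pa_nonlinearity(iq_imbalance(x))
```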
Subspace-based channel estimator
We propose to apply a subspace-based algorithm to jointly estimate the SI and intended channel coefficients along with the nonlinear coefficients. Subspace methods rely on the orthogonality property between the signal and noise subspaces. These two subspaces are obtained from the eigendecomposition of the covariance matrix of the received signal \(\boldsymbol{y}\). Denoting by \(\boldsymbol{R}_u\) the covariance of \(\boldsymbol{u}\), the covariance matrix \(\boldsymbol{R}_y\) of the received vector \(\boldsymbol{y}\) is given by
$$ \boldsymbol{R}_{y} = \boldsymbol{H} \boldsymbol{R}_{u} \boldsymbol{H}^{H} +\sigma^{2} \boldsymbol{I}_{MN_{r}}, $$
as long as the signal samples are uncorrelated with the noise samples.
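In practice, \(\boldsymbol{R}_y\) is replaced by a sample covariance computed over several received blocks; a minimal sketch (NumPy assumed; the names are ours) of the resulting signal/noise subspace split is:

```python
# Sketch: estimate R_y from T received blocks and split the observation
# space into signal and noise subspaces by eigendecomposition.
import numpy as np

def subspace_split(Y, signal_dim):
    """Y: (Nr*M) x T matrix whose columns are the received blocks y_t."""
    R_y = (Y @ Y.conj().T) / Y.shape[1]      # sample covariance
    w, V = np.linalg.eigh(R_y)               # eigenvalues in ascending order
    noise_basis = V[:, :-signal_dim]         # smallest eigenvalues
    signal_basis = V[:, -signal_dim:]        # largest eigenvalues
    return signal_basis, noise_basis
```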
The signal subspace is spanned by the columns of the matrix \(\boldsymbol{H}\). Noting that the columns of \(\boldsymbol{H}\) are, by construction, linearly independent as soon as there exists an \(l\in[0,L]\) such that \(\boldsymbol{H}(l)\) is full rank, the matrix \(\boldsymbol{H}\) is a full-rank matrix. Therefore, the dimension of the signal subspace is \(2NN_t\). It follows that, to obtain a nondegenerate noise subspace, its dimension \(N_r M-2N_t N\) should be larger than zero; thus, the number of receiving antennas should be larger than the number of transmitting antennas to make the subspace method work, and in [5], we developed the linear subspace algorithm for this setting. In the following, we will develop the subspace-based algorithm for general numbers of transmit and receive antennas. When \(N_t = N_r\), the matrix \(\boldsymbol{R}_y\) cannot be directly used to find the noise subspace. As an alternative approach, we consider the augmented received vector as
$$\begin{array}{@{}rcl@{}} \widetilde{\boldsymbol{y}} = \left(\begin{array}{l} \boldsymbol{y} \\ \boldsymbol{y}^{*} \end{array}\right) = \left(\begin{array}{ll} \boldsymbol{H} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{H}^{*} \end{array}\right) \left(\begin{array}{l} \boldsymbol{u} \\ \boldsymbol{u}^{*} \end{array}\right) + \left(\begin{array}{l} \boldsymbol{w} \\ \boldsymbol{w}^{*} \end{array}\right). \end{array} $$
The use of the augmented received vector is usually referred to as widely linear processing. In this case, the augmented covariance matrix \(\boldsymbol {R}_{\widetilde y}\) of \(\widetilde {\boldsymbol {y}}\) has the following structure:
$$\begin{array}{@{}rcl@{}} \boldsymbol{R}_{\widetilde y} = \widetilde{\boldsymbol{H}} \boldsymbol{R}_{\widetilde u} \widetilde{\boldsymbol{H}}^{H} + \sigma^{2} \boldsymbol{I}_{2MN_{r}}, \end{array} $$
where \(\boldsymbol {R}_{\widetilde u}\) denotes the covariance matrix of the augmented transmit signal \(\widetilde {\boldsymbol {u}} = \left (\begin {array}{l} \boldsymbol {u} \\ \boldsymbol {u}^{*} \end {array}\right)\) and
$$ \widetilde{\boldsymbol{H}} = \left(\begin{array}{ll} \boldsymbol{H} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{H}^{*} \end{array}\right). $$
It is worth mentioning that proper noise has a vanishing pseudo-covariance [34]. The main purpose of using the extended received signal is to increase the dimension of the received signal and thus avoid a degenerate noise subspace. Hence, the subspace identification procedure can be derived only if the signal part, \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\), of the covariance matrix \(\boldsymbol {R}_{\widetilde y}\) is singular. It follows that \(d_{s} = \text {rank}(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}) < 2MN_{r}\). In this case, the signal is confined to a \(d_s\)-dimensional subspace and the remaining noise subspace has dimension \(2MN_r-d_s\). Singularity of \(\boldsymbol {R}_{\widetilde u}\) is a necessary condition to obtain a nondegenerate noise subspace: noting that \(\widetilde {\boldsymbol {H}}\) is full rank, a nonsingular \(\boldsymbol {R}_{\widetilde u}\) gives \(\text {rank}(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}) = 2MN_{r}\), so that \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\) spans the whole observation space. On the other hand, since \(\widetilde {\boldsymbol {H}}\) is a wide matrix (it has more columns than rows when \(N_t=N_r\)), singularity of \(\boldsymbol {R}_{\widetilde u}\) is not a sufficient condition to guarantee the singularity of \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\).
The matrix \(\boldsymbol {R}_{\widetilde u}\) can be expressed in block form in terms of the covariance matrix of \(\boldsymbol{u}\), \(\boldsymbol{R}_u=E(\boldsymbol{u}\boldsymbol{u}^H)\), the pseudo-covariance matrix \(\boldsymbol{C}_u=E(\boldsymbol{u}\boldsymbol{u}^T)\), and their complex conjugates as
$$ \boldsymbol{R}_{\widetilde u} = \left(\begin{array}{ll} \boldsymbol{R}_{u} & \boldsymbol{C}_{u} \\ \boldsymbol{C}_{u}^{*} & \boldsymbol{R}_{u}^{*} \end{array}\right). $$
In the following, we distinguish two cases of real and complex modulated symbols.
For real modulated symbols, it can be shown that \(\boldsymbol {R}_{\widetilde u} = \alpha ^{2} \boldsymbol {M} \otimes \boldsymbol {I}_{2N_{t}}\) with the \(2N\times 2N\) matrix \(\boldsymbol{M}\) having the following form:
$$ \boldsymbol{M} = \left(\begin{array}{ll} \boldsymbol{I}_{N} & \boldsymbol{M}_{1,2} \\ \boldsymbol{M}_{1,2} & \boldsymbol{I}_{N} \end{array}\right), $$
where \(\boldsymbol{M}_{1,2}\) is the \(N\times N\) permutation matrix with \((\boldsymbol{M}_{1,2})_{1,1}=1\) and \((\boldsymbol{M}_{1,2})_{i,\,N-i+2}=1\) for \(i=2,\dots,N\).
From (22), we note that each column of \(\boldsymbol{M}\) appears exactly twice (the first column of \(\boldsymbol{M}\) is the same as the \((N+1)\)th column, and the \(i\)th column of \(\boldsymbol{M}\) is the same as the \((2N-i+2)\)th column, for \(i=2,\dots,~N\)). Therefore, the matrix \(\boldsymbol{M}\) has exactly \(N\) linearly independent columns and thus its rank is \(N\). It follows that the rank of \(\boldsymbol {R}_{\widetilde u}\) is \(2NN_t\). In Appendix 1, we show that \(\boldsymbol {R}_{\widetilde u}\) has the eigenvalue zero with multiplicity \(2NN_t\) and the eigenvalue \(2\alpha^2\), also with multiplicity \(2NN_t\). The matrix \(\boldsymbol {R}_{\widetilde u}\) is then decomposed as \(\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^H\), where \(\boldsymbol{D}\) is the \(4NN_t\times 4NN_t\) diagonal matrix with zeroes in the first \(2NN_t\) diagonal elements and \(2\alpha^2\) in the last \(2NN_t\) diagonal elements, and \(\boldsymbol{U}\) is an orthogonal matrix whose columns are the corresponding eigenvectors of \(\boldsymbol {R}_{\widetilde u}\).
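The rank and eigenvalue claims for M are easy to check numerically; a small sketch (the choice N = 8 is arbitrary):

import numpy as np

N = 8
Pi = np.zeros((N, N))
Pi[0, 0] = 1.0
for i in range(1, N):
    Pi[i, N - i] = 1.0          # 1-based: (M_{1,2})_{i, N-i+2} = 1
M = np.block([[np.eye(N), Pi], [Pi, np.eye(N)]])

print(np.linalg.matrix_rank(M))                    # -> N
print(np.unique(np.round(np.linalg.eigvalsh(M))))  # -> [0. 2.]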
For complex symbols, the pseudo-covariance matrix \(\boldsymbol{C}_u\) is generally equal to the zero matrix, which makes the matrix \(\boldsymbol {R}_{\widetilde u}\) full rank. To avoid this problem, we apply a simple precoding at the input of the IFFT, which transforms the data symbol \(\boldsymbol{X}_q\) to
$$\begin{array}{@{}rcl@{}} \widetilde{\boldsymbol{X}}_{q} = \boldsymbol{P} \boldsymbol{X}_{q} + \boldsymbol{Q} \boldsymbol{X}_{q}^{*}, \end{array} $$
where \(\boldsymbol{P}\) and \(\boldsymbol{Q}\) are two precoding matrices. By combining the data symbol \(\boldsymbol{X}_q\) and its complex conjugate, we force the pseudo-covariance matrix to be different from zero. Appendix 2 gives a detailed discussion of the choice of the matrices \(\boldsymbol{P}\) and \(\boldsymbol{Q}\) so that the covariance matrix \(\boldsymbol {R}_{\widetilde u}\) has rank \(2NN_t\) and can be decomposed as \(\boldsymbol{U}\boldsymbol{D}\boldsymbol{U}^H\) with \(\boldsymbol{D}\) the \(4NN_t\times 4NN_t\) diagonal matrix with zeroes in the first \(2NN_t\) diagonal elements.
The noise subspace is spanned by the \(p=2MN_r-2NN_t\) eigenvectors of \(\boldsymbol {R}_{\widetilde y}\) corresponding to the smallest eigenvalue \(\sigma^2\), and the columns of \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\) belong to the signal subspace. Due to the orthogonality between the signal and noise subspaces, each column of \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\) is orthogonal to any vector in the noise subspace. Let \(\{\boldsymbol {\nu }_{i}\}_{i=1}^{p}\) denote the \(p\) mutually orthogonal eigenvectors corresponding to the smallest eigenvalue of \(\boldsymbol {R}_{\widetilde y}\). Then we have the following set of equations:
$$ \boldsymbol{\nu}_{i}^{H} \widetilde{\boldsymbol{H}} \boldsymbol{R}_{\widetilde u} \widetilde{\boldsymbol{H}}^{H} = \mathbf{0},~i=1,~2,\dots,~p. $$
From (24), we conclude that the \(\boldsymbol{\nu}_i\) span the left null space of \(\widetilde {\boldsymbol {H}} \boldsymbol {R}_{\widetilde u} \widetilde {\boldsymbol {H}}^{H}\). For convenience, \(\boldsymbol{U}\) is written as a block matrix of four \(2NN_t\times 2NN_t\) matrices:
$$ \boldsymbol{U} = \left(\begin{array}{cccccc} \boldsymbol{U}_{1} & \boldsymbol{U}_{2} \\ \boldsymbol{U}_{3} & \boldsymbol{U}_{4} \end{array} \right), $$
where the columns of \([\boldsymbol {U}_{1}^{T},~\boldsymbol {U}_{3}^{T}]^{T}\) are the eigenvectors of \(\boldsymbol {R}_{\widetilde u}\) corresponding to the eigenvalue zero and the columns of \([\boldsymbol {U}_{2}^{T},~\boldsymbol {U}_{4}^{T}]^{T}\) are the other eigenvectors. Then, taking into account the eigenvalue decomposition of \(\boldsymbol {R}_{\widetilde u}\), the set of equations in (24) is equivalent to
$$\begin{array}{@{}rcl@{}} \boldsymbol{\nu}_{i}^{H} \left(\begin{array}{c} \boldsymbol{H} \boldsymbol{U}_{2}\\ \boldsymbol{H}^{*} \boldsymbol{U}_{4} \end{array} \right) = \mathbf{0},~i=1,~2,\dots,~p. \end{array} $$
By dividing \(\boldsymbol{\nu}_i\) into two \(MN_r\times 1\) vectors, i.e., \(\boldsymbol {\nu }_{i} = [\boldsymbol {\nu }_{i,1}^{T},~\boldsymbol {\nu }_{i,2}^{T}]^{T}\), (26) is rewritten as
$$ \boldsymbol{\nu}_{i,1}^{H} \boldsymbol{H} \boldsymbol{U}_{2} + \boldsymbol{\nu}_{i,2}^{H} \boldsymbol{H}^{*} \boldsymbol{U}_{4} = \mathbf{0}, $$
for \(i=1,~2,\dots,~p\). The matrix \(\boldsymbol{H}\) is completely defined by the set of matrices \(\boldsymbol{H}(l)\), for \(l=0,~1,\dots,~L\). Therefore, the specific structure of \(\boldsymbol{H}\) should be taken into consideration when solving the equations in (27) to obtain a more accurate estimate of the channels. To that end, we divide the two vectors \(\boldsymbol{\nu}_{i,1}\) and \(\boldsymbol{\nu}_{i,2}\) as follows:
$$ \begin{aligned} \boldsymbol{\nu}_{i,j} &= \left[\boldsymbol{\nu}_{i,j}^{T}(M),~\boldsymbol{\nu}_{i,j}^{T}(M-1),\dots,~\boldsymbol{\nu}_{i,j}^{T}(1)\right]^{T},\\ j&=1,~2,~i=1,~2,\dots,~p, \end{aligned} $$
where each \(\boldsymbol{\nu}_{i,j}(n)\), for \(n=1,~2,\dots,~M\), is an \(N_r\times 1\) vector. From (13) and (28), each term \(\boldsymbol {\nu }_{i,1}^{H} \boldsymbol {H}\) in (27) is rewritten as
$$ \begin{aligned} &\sum_{l=0}^{L} \boldsymbol{\nu}_{i,1}^{H}(n+L-l) \boldsymbol{H}(l) + \sum_{l=n}^{L} \boldsymbol{\nu}_{i,1}^{H}(M-l+n) \boldsymbol{H}(l),\\&\quad\text{for}~n=1,~\dots,~L,\\ &\sum_{l=0}^{L} \boldsymbol{\nu}_{i,1}^{H}(n+L-l) \boldsymbol{H}(l), \text{for}~n=L+1,\dots,~M, \end{aligned} $$
and \(\boldsymbol {\nu }_{i,2}^{H} \boldsymbol {H}^{*}\) can be partitioned in the same manner. By introducing \(\boldsymbol {\check {h}}(l) = \text {vect}(\boldsymbol {H}(l))\) and \(\boldsymbol {V}_{i,j}(n) = \boldsymbol {I}_{2N_{t}} \otimes \boldsymbol {\nu }_{i,j}^{H}(n)\), for \(i=1,\dots,~p\) and \(j=1,~2\), it is easy to verify that \(\boldsymbol {\nu }_{i,j}^{H}(n) \boldsymbol {H}(l) = \boldsymbol {\check h}^{T}(l) \boldsymbol {V}_{i,j}^{T}(n)\). Let us denote the \(2NN_t\times 2N_tN_r(L+1)\) matrices \(\boldsymbol{V}_{i,j}\), for \(j=1,~2\), as
$$ { \begin{aligned} \boldsymbol{V}_{i,j} &= \left(\begin{array}{llll} \boldsymbol{V}_{i,j}(L+1) & \boldsymbol{V}_{i,j}(L) & \ldots & \boldsymbol{V}_{i,j}(1) \\ \boldsymbol{V}_{i,j}(L+2) & \boldsymbol{V}_{i,j}(L+1) & \ldots & \boldsymbol{V}_{i,j}(2) \\ \boldsymbol{V}_{i,j}(L+3) & \boldsymbol{V}_{i,j}(L+2) & \ldots & \boldsymbol{V}_{i,j}(3) \\ \vdots & \vdots & \vdots & \vdots \\ \boldsymbol{V}_{i,j}(N+L) & \boldsymbol{V}_{i,j}(N+L-1)& \ldots & \boldsymbol{V}_{i,j}(N) \\ \end{array}\right)\\ &\quad+ \left(\begin{array}{llll} \mathbf{0} & \boldsymbol{V}_{i,j}(N+L) & \ldots & \boldsymbol{V}_{i,j}(N+1) \\ & & \ddots & \vdots \\ \vdots& & & \boldsymbol{V}_{i,j}(N+L) \\ \vdots& & & \mathbf{0} \\ & & & \vdots \\ \mathbf{0} & & & \mathbf{0} \\ \end{array}\right), \end{aligned}} $$
and \(\boldsymbol {\check {h}} = [\boldsymbol {\check h}^{T}(0),~\boldsymbol {\check h}^{T}(1),\dots,~\boldsymbol {\check h}^{T}(L)]^{T}\). Then, using the previous notations, (27) is rearranged to obtain
$$\begin{array}{@{}rcl@{}} \boldsymbol{\check h}^{T} \boldsymbol{V}_{i,1}^{T} \boldsymbol{U}_{2} + \boldsymbol{\check h}^{H} \boldsymbol{V}_{i,2}^{T} \boldsymbol{U}_{4} = \mathbf{0}, \end{array} $$
or, by taking the transpose of the previous equation:
$$\begin{array}{@{}rcl@{}} \boldsymbol{U}_{2}^{T} \boldsymbol{V}_{i,1} \boldsymbol{\check h} + \boldsymbol{U}_{4}^{T} \boldsymbol{V}_{i,2} \boldsymbol{\check h}^{*} = \mathbf{0}, \end{array} $$
for \(i=1,~2,\dots,~p\). Note that the difference between (27) and (32) is that (32) takes into account the block-Toeplitz structure of \(\boldsymbol{H}\). Now, collecting all the previous equations, we obtain
$$ \boldsymbol{\Theta}_{1} \boldsymbol{\check h} +\boldsymbol{\Theta}_{2} \boldsymbol{\check h}^{*} = \mathbf{0}, $$
where
$$ { \begin{aligned} \boldsymbol{\Theta}_{1} & = \left[\left(\boldsymbol{U}_{2}^{T} \boldsymbol{V}_{1,1}\right)^{T},~\left(\boldsymbol{U}_{2}^{T} \boldsymbol{V}_{2,1}\right)^{T},\dots,~\left(\boldsymbol{U}_{2}^{T} \boldsymbol{V}_{p,1}\right)^{T}\right]^{T},\\ \boldsymbol{\Theta}_{2} & = \left[\left(\boldsymbol{U}_{4}^{T} \boldsymbol{V}_{1,2}\right)^{T},~\left(\boldsymbol{U}_{4}^{T} \boldsymbol{V}_{2,2}\right)^{T},\dots,~\left(\boldsymbol{U}_{4}^{T} \boldsymbol{V}_{p,2}\right)^{T}\right]^{T}. \end{aligned}} $$
Separating the real and imaginary parts of (33), we have
$$\begin{array}{*{20}l} \underbrace{\left(\begin{array}{ll} \Re(\boldsymbol{\Theta}_{1}+\boldsymbol{\Theta}_{2}) & \Im(-\boldsymbol{\Theta}_{1}+\boldsymbol{\Theta}_{2})\\ \Im(\boldsymbol{\Theta}_{1}+\boldsymbol{\Theta}_{2}) & \Re(\boldsymbol{\Theta}_{1}-\boldsymbol{\Theta}_{2}) \end{array}\right)}_{\boldsymbol{\overline \Theta}} \underbrace{\left(\begin{array}{l} \Re(\boldsymbol{\check h}) \\ \Im(\boldsymbol{\check h}) \end{array}\right)}_{\boldsymbol{\overline h}} = \mathbf{0}. \end{array} $$
From (35), the vector \(\boldsymbol {\overline h}\) belongs to the right null space of \(\boldsymbol {\overline \Theta }\). In practice, \(\boldsymbol {\overline h}\) is a linear combination of the \(4N_tN_r\) right singular vectors of the matrix \(\boldsymbol {\overline \Theta }\), denoted by \(\boldsymbol{\beta}_i\), which are the eigenvectors of the Gramian \(\overline {\boldsymbol {\Theta }}^{H}\overline {\boldsymbol {\Theta }}\) corresponding to the zero eigenvalue. Therefore, an estimate of \(\overline {\boldsymbol {h}}\) is given by
$$ \widehat{\overline{\boldsymbol{h}}} = \overline{\boldsymbol{\Phi}} \boldsymbol{c}, $$
where \(\overline {\boldsymbol {\Phi }}=[\boldsymbol {\beta }_{1},~\boldsymbol {\beta }_{2},\dots,~\boldsymbol {\beta }_{4N_{t}N_{r}}]\), and the 4N t N r ×1 vector c represents the ambiguity term to be estimated. The complex channel vector can also be obtained as
$$ \widehat{\boldsymbol{\check h}} = \boldsymbol{\Phi} \boldsymbol{c}, $$
where \(\boldsymbol{\Phi}\) is obtained by combining the rows of \(\overline {\boldsymbol {\Phi }}\) in the following way:
$$\begin{array}{@{}rcl@{}} \overline{\boldsymbol{\Phi}} = \left(\begin{array}{l} \overline{\boldsymbol{\Phi}}_{real} \\ \overline{\boldsymbol{\Phi}}_{imag} \end{array}\right) \rightarrow \boldsymbol{\Phi} = \overline{\boldsymbol{\Phi}}_{real} + j\overline{\boldsymbol{\Phi}}_{imag}, \end{array} $$
and \(j\) is the imaginary unit, satisfying \(j^2=-1\).
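Steps (33) through (38) amount to one real SVD. A minimal sketch, where the argument dim stands for the null-space dimension \(4N_tN_r\) stated above:

import numpy as np

def channel_subspace(Theta1, Theta2, dim):
    # Solve Theta1 h + Theta2 conj(h) = 0 via the real reformulation (35):
    # stack real and imaginary parts, take the `dim` right singular vectors
    # of Theta_bar with the smallest singular values, recombine as in (38).
    Theta_bar = np.block([
        [(Theta1 + Theta2).real, (-Theta1 + Theta2).imag],
        [(Theta1 + Theta2).imag,  (Theta1 - Theta2).real],
    ])
    _, _, Vt = np.linalg.svd(Theta_bar)
    Phi_bar = Vt[-dim:].T          # basis of the (numerical) right null space
    n = Phi_bar.shape[0] // 2
    return Phi_bar[:n] + 1j * Phi_bar[n:]   # Phi, as in (38)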
We mention that the matrices \(\boldsymbol{U}_2\) and \(\boldsymbol{U}_4\) do not depend on the received signal and can be computed offline prior to the transmission. Note also that an overestimated channel order \(L\) does not affect the estimation process; this property is shared with other subspace-based estimators [17].
Resolving the ambiguity term
As mentioned above, the subspace that contains the channels has been obtained, and the ambiguity term needs to be estimated to extract the exact coefficients. Different approaches can be applied to solve for the ambiguity term \(\boldsymbol{c}\). To do so, we highlight the contribution of \(\boldsymbol{c}\) to the received vector \(\boldsymbol{y}\). First, we separate the matrix \(\boldsymbol{\Phi}\) into two \(N_tN_r(L+1)\times 4N_tN_r\) matrices \(\boldsymbol{\Phi}_i\) and \(\boldsymbol{\Phi}_s\) which contribute to the SI and intended channels, respectively (i.e., \(\boldsymbol {\check h}^{(i)} = \boldsymbol {\Phi }_{i} \boldsymbol {c}\) and \(\boldsymbol {\check h}^{(s)} = \boldsymbol {\Phi }_{s} \boldsymbol {c}\)). By rearranging the elements of \(\boldsymbol{\Phi}_i\) as
$$ \boldsymbol{\Phi}_{i} \,=\, \left(\begin{array}{l} \boldsymbol{\Phi}_{i,1}(0) \\ \boldsymbol{\Phi}_{i,2}(0) \\ \vdots \\ \boldsymbol{\Phi}_{i,N_{t}}(0) \\ \vdots \\ \boldsymbol{\Phi}_{i,1}(L) \\ \boldsymbol{\Phi}_{i,2}(L) \\ \vdots \\ \boldsymbol{\Phi}_{i,N_{t}}(L) \\ \end{array}\right) \rightarrow \boldsymbol{\check \Phi}_{i} \,=\, \left(\begin{array}{lll} \boldsymbol{\Phi}_{i,1}(0) & \dots & \boldsymbol{\Phi}_{i,N_{t}}(0) \\ \boldsymbol{\Phi}_{i,1}(1) & \dots & \boldsymbol{\Phi}_{i,N_{t}}(1) \\ \vdots & & \vdots \\ \boldsymbol{\Phi}_{i,1}(L) & \dots & \boldsymbol{\Phi}_{i,N_{t}}(L) \\ \end{array}\right)\!\!,\! $$
where each \(\boldsymbol{\Phi}_{i,q}(l)\) is an \(N_r\times 4N_tN_r\) matrix, \(\boldsymbol {\check H}^{(i)} = [{\boldsymbol {H}^{(i)}}^{T}(0),~{\boldsymbol {H}^{(i)}}^{T}(1),\dots,~{\boldsymbol {H}^{(i)}}^{T}(L)]^{T}\) can be written as
$$ \boldsymbol{\check H}^{(i)} = \boldsymbol{\check \Phi}_{i} (\boldsymbol{I}_{N_{t}}\otimes \boldsymbol{c}), $$
and \(\boldsymbol {\check H}^{(s)} = [{\boldsymbol {H}^{(s)}}^{T}(0),~{\boldsymbol {H}^{(s)}}^{T}(1),\dots,~{\boldsymbol {H}^{(s)}}^{T}(L)]^{T}\) can also be written as \(\boldsymbol {\check H}^{(s)} = \boldsymbol {\check \Phi }_{s} (\boldsymbol {I}_{N_{t}} \otimes \boldsymbol {c})\), where \(\boldsymbol {\check \Phi }_{s}\) is defined in the same way as \(\boldsymbol {\check \Phi }_{i}\). \(\boldsymbol {\check H}^{(i)}\) and \(\boldsymbol {\check \Phi }_{i}\) are used to build the matrices \(\boldsymbol{H}^{(i)}\) and \(\boldsymbol{\Psi}_i\), respectively, having the same block structure as \(\boldsymbol{H}\) in (13).
Next, we define the diagonal matrices \(\boldsymbol{K}\) and \(\boldsymbol{A}_p\) whose diagonal elements are \(\boldsymbol {k} = [k_{2,1},\dots,~k_{2,N_{t}}]^{T}\) and \(\boldsymbol {\alpha }_{p} = [\alpha _{2p+1,1},\dots,~\alpha _{2p+1,N_{t}}]^{T}\), respectively, and we denote \(\boldsymbol {x}_{ip,p}(n) = [ x_{1,ip,p}(n),\dots,~ x_{N_{t},ip,p}(n)]^{T}\) and \(\boldsymbol {x}_{ip,p} = [\boldsymbol {x}_{ip,p}^{T}(0),\dots,~\boldsymbol {x}_{ip,p}^{T}(N-1)]^{T}\). Using the previous notations and developing \(\boldsymbol {x} = \boldsymbol {x}_{i} + (\boldsymbol {I}_{N} \otimes \boldsymbol {K}) \boldsymbol {x}^{*}_{i} + \sum _{p=1}^{P} (\boldsymbol {I}_{N} \otimes \boldsymbol {A}_{p}) \boldsymbol {x}_{ip,p}\) in terms of the transmitter impairments, one can express the received signal in (14) as
$$ \begin{aligned} \boldsymbol{y} & = \underbrace{\boldsymbol{\Psi}_{i} (\boldsymbol{I}_{NN_{t}} \otimes \boldsymbol{c})}_{\boldsymbol{H}^{(i)}} \boldsymbol{x} + \underbrace{\boldsymbol{\Psi}_{s} (\boldsymbol{I}_{NN_{t}} \otimes \boldsymbol{c})}_{\boldsymbol{H}^{(s)}} \boldsymbol{s} + \boldsymbol{w},\\ &=\boldsymbol{\Psi}_{i} (\boldsymbol{I}_{NN_{t}} \otimes \boldsymbol{c}) \left(\boldsymbol{x}_{i} + (\boldsymbol{I}_{N} \otimes \boldsymbol{K}) \boldsymbol{x}^{*}_{i} + \sum_{p=1}^{P}(\boldsymbol{I}_{N}\otimes \boldsymbol{A}_{p}) \boldsymbol{x}_{ip,p} \right)\\ &\quad+ \boldsymbol{\Psi}_{s} (\boldsymbol{I}_{NN_{t}} \otimes \boldsymbol{c}) \boldsymbol{s} + \boldsymbol{w}, \end{aligned} $$
where Ψ s and H (s) are defined in the same way as Ψ i and H (i), respectively, and s=[s T(0),…, s T(N−1)]T. After some manipulations, one can easily verify that \((\boldsymbol {I}_{NN_{t}} \otimes \boldsymbol {c}) \boldsymbol {x}_{i} = (\boldsymbol {x}_{i} \otimes \boldsymbol {I}_{4N_{t}N_{r}}) \boldsymbol {c}\) and \((\boldsymbol {I}_{NN_{t}} \otimes \boldsymbol {c}) \boldsymbol {s} = (\boldsymbol {s} \otimes \boldsymbol {I}_{4N_{t}N_{r}}) \boldsymbol {c}\). Then, the received vector in (41) is rewritten as
$$ \begin{aligned} \boldsymbol{y} &= \boldsymbol{\Psi}_{i} \left(\left(\boldsymbol{x}_{i} + \left(\boldsymbol{I}_{N} \otimes \boldsymbol{K} \right)\boldsymbol{x}_{i}^{*} + \sum_{p=1}^{P}\left(\boldsymbol{I}_{N} \otimes \boldsymbol{A}_{p} \right) \boldsymbol{x}_{ip,p} \right) \otimes \boldsymbol{I}_{4N_{t}N_{r}} \right) \boldsymbol{c}\\ &\quad+ \boldsymbol{\Psi}_{s} \left(\boldsymbol{s} \otimes \boldsymbol{I}_{4N_{t}N_{r}}\right) \boldsymbol{c} + \boldsymbol{w}. \end{aligned} $$
In (42), the received vector \(\boldsymbol{y}\) is expressed as a linear function of the unknown vector \(\boldsymbol{c}\). This formulation makes the estimation of \(\boldsymbol{c}\) more tractable. While the transmitted SI is known, the distorted parts \((\boldsymbol {I}_{N} \otimes \boldsymbol {A}_{p})\boldsymbol{x}_{ip,p}\) and \((\boldsymbol {I}_{N} \otimes \boldsymbol {K}) \boldsymbol {x}_{i}^{*}\) of the SI, produced by the cascade of the IQ mixer and PA, need to be estimated. We begin by writing the cost function \(f(\boldsymbol {c},\boldsymbol {s},\boldsymbol {K},\boldsymbol {A}_{p}) = ||\boldsymbol {y} - \boldsymbol {\Psi }_{i} ((\boldsymbol {x}_{i} + (\boldsymbol {I}_{N} \otimes \boldsymbol {K})\boldsymbol {x}_{i}^{*} + \sum _{p=1}^{P}(\boldsymbol {I}_{N} \otimes \boldsymbol {A}_{p}) \boldsymbol {x}_{ip,p}) \otimes \boldsymbol {I}_{4N_{t}N_{r}}) \boldsymbol {c} - \boldsymbol {\Psi }_{s} (\boldsymbol {s} \otimes \boldsymbol {I}_{4N_{t}N_{r}}) \boldsymbol {c}||^{2}\), which depends on \(\boldsymbol{c}\), \(\boldsymbol{K}\), \(\boldsymbol{A}_p\) (for \(p=1,\dots,~P\)), and \(\boldsymbol{s}\). Given an initial estimate \(\widehat {\boldsymbol {c}}\) of \(\boldsymbol{c}\), the minimization of \(f(\widehat {\boldsymbol {c}},\boldsymbol {s},\boldsymbol {K},\boldsymbol {A}_{p})\) with respect to \(\boldsymbol{s}\), \(\boldsymbol{K}\), and \(\boldsymbol{A}_p\) can be recast as a least squares (LS) problem. Then, using the solutions \(\widehat {\boldsymbol {s}}\), \(\widehat {\boldsymbol {K}}\), and \(\widehat {\boldsymbol {A}}_{p}\), we minimize \(f(\boldsymbol {c},\widehat {\boldsymbol {s}},\widehat {\boldsymbol {K}},\widehat {\boldsymbol {A}}_{p})\) with respect to \(\boldsymbol{c}\), and we iterate this procedure until the estimated parameters converge. An initial estimate of \(\boldsymbol{c}\) is obtained using the LS criterion as
$$ \widehat{\boldsymbol{c}}_{0} = (\boldsymbol{\Psi}_{i} (\boldsymbol{x}_{i} \otimes \boldsymbol{I}_{4N_{t}N_{r}}))^{\#} \boldsymbol{y}, $$
where the operator \((\cdot)^{\#}\) returns the pseudo-inverse of a given matrix. At the \(k\)th iteration, the estimate \(\widehat {\boldsymbol {c}}_{k-1}\) obtained at the previous iteration is used to find \(\boldsymbol{s}\), \(\boldsymbol{K}\), and \(\boldsymbol{A}_p\) (or equivalently \(\boldsymbol{k}\) and \(\boldsymbol{\alpha}_p\)) as follows:
$$ {\begin{aligned} \left(\begin{array}{l} \widehat{\boldsymbol{s}}_{k} \\ \widehat{\boldsymbol{k}}_{k} \\ \widehat{\boldsymbol{\alpha}}_{1,k} \\ \vdots \\ \widehat{\boldsymbol{\alpha}}_{P,k} \end{array}\right) & = \left[\boldsymbol{\Psi}_{s} \widehat{\boldsymbol{C}}_{k-1},~\boldsymbol{\Psi}_{i} \left(\text{diag}\left(\boldsymbol{x}_{i}^{*}\right)\boldsymbol{B} \right) \right.\\ &\quad\otimes \widehat{\boldsymbol{c}}_{k-1},~\boldsymbol{\Psi}_{i} \left(\text{diag}\left(\boldsymbol{x}_{ip,1}\right) \boldsymbol{B} \right)\otimes \widehat{\boldsymbol{c}}_{k-1},\dots,\\ & \quad\left. \boldsymbol{\Psi}_{i} \left(\text{diag}\left(\boldsymbol{x}_{ip,P}\right) \boldsymbol{B} \right) \otimes \widehat{\boldsymbol{c}}_{k-1} \right]^{\#} \left(\boldsymbol{y} - \boldsymbol{\Psi}_{i} \widehat{\boldsymbol{C}}_{k-1} \boldsymbol{x}_{i} \right), \end{aligned}} $$
where, for clarity, we introduce \(\boldsymbol {B} = \boldsymbol {1}_{N} \otimes \boldsymbol {I}_{N_{t}}\) and \(\widehat {\boldsymbol {C}}_{k-1} = \boldsymbol {I}_{NN_{t}} \otimes \widehat {\boldsymbol {c}}_{k-1}\), and we use the equality \(\Big (\big ((\boldsymbol {I}_{N} \otimes \boldsymbol {K})\boldsymbol {x}_{i}^{*} \big) \otimes \boldsymbol {I}_{4N_{t}N_{r}} \Big) \boldsymbol {c} = \Big (\big (\text {diag}(\boldsymbol {x}_{i}^{*})\boldsymbol {B} \big) \otimes \boldsymbol {c} \Big) \boldsymbol {k}\). Then, \(\widehat {\boldsymbol {s}}_{k}\) is transformed into the frequency domain, and each element of the frequency-domain vector is projected onto its closest discrete constellation point. The obtained vector is converted back to the time domain to obtain a better estimate \(\widetilde {\boldsymbol {s}}_{k}\) of \(\boldsymbol{s}\).
Then, an update of c at iteration k is obtained as:
$$ \begin{aligned} \widehat{\boldsymbol{c}}_{k} &= \left(\boldsymbol{\Psi}_{i} \left(\left(\boldsymbol{x}_{i}+\left(\boldsymbol{I}_{N} \otimes \widehat{\boldsymbol{K}}_{k}\right)\boldsymbol{x}_{i}^{*} + \sum_{p=1}^{P}\left(\boldsymbol{I}_{N} \otimes \widehat{\boldsymbol{A}}_{p,k}\right) \boldsymbol{x}_{ip,p} \right) \otimes \boldsymbol{I}_{4N_{t}N_{r}} \right)\right.\\ &\quad\left.+ \boldsymbol{\Psi}_{s} \left(\widetilde{\boldsymbol{s}}_{k} \otimes \boldsymbol{I}_{4N_{t}N_{r}}\right) {\vphantom{\sum_{p=1}^{P}}}\right)^{\#} \boldsymbol{y}. \end{aligned} $$
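A compact sketch of this alternation, stripped of the impairment terms (K and the A_p) to stay short; in the full step (45) they simply enter as extra columns of the same LS system. Psi_i, Psi_s, and x_i are as defined above; the commented-out project_symbols call stands for the hard-decision projection described in the text and is not implemented here.

import numpy as np

def estimate_ambiguity(y, Psi_i, Psi_s, x_i, n_iter=4):
    n = x_i.size                       # = N*N_t
    d = Psi_i.shape[1] // n            # = 4*N_t*N_r, ambiguity dimension
    A_i = Psi_i @ np.kron(x_i[:, None], np.eye(d))
    c = np.linalg.pinv(A_i) @ y        # initial estimate, eq. (44)
    for _ in range(n_iter):
        B_s = Psi_s @ np.kron(np.eye(n), c[:, None])
        s = np.linalg.pinv(B_s) @ (y - A_i @ c)      # LS step of (45)
        # s = project_symbols(s)       # FFT -> hard decision -> IFFT
        A_s = Psi_s @ np.kron(s[:, None], np.eye(d)) # (s x I) c = (I x c) s
        c = np.linalg.pinv(A_i + A_s) @ y            # update, eq. (45)
    return c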
If a set of \(P_{\text{pilot}}\) pilot symbols is available at subcarriers indexed by \(\mathcal {P}=\{ p_{1},\dots,~p_{P_{\text {pilot}}}\}\), the intended transmit signal at antenna \(q\) can be represented as the sum of two signals:
$$ \begin{aligned} s_{q}^{p}(n) &= \sum_{i=1}^{P_{\text{pilot}}} S_{q}(p_{i})e^{j2\pi p_{i} n/N},\\s_{q}^{d}(n) &= \sum_{k \notin \mathcal{P}} S_{q}(k)e^{j2\pi k n/N}, \end{aligned} $$
where the first sequence \(s_{q}^{p}(n)\) contains the pilot symbols and the second sequence \(s_{q}^{d}(n)\) contains the unknown data symbols transmitted by the intended transmitter. Then, the received vector in (42) is rearranged as follows:
$$ \begin{aligned} \boldsymbol{y} &= \boldsymbol{\Psi}_{i} \left(\left(\boldsymbol{x}_{i} + \left(\boldsymbol{I}_{N} \otimes \boldsymbol{K}\right)\boldsymbol{x}_{i}^{*} + \sum_{p=1}^{P}\left(\boldsymbol{I}_{N} \otimes \boldsymbol{A}_{p}\right) \boldsymbol{x}_{ip,p} \right) \otimes \boldsymbol{I}_{4N_{t}N_{r}} \right)\\ &\quad\boldsymbol{c} + \boldsymbol{\Psi}_{s} \left(\left(\boldsymbol{s}^{p} + \boldsymbol{s}^{d}\right) \otimes \boldsymbol{I}_{4N_{t}N_{r}} \right)\boldsymbol{c} + \boldsymbol{w}. \end{aligned} $$
where \(\boldsymbol{s}^p\) and \(\boldsymbol{s}^d\) are constructed in the same way as \(\boldsymbol{s}\) and contain the pilot symbols and unknown symbols, respectively. The initial estimate of \(\boldsymbol{c}\) is modified to incorporate the pilot symbols as
$$ \widehat{\boldsymbol{c}}_{0} = \Big(\boldsymbol{\Psi}_{i} (\boldsymbol{x}_{i} \otimes \boldsymbol{I}_{4N_{t}N_{r}}) + \boldsymbol{\Psi}_{s} (\boldsymbol{s}^{p} \otimes \boldsymbol{I}_{4N_{t}N_{r}}) \Big)^{\#} \boldsymbol{y}, $$
and the estimates of \(\boldsymbol{s}^d\), \(\boldsymbol{K}\), and \(\boldsymbol{A}_p\) at iteration \(k\) are given by
$$ \begin{aligned} \left(\begin{array}{l} \widehat{\boldsymbol{s}}_{k}^{d} \\ \widehat{\boldsymbol{k}}_{k} \\ \widehat{\boldsymbol{\alpha}}_{1,k} \\ \vdots \\ \widehat{\boldsymbol{\alpha}}_{P,k} \! \end{array}\right) & =\Big[\boldsymbol{\Psi}_{s} \widehat{\boldsymbol{C}}_{k-1},~\boldsymbol{\Psi}_{i} \Big(\text{diag}(\boldsymbol{x}_{i}^{*})\boldsymbol{B} \Big)\\ &\quad\otimes \widehat{\boldsymbol{c}}_{k-1},~\boldsymbol{\Psi}_{i} \Big(\text{diag}(\boldsymbol{x}_{ip,1}) \boldsymbol{B} \Big) \! \otimes \! \widehat{\boldsymbol{c}}_{k-1},\dots, \\ & \quad\boldsymbol{\Psi}_{i} \Big(\text{diag}(\boldsymbol{x}_{ip,P}) \boldsymbol{B} \Big) \otimes \widehat{\boldsymbol{c}}_{k-1} \!\Big]^{\#} \\ &\quad\times\Big(\boldsymbol{y} - \boldsymbol{\Psi}_{i} \widehat{\boldsymbol{C}}_{k-1} \boldsymbol{x}_{i} - \Psi_{s} \widehat{\boldsymbol{C}}_{k-1} \boldsymbol{s}^{p} \Big). \end{aligned} $$
As before, \(\widehat {\boldsymbol {s}}_{k}^{d}\) is converted to the frequency domain, demodulated, and then transformed back to the time domain to obtain \(\widetilde {\boldsymbol {s}}_{k}^{d}\). The updated estimate of \(\boldsymbol{c}\) at iteration \(k\) is obtained as:
$$ \begin{aligned} \widehat{\boldsymbol{c}}_{k} &= \left(\boldsymbol{\Psi}_{i} \left(\left(\boldsymbol{x}_{i} + \left(\boldsymbol{I}_{N} \otimes \widehat{\boldsymbol{K}}_{k}\right)\boldsymbol{x}_{i}^{*} + \sum_{p=1}^{P}\left(\boldsymbol{I}_{N} \otimes\widehat{\boldsymbol{A}}_{p,k}\right) \boldsymbol{x}_{ip,p} \right) \otimes \boldsymbol{I}_{4N_{t}N_{r}} \right)\right.\\ &\qquad\left.+ \boldsymbol{\Psi}_{s} \left(\left(\boldsymbol{s}^{p} + \widetilde{\boldsymbol{s}}_{k}^{d}\right) \otimes \boldsymbol{I}_{4N_{t}N_{r}}\right) {\vphantom{\sum_{p=1}^{P}}}\right)^{\#} \boldsymbol{y}. \end{aligned} $$
In the following, we summarize the different steps of the proposed algorithm:
1. Compute the augmented covariance matrix \(\boldsymbol {R}_{\widetilde y}\) by time averaging of \(T\) received samples as:
$$ \widehat{\boldsymbol{R}}_{\widetilde y} = \frac{1}{T} \sum_{t=1}^{T} \left(\begin{array}{l} \boldsymbol{y}_{t} \\ \boldsymbol{y}^{*}_{t} \end{array}\right) \left(\begin{array}{l} \boldsymbol{y}_{t} \\ \boldsymbol{y}^{*}_{t} \end{array}\right)^{H} $$
2. Perform the eigendecomposition of \(\boldsymbol {R}_{\widetilde y}\) and take the \(p\) eigenvectors \(\boldsymbol{\nu}_i\) corresponding to the smallest eigenvalue of \(\boldsymbol {R}_{\widetilde y}\).
3. Construct the matrix \(\boldsymbol {\overline \Theta }\) from the \(\boldsymbol{\nu}_i\) and compute the \(4N_tN_r\) singular vectors of \(\boldsymbol {\overline \Theta }\) corresponding to the zero singular value to form \(\overline {\boldsymbol {\Phi }}\).
4. Build the matrices \(\boldsymbol {\check \Phi _{i}}\) and \(\boldsymbol {\check \Phi _{s}}\) as given in (39).
5. Estimate the ambiguity vector \(\boldsymbol{c}\) by iterating between (44) and (45) if no pilot symbols are available, or between (49) and (50) if a set of pilot symbols is available from the intended transceiver.
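Steps 1 and 2 translate directly into a few lines of numpy; a sketch, where the matrix Y of stacked received blocks and the subspace dimension p are supplied by the caller:

import numpy as np

def noise_subspace(Y, p):
    # Y holds T received blocks y_t as columns (shape M*N_r x T). Form the
    # augmented covariance (51) by time averaging, then return the p
    # eigenvectors attached to the smallest eigenvalues.
    Y_aug = np.vstack([Y, Y.conj()])              # widely linear stacking
    R = (Y_aug @ Y_aug.conj().T) / Y.shape[1]     # estimate of the augmented covariance
    eigval, eigvec = np.linalg.eigh(R)            # ascending eigenvalues
    return eigvec[:, :p]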
Simulation results
In this section, we provide simulation results on the performance of the proposed estimation algorithm for a 2×2 MIMO full-duplex system. The transmitted bits are mapped to 4-QAM symbols, then passed through an OFDM modulator of length N=64. The wireless channel is modelled as a Rayleigh multipath fading channel with five equal-variance resolvable paths. Since the exact number of paths is assumed to be unknown, the algorithm is parametrized as if there were eight paths. In the following, the SNR is defined as the average intended-signal-to-thermal-noise power ratio, and the estimation mean square error (MSE) of \(\boldsymbol{H}\) is \(\textrm {MSE} = E\Big (||\boldsymbol {H} - \widehat {\boldsymbol {H}}||^{2}\Big)\). To model the RF impairments, a complete transmission chain is simulated. The PA coefficients are derived from the intercept points by taking IIP3 = 20 dBm. For the IQ mixer, the ratio between the direct signal and the image is set to 28 dB, as specified in the 3GPP LTE specifications [23]. The ADC is modelled as a 14-bit quantizer to incorporate the quantization noise. Therefore, no simplifications are made regarding the different impairments. Antenna separation can attenuate the SI by 40 dB, while the RF cancellation stage reduces the direct path by 30 dB [1], leaving the weaker reflections and transceiver impairments to be reduced by the proposed digital algorithm.
The proposed algorithm is compared to two other channel estimators: the least squares (LS) and the maximum likelihood (ML) algorithms. For the LS estimator, the channel coefficients are obtained using the known self-interference signal and the pilot symbols in the intended signal; it simply treats the unknown symbols as additive noise. The ML estimate is obtained by maximizing the following cost function:
$${ \begin{aligned} L\left(\boldsymbol{H}^{(i)},~\boldsymbol{H}^{(s)}\right) &= \log \left(\det(\boldsymbol{R})\right)- \left(\boldsymbol{y} - \boldsymbol{H}^{(i)}\boldsymbol{x}-\boldsymbol{H}^{(s)}\boldsymbol{s}^{p}\right)^{H}\\ &\quad\times\boldsymbol{R}^{-1} \left(\boldsymbol{y} - \boldsymbol{H}^{(i)}\boldsymbol{x}-\boldsymbol{H}^{(s)}\boldsymbol{s}^{p}\right), \end{aligned}} $$
where \(\boldsymbol {R} = \alpha ^{2} {\boldsymbol {H}^{(s)}}^{H}\boldsymbol {H}^{(s)} + \sigma ^{2} \boldsymbol {I}_{N_{r}M}\). An iterative procedure to find the ML estimate was proposed in [35]. The covariance matrix is obtained by averaging 60 OFDM blocks. Figures 2 and 3 plot the MSE vs. SNR curves for the SI and intended channel estimations, respectively. In both figures, one pilot symbol from the intended transceiver is used to solve the ambiguity matrix. For comparison purposes, a perfect estimate of the ambiguity term \(\boldsymbol{c}\) is obtained as \(\boldsymbol {c}_{perfect} = \arg \min _{\boldsymbol {c}}||\boldsymbol {\check h} - \boldsymbol {\Phi }\boldsymbol {c}||_{2}^{2}\), and the corresponding curves are labelled clairvoyant subspace. It is seen that, when one pilot symbol is used in the ML and LS estimators, the proposed subspace algorithm offers notably lower MSE over a large SNR range. We also represent the performance of the ML and LS estimators when 20% of the transmit symbols are known (pilot symbols equally spaced within one OFDM symbol) while keeping one pilot symbol for the subspace method (see footnote 4). In this case, the three algorithms give comparable performance in the low-SNR region, at the expense of lower bandwidth efficiency. As the SNR increases, the performance of the LS and ML estimators saturates due to the reduced number of pilot symbols and the presence of the unknown transmit signal from the intended transceiver, which acts as additive noise, whereas the subspace algorithm exploits the information borne by the unknown data to find the signal subspace. The ambiguity term is first solved using the known transmit symbols, and the iterative decoding ambiguity estimation is then applied to improve the estimation performance. From Figs. 2 and 3, three to four iterations are sufficient to converge, and the performance is close to that obtained when the ambiguity term \(\boldsymbol{c}\) is perfectly known. Note that the ML solution is also obtained in an iterative way; for a fair comparison, we simulate the performance of the ML estimator after four iterations. As can be expected, the estimate of the SI channel is more accurate than the estimate of the intended channel. This can be explained by the fact that the self-signal is fully known while only one pilot symbol is known in the intended signal.
The number of pilot symbols is a critical issue in channel estimation, since a large pilot sequence provides better estimation performance but reduces the bandwidth efficiency of the system. In Figs. 4 and 5, we compare the impact of the number of pilot symbols on the performance of the three estimators. We periodically place the pilot symbols within an OFDM symbol. Optimal pilot placement requires verifying all combinations of \(P_{\text{pilot}}\) subcarriers out of \(N\), an NP-hard problem beyond the scope of this paper that is left for future work. It can be seen from these figures that the subspace method is not greatly affected by the number of pilot symbols, since the subspaces are obtained using the second-order statistics of the received signal and not the transmit signal itself. Clearly, the proposed algorithm outperforms the ML and LS estimators for a reduced number of pilots, while this tendency is reversed when the number of pilots increases. However, a system with a large amount of pilot symbols is not of practical interest.
Fig. 2 SI channel estimation MSE vs. SNR with 60 received OFDM symbols
Fig. 3 Intended channel estimation MSE vs. SNR with 60 received OFDM symbols
Fig. 4 SI channel estimation MSE vs. percentage of pilot symbols for SNR = 10 dB
Fig. 5 Intended channel estimation MSE vs. percentage of pilot symbols for SNR = 10 dB
In Figs. 6 and 7, we evaluate the impact of the number of observed OFDM symbols on the estimation performance. For the three algorithms, we consider the transmission scheme where the number of pilot symbols is set to one and the SNR is 10 dB. As the subspace algorithm is based on estimates of the second-order statistics of the received signal, its performance varies with the number of OFDM symbols. All three algorithms are able to estimate the SI channel, although the LS exhibits an error floor; the ML and subspace algorithms offer similar performance. On the other hand, the LS estimator fails to recover the intended channel for any number of OFDM symbols. This can be explained by the fact that the number of unknowns (intended channel coefficients) is larger than the number of pilot symbols; hence, this method cannot be used when the number of pilot symbols is small. The ML estimator also presents poor estimation performance for the intended channel, while the subspace method is able to return a good channel estimate, with better bandwidth efficiency compared to the other estimators, as soon as there are enough OFDM symbols to compute the covariance matrix.
Fig. 6 SI channel estimation MSE vs. number of OFDM symbols for SNR = 10 dB and one pilot symbol
Fig. 7 Intended channel estimation MSE vs. number of OFDM symbols for SNR = 10 dB and one pilot symbol
Our primary motivation in this work is to develop an accurate channel estimator to cancel the SI signal. The performance of the SI canceller is represented by its achieved output signal-to-residual-SI-and-noise power ratio (SINR) after SI cancellation vs. the input SNR. Ideally, if the SI could be completely cancelled, the residual SI after cancellation would be zero, and consequently the output SINR would equal the input SNR, as shown by the dashed line labelled "perfect cancellation" in Fig. 8. In other words, "perfect cancellation" is the ideal upper bound for the SINR. As shown in Fig. 8, with three iterations, the proposed subspace-based SI canceller can offer an output SINR very close to this upper bound over a large SNR range. At low SNR, the large estimation error results in a larger residual SI after cancellation, which ultimately degrades the output SINR.
Fig. 8 Output SINR vs. input SNR after SI cancellation
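In simulation, the output SINR metric of Fig. 8 can be computed directly since the true signal components are known. A sketch with hypothetical arguments (the intended signal, the true SI, its reconstruction, and the noise):

import numpy as np

def output_sinr_db(intended, si, si_hat, noise):
    # Intended-signal power over the power of residual SI plus noise;
    # the residual SI is the SI reconstruction error si - si_hat.
    residual_si = si - si_hat
    num = np.mean(np.abs(intended) ** 2)
    den = np.mean(np.abs(residual_si + noise) ** 2)
    return 10.0 * np.log10(num / den)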
We also investigate in Fig. 8 a frequency domain method that estimates the different parameters using the pilot symbols on some subcarriers. We resort to the LS estimator to find the channel responses at the pilot subcarriers. Since the remaining subcarriers contain unknown symbols from the intended transceiver, the complete channel responses are obtained by linear interpolation of the estimated coefficients. Thus, the frequency domain approach uses only the portion of the signal containing pilots, while the proposed approach exploits the whole received signal through its second-order statistics. Clearly, the performance of the frequency domain approach depends strongly on the number of pilots (as shown in Fig. 8), since the interpolation cannot capture the variation of the channel in the frequency domain. We also compare the proposed method with the widely linear LS estimator in [26]. Note that the algorithm in [26] ignores the PA nonlinearities and does not incorporate the intended signal in the estimation process. Some time frames are dedicated to transmitting orthogonal pilot symbols for estimation purposes, during which the transceiver receives only its own signal. Therefore, the widely linear LS estimator incurs an overhead and requires synchronization between the two transceivers. Besides, it shows a noise floor at high SNR because the PA nonlinearity is not considered during the estimation process. On the other hand, by exploiting the whole received signal through its second-order statistics, the proposed method offers good performance even with one pilot and still outperforms the frequency domain approach (even with a much larger number of pilots). Figure 9 plots the bit error rate (BER) vs. SNR curves of the two approaches. For comparison, we include the case of a perfect channel estimate. To improve the BER, the SINR should be kept as high as possible at the demodulator. To conclude, while the frequency domain approach is more intuitive, it needs a large number of pilots and is outperformed by the proposed method.
Fig. 9 BER vs. SNR comparison of the proposed and the frequency domain LS techniques
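For reference, the frequency domain baseline reduces to a per-pilot LS estimate followed by interpolation. A minimal single-antenna sketch (variable names are mine, not from the paper):

import numpy as np

def ls_interpolated_channel(Y_f, S_f, pilot_idx, N):
    # Per-pilot LS estimates, then linear interpolation (real and imaginary
    # parts separately) over the remaining subcarriers; one link only.
    H_pilot = Y_f[pilot_idx] / S_f[pilot_idx]
    k = np.arange(N)
    return (np.interp(k, pilot_idx, H_pilot.real)
            + 1j * np.interp(k, pilot_idx, H_pilot.imag))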
We evaluate the performance of the system in the presence of phase noise by simulation. Figures 10 and 11 plot, respectively, the SINR and the BER vs. the phase noise 3 dB bandwidth \(f_{3\text{dB}}\) for SNR = 20 dB and a common oscillator at the transmitter and the receiver. The residual SI obviously depends on the quality of the oscillator, represented by its \(f_{3\text{dB}}\): a higher \(f_{3\text{dB}}\) results in a faster-varying phase noise process. Clearly, the proposed method still offers good cancellation performance, which degrades as \(f_{3\text{dB}}\) increases.
Fig. 10 SINR after SI cancellation vs. \(f_{3\text{dB}}\)
Fig. 11 BER vs. phase noise \(f_{3\text{dB}}\)
The effects of the PA nonlinearity on the performance of the proposed algorithm are also investigated through simulations. Figure 12 plots the resulting SINR after cancellation vs. the value of the PA third-order intercept point (IIP3) for SNR = 20 dB. For perfect cancellation, the resulting SINR after cancellation would equal the SNR of 20 dB. A lower IIP3 indicates higher PA distortion (i.e., a poorer PA) and hence reduces the resulting SINR after cancellation. Figure 12 shows that, as the IIP3 value increases, the cancellation performance improves. However, for a sufficiently high IIP3 (e.g., 18 dBm or higher), the PA distortions are no longer dominant and the resulting SINR after cancellation is unchanged. This can be explained by the fact that, when developing the algorithm, the third-order component of the signal \(x_{q,ip3}(n) = x_{q}^{IQ}(n)|x_{q}^{IQ}(n)|^{2}\) is approximated by \(x_{q}(n)|x_{q}(n)|^{2}\) to simplify the algorithm; this approximation only affects the algorithm performance when the nonlinear coefficients are sufficiently high.
Fig. 12 SINR after SI cancellation vs. PA IIP3
Conclusion
In this paper, a subspace-based estimator has been proposed to jointly estimate the SI channel, the intended channel, and the transmitter impairments for MIMO full-duplex systems. By exploiting the covariance and pseudo-covariance matrices of the received signal, an effective way has been formulated to apply the subspace method to symmetric MIMO systems. The complete characterization of the second-order statistics of the received signal avoids the need for oversampling required in traditional subspace methods. The subspace that contains the channels is blindly estimated, and a short pilot sequence is needed to extract the channel coefficients from this subspace. The proposed method dramatically reduces the number of pilot symbols needed to identify the channel coefficients. Simulation results show that one pilot symbol is enough to obtain an accurate estimate, while the other methods are not able to recover the channel.
1 The length of the cyclic prefix \(N_{cp}\) should be larger than the delay spread of the channel to eliminate inter-symbol interference and inter-carrier interference. Therefore, if we knew the length of the channel, we could set the cyclic prefix just large enough to satisfy \(N_{cp}>L\). Since this information is in general not available, \(N_{cp}\) is chosen conservatively to guarantee \(N_{cp}>L\). For example, if the distance between the two transceivers is 1 km, a cyclic prefix of 4 μs is sufficient.
2 Physically, the additive noise arises from the thermal agitation of the charge carriers in an electronic device and is independent of the input. It can also contain interference from other systems whose signals are independent of the transmit signal of the considered system.
3 The previous condition is verified for independent channels between different antennas.
4 The pilot symbols are equally spaced within one OFDM symbol.
Appendix 1: Eigenvalues of \(\boldsymbol {R}_{\widetilde u}\)
Following the discussion in Section 3, recall that \(\boldsymbol{M}\) has rank \(N\); it therefore has \(N\) strictly positive eigenvalues, \(\tau_1,~\tau_2,\dots,~\tau_N\), and the eigenvalue 0 with multiplicity \(N\). Since the covariance matrix \(\boldsymbol {R}_{\widetilde u}\) is given by \(\alpha ^{2} \boldsymbol {M} \otimes \boldsymbol {I}_{2N_{t}}\), it follows that \(\boldsymbol {R}_{\widetilde u}\) has the \(N\) eigenvalues \(\alpha^2\tau_1,~\alpha^2\tau_2,\dots,~\alpha^2\tau_N\), each with multiplicity \(2N_t\), and the eigenvalue 0 with multiplicity \(2NN_t\). To find the non-zero eigenvalues, we solve the characteristic polynomial of \(\boldsymbol{M}\) given by
$$\begin{array}{@{}rcl@{}} \det\Big(\boldsymbol{M} - \tau \boldsymbol{I}_{2N}\Big) = 0. \end{array} $$
First, if $\tau=1$ were an eigenvalue of $\boldsymbol{M}$, there would exist a vector $\boldsymbol{a}\neq 0$ such that $\boldsymbol{M}\boldsymbol{a}-\boldsymbol{a}=0$. It would follow that $a(1)=a(2)=\dots=a(2N)=0$, in contradiction with $\boldsymbol{a}\neq 0$. Therefore, 1 is not an eigenvalue of $\boldsymbol{M}$.
By writing M as a block matrix:
$$ \boldsymbol{M} = \left(\begin{array}{ll} \boldsymbol{I}_{N} & \boldsymbol{M}_{1,2} \\ \boldsymbol{M}_{1,2} & \boldsymbol{I}_{N} \end{array}\right), $$
the characteristic polynomial of M, for τ≠1, is written as
$$ \begin{aligned} \det\left(\boldsymbol{M} - \tau \boldsymbol{I}_{2N}\right) & =\det\left((1-\tau)\boldsymbol{I}_{N}\right)\\ &\quad\times\det\left((1-\tau)\boldsymbol{I}_{N} - \boldsymbol{M}_{1,2} (1-\tau)^{-1}\boldsymbol{I}_{N} \boldsymbol{M}_{1,2}\right) \\ & =(1-\tau)^{N} \Big(1-\tau - (1-\tau)^{-1}\Big)^{N}, \end{aligned} $$
where we used the fact that $\boldsymbol{M}_{1,2}\boldsymbol{M}_{1,2}=\boldsymbol{I}_{N}$. The solutions to $\det(\boldsymbol{M}-\tau \boldsymbol{I}_{2N})=0$ are then 0 and 2. Therefore, all non-zero eigenvalues of $\boldsymbol{M}$ are equal to 2, and thus all non-zero eigenvalues of \(\boldsymbol {R}_{\widetilde u}\) are equal to $2\alpha^{2}$.
Appendix 2: Precoding for complex modulation
To keep it simple, we consider matrices $\boldsymbol{P}$ and $\boldsymbol{Q}$ having the following block structure:
$$\begin{array}{@{}rcl@{}} \boldsymbol{P} = \left(\begin{array}{ccccccccc} a \boldsymbol{I}_{N/2} & 0 \boldsymbol{I}_{N/2} \\ 0 \boldsymbol{I}_{N/2} & b \boldsymbol{I}_{N/2} \end{array} \right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \boldsymbol{Q} = \left(\begin{array}{ccccccccc} 0 \boldsymbol{I}_{N/2} & c \boldsymbol{I}_{N/2} \\ d \boldsymbol{I}_{N/2} & 0 \boldsymbol{I}_{N/2} \end{array} \right), \end{array} $$
for given real numbers $a$, $b$, $c$, and $d$. Similarly to the real-modulation case, we have \(\boldsymbol {R}_{\widetilde u} = \boldsymbol {M} \otimes \boldsymbol {I}_{2N_{t}}\), where $\boldsymbol{M}$ for complex modulation is given by
$$\begin{aligned} \boldsymbol{M} & = \left(\begin{array}{cccc} \boldsymbol{P}\boldsymbol{P}^{T} + \boldsymbol{Q} \boldsymbol{Q}^{T} & \boldsymbol{P} \boldsymbol{Q}^{T} + \boldsymbol{Q} \boldsymbol{P}^{T} \\ \boldsymbol{P} \boldsymbol{Q}^{T} + \boldsymbol{Q} \boldsymbol{P}^{T} & \boldsymbol{P}\boldsymbol{P}^{T} + \boldsymbol{Q} \boldsymbol{Q}^{T} \end{array} \right) \\ & =\left(\begin{array}{cccc} (a^{2}+c^{2}) & 0 & 0 & (ad+bc) \\ 0 & (b^{2}+d^{2})& (ad+bc) & 0 \\ 0 & (ad+bc) & (a^{2}+c^{2}) & 0 \\ (ad+bc) & 0 & 0 & (b^{2}+d^{2}) \end{array} \!\right) \! \otimes \! \boldsymbol{I}_{N/2}, \end{aligned} $$
for $a^{2}+c^{2}=b^{2}+d^{2}$. Thus, for $a$, $b$, $c$, and $d$ satisfying $a^{2}+c^{2}=ad+bc$ and $b^{2}+d^{2}=ad+bc$, each row of $\boldsymbol{M}$ is repeated twice and \(\boldsymbol {R}_{\widetilde u}\) has rank $2NN_{t}$. As an example, we can take $a=0.757$, $b=0.5032$, $c=0.4935$, and $d=0.7506$.
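These conditions and the resulting rank deficiency are easy to verify numerically; a short check (the choice N = 8 is arbitrary):

import numpy as np

a, b, c, d = 0.757, 0.5032, 0.4935, 0.7506
print(a*a + c*c, b*b + d*d, a*d + b*c)   # all approximately 0.8166

N = 8
I2, Z = np.eye(N // 2), np.zeros((N // 2, N // 2))
P = np.block([[a * I2, Z], [Z, b * I2]])
Q = np.block([[Z, c * I2], [d * I2, Z]])
M = np.block([[P @ P.T + Q @ Q.T, P @ Q.T + Q @ P.T],
              [P @ Q.T + Q @ P.T, P @ P.T + Q @ Q.T]])
print(np.linalg.matrix_rank(M))          # -> N, so rank(R_u~) = 2*N*N_t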
JI Choi, M Jain, K Srinivasan, P Levis, S Katti, in Proc. ACM MobiCom. Achieving single channel, full duplex wireless communication (ACM Chicago, 2010), pp. 1–12.
M Duarte, A Sabharwal, V Aggarwal, R Jana, KK Ramakrishnan, CW Rice, NK Shankaranarayanan, Design and characterization of a full-duplex multiantenna system for WiFi networks. IEEE Trans. Veh. Technol. 63(3), 1160–1177 (2014).
A Masmoudi, T Le-Ngoc, in Proc. IEEE Global Telecommun. Conf. Self-interference cancellation limits in full-duplex communication systems (IEEE Washington DC, 2016).
MA Khojastepour, S Rangarajan, in Proc. ASILOMAR Signals, Syst., Comput. Wideband digital cancellation for full-duplex communications (IEEE Pacific Grove, 2012), pp. 1300–1304.
A Masmoudi, T Le-Ngoc, Channel estimation and self-interference cancellation in full-duplex communication systems. IEEE Trans. Veh. Technol. 66(1), 321–334 (2017).
M Duarte, C Dick, A Sabharwal, Experiment-driven characterization of full-duplex wireless systems. IEEE Trans. Wireless Comm. 11(12), 4296–4307 (2012).
J Ma, GY Li, J Zhang, T Kuze, H Iura, in Proc. IEEE Global Telecommun. Conf. A new coupling channel estimator for cross-talk cancellation at wireless relay stations (Honolulu, 2009).
JR Krier, IF Akyildiz, in Proc. IEEE Pers. Indoor and Mobile Radio Commun. Active self-interference cancellation of passband signals using gradient descent (IEEE London, 2013).
S Li, RD Murch, in Proc. IEEE Global Telecommun. Conf. Full-duplex wireless communication using transmitter output based echo cancellation, (2011), pp. 1–5.
D Bliss, P Parker, A Margetts, in Prog. IEEE Statistical Signal Processing. Simultaneous transmission and reception for improved wireless network performance, (2007), pp. 478–482.
BP Day, AR Margetts, DW Bliss, P Schniter, Full-duplex bidirectional MIMO: achievable rates under limited dynamic range. IEEE Trans. Signal Process. 60(7), 3702–3713 (2012).
AC Cirik, J Zhang, M Haardt, Y Hua, in IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Sum-rate maximization for bi-directional full-duplex MIMO systems under multiple linear constraints, (2014), pp. 389–393.
Y Hua, P Liang, Y Ma, AC Cirik, Q Gao, A method for broadband full-duplex MIMO radio. IEEE Signal Process. Lett. 19(12), 793–796 (2012).
A Masmoudi, T Le-Ngoc, in Proc. IEEE Veh. Technol. Conf. Self-interference mitigation using active signal injection full-duplex MIMO-OFDM systems (IEEE Montreal, 2016).
J-J Van de Beek, O Edfors, M Sandell, SK Wilson, P Ola Borjesson, in Proc. IEEE Veh. Technol. Conf. On channel estimation in OFDM systems, (1995), pp. 815–819.
H Minn, N Al-Dhahir, Optimal training signals for MIMO OFDM channel estimation. IEEE Trans. Wireless Comm. 5(5), 1158–1168 (2006).
F Gao, Y Zeng, A Nallanathan, T-S Ng, Robust subspace blind channel estimation for cyclic prefixed MIMO OFDM systems: algorithm, identifiability and performance analysis. IEEE J. Select. Areas Comm. 26(2), 378–388 (2008).
C-C Tu, B Champagne, Subspace-based blind channel estimation for MIMO-OFDM systems with reduced time averaging. IEEE Trans. Veh. Technol. 59(3), 1539–1544 (2010).
E Moulines, P Duhamel, J-F Cardoso, S Mayrargue, Subspace methods for the blind identification of multichannel FIR filters. IEEE Trans. Signal Process. 43(2), 516–525 (1995).
Y Zeng, T-S Ng, A semi-blind channel estimation method for multiuser multiantenna OFDM systems. IEEE Trans. Signal Process. 52(5), 1419–1429 (2004).
E de Carvalho, DT Slock, Blind and semi-blind FIR multichannel estimation: (global) identifiability conditions. IEEE Trans. Signal Process. 52(4), 1053–1064 (2004).
A Masmoudi, T Le-Ngoc, A maximum-likelihood channel estimator for self-interference cancellation in full-duplex systems. IEEE Trans. Veh. Technol. 65(7), 5122–5132 (2016).
LTE; evolved universal terrestrial radio access (E-UTRA); user equipment (UE) radio transmission and reception (3GPP TS 36.101 version 11.2.0 release 11). ETSI, Sophia Antipolis Cedex, France (2012).
DW Bliss, TM Hancock, P Schniter, in Proc. ASILOMAR Signals, Syst., Comput. Hardware phenomenological effects on cochannel full-duplex MIMO relay performance (IEEE Pacific Grove, 2012).
E Ahmed, A Eltawil, A Sabharwal, in Proc. ASILOMAR Signals, Syst., Comput. Self-interference cancellation with nonlinear distortion suppression for full-duplex systems (IEEE Pacific Grove, 2013).
D Korpi, L Anttila, V Syrjala, M Valkama, Widely linear digital self-interference cancellation in direct-conversion full-duplex transceiver. IEEE J. Selected Areas Commun. 32(9), 1674–1687 (2014).
A Masmoudi, T Le-Ngoc, in Proc. IEEE Wireless Commun. and Netw. Conf. Self-interference cancellation for full-duplex MIMO transceivers (IEEE New Orleans, 2015).
WH Gerstacker, R Schober, A Lampe, Receivers with widely linear processing for frequency-selective channels. IEEE Trans. Commun. 51(9), 1512–1523 (2003).
R Schober, WH Gerstacker, L-J Lampe, Data-aided and blind stochastic gradient algorithms for widely linear MMSE MAI suppression for DS-CDMA. IEEE Trans. Signal Process. 52(3), 746–756 (2004).
M Kristensson, B Ottersten, D Slock, in Proc. ASILOMAR Signals, Syst., Comput. Blind subspace identification of a BPSK communication channel (IEEE Pacific Grove, 1996).
A Masmoudi, T Le-Ngoc, in Proc. IEEE Int. Conf. Commun. A digital subspace-based self-interference cancellation in full-duplex MIMO transceivers (IEEE London, 2015), pp. 4954–4959.
A Masmoudi, T Le-Ngoc, in Proc. IEEE Int. Conf. Commun. Residual self-interference after cancellation in full-duplex systems (IEEE Sydney, 2014).
JG McMichael, KE Kolodziej, in 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton). Optimal tuning of analog self-interference cancellers for full-duplex wireless communication, (2012), pp. 246–251.
FD Neeser, JL Massey, Proper complex random processes with applications to information theory. IEEE Trans. Inf. Theory. 39(4), 1293–1302 (1993).
A Masmoudi, T Le-Ngoc, in Proc. IEEE Veh. Technol. Conf. A maximum-likelihood channel estimator in MIMO full-duplex systems (IEEE Vancouver, 2014).
This work was supported in part by an R&D Contract from Huawei Technologies Canada and in part by a Grant from the Natural Sciences and Engineering Research Council of Canada.
Department of Electrical and Computer Engineering, McGill University, Montreal, Quebec, Canada
Ahmed Masmoudi & Tho Le-Ngoc
Ahmed Masmoudi
Tho Le-Ngoc
Correspondence to Ahmed Masmoudi.
Masmoudi, A., Le-Ngoc, T. Subspace-based self-interference cancellation for full-duplex MIMO transceivers. J Wireless Com Network 2017, 55 (2017). https://doi.org/10.1186/s13638-017-0839-x
Full-duplex communication
SI suppression
parameter estimation
Subspace method
Second-order statistics
Full-Duplex Radio: Theory, Design, and Applications
Convergence in Distribution of Sums of Random Variables
Suppose I have $X_1,X_2,...,X_n$ random variables that are independent and identically distributed, from ANY distribution. Suppose that $E(X_i)=\mu$ and $V(X_i)=\sigma^2$.
Suppose I define the following random variable:
$$Y=\sum_{i=1}^nX_i$$
What is the limiting distribution of $Y$? That is, as $n$ goes to infinity, what distribution can $Y$ be approximated by?
My intuition tells me that $Y\rightarrow N(n\mu,n\sigma^2)$. In other words, say $200$ was a sufficiently large number for $n$. Then I could approximate $Y$ by a normal distribution with mean $200\mu$ and variance $200\sigma^2$. Is this true, and if so, how can you prove it? If not, what is the limiting distribution of $Y$?
probability probability-theory statistics probability-distributions central-limit-theorem
jippyjoe4
FYI: Note that it makes no sense for $Y$ to have a distribution whose mean and variance depend on $n$ as $n \to \infty$. You should see stats.stackexchange.com/questions/317852/… – Clarinetist Feb 16 '18 at 4:23
Okay, that seems like logical reasoning to me. But there's still the question; what exactly is the limiting distribution? – jippyjoe4 Feb 16 '18 at 4:42
In general: it depends. As you probably know, there are a ton of distributions that when you sum iid random variables with a given distribution, given the conditions you have, you get another distribution. The whole idea that large $n$ gives an approximate normal distribution is just that: i.e., it's an approximate normal distribution. By no means is it exact. – Clarinetist Feb 16 '18 at 4:45
Right. So, going back to my example, say $n=200$. In that situation, sure, I can approximate it by a normal distribution; but what are the mean and variance? Are they what I have stated? – jippyjoe4 Feb 16 '18 at 4:49
Yes. (and adding more characters to meet the minimum requirement.) – Clarinetist Feb 16 '18 at 4:51
Any statement that says $\lim_{n\to\infty}(\cdots\cdots) = (\text{something depending on $n$})$ is wrong if taken literally, and usually wrong if taken any other way.
The distribution $N(n\mu,n\sigma^2)$ depends on $n$ and does not approach a limit as $n$ grows.
However, the distribution of $$ \frac{Y-n\mu}{\sigma\sqrt n} \tag 1 $$ does approach a limit as $n$ grows (unless $\sigma=+\infty,$ as happens in some cases). That limit is $N(0,1).$
This may be understood as meaning that the c.d.f. of $(1)$ converges pointwise to the c.d.f. of $N(0,1).$ If the limit were a distribution that concentrates positive probability at some points, it would be understood as meaning that the c.d.f. converges pointwise except at points where the limiting distribution assigns positive probability.
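A quick simulation illustrates this convergence; a minimal numpy sketch (the exponential distribution and the sample sizes are arbitrary choices):

import numpy as np

# Standardized sums of iid Exp(1) draws (mu = sigma = 1) against N(0,1).
rng = np.random.default_rng(0)
n, reps = 200, 100_000
X = rng.exponential(scale=1.0, size=(reps, n))
Z = (X.sum(axis=1) - n) / np.sqrt(n)
print(Z.mean(), Z.std())   # approximately 0 and 1
print((Z <= 1.0).mean())   # approximately Phi(1) = 0.8413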
Michael Hardy
The (modified, in the sense you will see!) result you are after follows directly from a class of convergence results called central limit theorems (of probability theory). There are several versions of central limit theorems, depending mainly on how the dependence and distribution-heterogeneity conditions vary. In the present case of your interest, we are concerned with the prototype central limit theorem, dealing with a sequence of sums of independent, identically distributed random variables. Proving it requires lots of prerequisites that I suspect you have not yet been exposed to, so let me present the idea of a typical proof instead. The typical proof scheme is to utilize the fact that weak convergence of distribution functions is equivalent to the pointwise convergence of the corresponding characteristic functions. Then it can be shown, using the independence assumption and a second-order Taylor expansion, that the sequence of standardized sums of random variables converges in distribution to a standard normal random variable. It turns out that the weak convergence of the distribution functions of the standardized sums to the standard normal distribution function is also uniform (even in the case where the random variables involved are independent but nonidentical, as long as the so-called Lindeberg condition is satisfied). So, in fact, if $F_{Y_{n}}$ is the distribution function of $Y_{n} := \sum_{1}^{n}X_{i}$ and if $\Phi$ is that of a standard normal random variable, then we have $F_{Y_{n}}(y) - \Phi\big(\frac{y - n\mu}{\sigma\sqrt{n}}\big) \to 0$ as $n \to \infty$ for all $y$. Then, precisely speaking, we say that $Y_{n}$ is distributed asymptotically as $N(n\mu, n\sigma^{2})$. I guess this is what you are after.
To sum up: we say that $(Y_{n}-EY_{n})/\sqrt{\text{var}(Y_{n})}$ converges in distribution to $N(0,1)$, or that $N(0,1)$ is the limiting distribution of the sequence of standardized $Y_{n}$, but that $Y_{n} \sim_{A} N(EY_{n}, \text{var}(Y_{n}))$, read: $Y_{n}$ is asymptotically normally distributed with mean $EY_{n}$ and variance $\text{var}(Y_{n})$.
Megadeth
Higher-rank polymorphism over unboxed types
I have a language in which types are unboxed by default, with type inference based on Hindley–Milner. I'd like to add higher-rank polymorphism, mainly for working with existential types.
I think I understand how to check these types, but I'm not sure what to do when compiling. Currently, I compile polymorphic definitions by generating specialisations, much like C++ templates, so that they can work with unboxed values. E.g., given a definition of f<T>, if the program invokes only f<Int32> and f<Char>, then only those specialisations appear in the compiled program. (I'm assuming whole-program compilation for now.)
But when passing a polymorphic function as an argument, I don't see how I can generate the right specialisation statically, because the function could be selected at runtime. Do I have no choice but to use a boxed representation? Or is there a way around the issue?
My first thought was to somehow encode rank-n polymorphism as rank 1, but I don't believe it's possible in general because a formula in constructive logic doesn't necessarily have a prenex normal form.
Jon Purdy
An alternative is to reduce the amount of boxing needed by storing bitmaps for which arguments of a function and words in memory are pointers. Then a polymorphic function/struct is actually polymorphic over a pointer or an arbitrary word of data, and structs can store their last field (even if it's polymorphic) inline. Those bitmaps can also be used by the GC to avoid the need for tagwords for non-sum types. – fread2281 Feb 16 '17 at 22:19
@fread2281: I actually used to do something like that in an older version of the language. I don't currently generate tags for non-sum types, and there's no GC. I think that's compatible with Neel K's approach as well. – Jon Purdy Feb 16 '17 at 23:16
I've thought a bit about this. The main issue is that in general, we don't know how big a value of polymorphic type is. If you don't have this information, you have to get it somehow. Monomorphisation gets this information for you by specializing away the polymorphism. Boxing gets this information for you by putting everything into a representation of known size.
A third alternative is to keep track of this information in the kinds. Basically, what you can do is to introduce a different kind for each data size, and then polymorphic functions can be defined over all types of a particular size. I'll sketch such a system below.
$$ \newcommand{\bnfalt}{\;\;|\;\;} \newcommand{\rule}[2]{{\mathord{\array{#1}} \over {\mathord{#2}}}} \newcommand{\judge}[3]{{#1} \vdash {#2} : {#3}} $$ $$ \begin{array}{llcl} \mbox{Kinds} & \kappa & ::= & n \\ \mbox{Type constructors} & A & ::= & \forall \alpha:\kappa.\; A \bnfalt \alpha \bnfalt A \times B \bnfalt A + B \bnfalt A \to B \\ & & | & \mathsf{ref}\;A \bnfalt \mathsf{Pad}(k) \bnfalt \mu \alpha:\kappa.\; A\\ \end{array} $$
Here, the high level idea is that the kind of a type tells you how many words it takes to lay out an object in memory. For any given size, it's easy to be polymorphic over all types of that particular size. Since every type -- even polymorphic ones -- still has a known size, compilation isn't any harder than it is for C.
The kinding rules turn this English into math, and should look something like this: $$ \rule{ \alpha:n \in \Gamma} { \judge{\Gamma}{\alpha}{n} } \qquad \rule{ \judge{\Gamma, \alpha:n}{A}{m} } { \judge{\Gamma}{\forall \alpha:n.\; A}{m} } $$ $$ \rule{ \judge{\Gamma}{A}{n} & \judge{\Gamma}{B}{m} } { \judge{\Gamma}{A \times B}{n + m} } \qquad \rule{ \judge{\Gamma}{A}{n} & \judge{\Gamma}{B}{n} } { \judge{\Gamma}{A + B}{n + 1} } $$ $$ \rule{ \judge{\Gamma}{A}{m} & \judge{\Gamma}{B}{n} } { \judge{\Gamma}{A \to B}{1} } \qquad \rule{ \judge{\Gamma}{A}{n} } { \judge{\Gamma}{\mathsf{ref}\;A}{1} } $$ $$ \rule{ } { \judge{\Gamma}{\mathsf{Pad}(k)}{k} } \qquad \rule{ \judge{\Gamma, \alpha:n}{A}{n} } { \judge{\Gamma}{\mu \alpha:n.\; A}{n} } $$
So the forall quantifier requires you to give the kind you are ranging over. Likewise, pairing $A \times B$ is an unboxed pair type, which just lays out an $A$ next to a $B$ in memory (like a C struct type). Disjoint unions take two values of the same size, and then add a word for a discriminator tag. Functions are closures, represented as usual by a pointer to a record of the environment and the code.
References are interesting -- pointers are always one word, but they can point to values of any size. This lets programmers implement polymorphism over arbitrary objects by boxing, but doesn't require them to do so. Finally, once explicit sizes are in play, it's often useful to introduce a padding type, which uses space but doesn't do anything. (So if you want to take the disjoint union of an int and a pair of ints, you'll need to add padding to the first int, so that the object layout is uniform.)
Recursive types have the standard formation rule, but note that recursive occurrences have to be the same size, which means you usually have to stick them in a pointer to make the kinding work out. E.g., the list datatype could be represented as
$$ \mu \alpha:1.\; \mathsf{ref}\;(\mathsf{Pad}(2) + \mathsf{int} \times \alpha) $$
So this points to an empty list value, or a pair of an int and a pointer to another linked list.
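Concretely, here is a rough sketch (illustrative only, assuming one-word ints; Python 3.10+) of the size computation the kinding rules induce, with one-word closures and references as in the judgements above:

```python
# Word sizes of types under size-indexed kinds: kind n = "occupies n words".
from dataclasses import dataclass

@dataclass
class Var:  kind: int                  # type variable alpha : n
@dataclass
class Pair: left: "Ty"; right: "Ty"    # unboxed A x B, laid out side by side
@dataclass
class Sum:  left: "Ty"; right: "Ty"    # A + B: equal-size payloads + tag word
@dataclass
class Fun:  dom: "Ty"; cod: "Ty"       # closure: one-word pointer
@dataclass
class Ref:  body: "Ty"                 # pointer to a value of any size
@dataclass
class Pad:  words: int                 # Pad(k): k words of padding

Ty = Var | Pair | Sum | Fun | Ref | Pad

def size(t: Ty) -> int:
    """The kind (size in words) assigned by the kinding rules."""
    match t:
        case Var(kind=n):   return n
        case Pair(a, b):    return size(a) + size(b)
        case Sum(a, b):
            assert size(a) == size(b), "summands must have equal kinds"
            return size(a) + 1
        case Fun() | Ref(): return 1
        case Pad(words=k):  return k
    raise TypeError(t)

# The list example: mu alpha:1. ref (Pad(2) + int * alpha), with int one word.
int_list = Ref(Sum(Pad(2), Pair(Var(1), Var(1))))
print(size(int_list))   # 1 -- a list value is just a pointer
```

Every type, polymorphic or not, gets a statically known size, which is what makes compilation here no harder than for C.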
Type checking for systems like this is also not very hard; the algorithm in my ICFP paper with Joshua Dunfield, Complete and Easy Bidirectional Typechecking for Higher-Rank Polymorphism, applies to this case with almost no changes.
Neel Krishnaswami
Cool, I think this neatly covers my use case. I was aware of using kinds to reason about value representations (like GHC's * vs. #), but hadn't considered doing it this way. It seems reasonable to restrict higher-ranked quantifiers to types of known size, and I think this would also let me generate per-size specialisations statically, without needing to know the actual type. Now, time to re-read that paper. :) – Jon Purdy Feb 14 '17 at 3:34
This seems to be closer to a compilation problem than a "theoretical computer science" problem, so you're probably better off asking elsewhere.
In the general case, indeed, I think there is no other solution than using a boxed representation. But I also expect that in practice there are many different alternative options, depending on the specifics of your situation.
E.g., the low-level representations of unboxed arguments can usually be categorized into very few alternatives, e.g. integer-or-similar, floating-point, or pointer. So for a function f<T>, maybe you really only need to generate 3 different unboxed implementations, and you can represent the polymorphic one as a tuple of those 3 functions, so that instantiating T to Int32 is just selecting the first element of the tuple, and so on.
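A toy rendering of that idea (my sketch, with hypothetical names): one compiled body per representation class, and the "polymorphic" function value is just a table of those bodies.

```python
# Map each source type to its low-level representation class.
REP_CLASS = {"Int32": "int", "Char": "int", "Float64": "float", "String": "ptr"}

def f_int(x):   return x    # stand-in for the integer-register body of f
def f_float(x): return x    # stand-in for the floating-point body of f
def f_ptr(x):   return x    # stand-in for the pointer body of f

# The "tuple" of specialisations that represents polymorphic f at runtime.
F = {"int": f_int, "float": f_float, "ptr": f_ptr}

def instantiate(table, type_name):
    # f<T> chosen at runtime: index the table by T's representation class.
    return table[REP_CLASS[type_name]]

g = instantiate(F, "Int32")   # selects f_int without knowing T statically
print(g(7))
```

The win is that the number of compiled bodies is bounded by the number of representation classes, not by the number of instantiating types.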
Stefan
Thanks for your help. I wasn't really sure where to ask, since a compiler spans from high-level theory down to low-level engineering, but I figured people around here would have some ideas. It's looking like boxing may indeed be the most flexible approach here. After reading your answer and thinking on it more, the only other reasonable solution I've been able to come up with is to give up some flexibility and require polymorphic arguments to be known statically, e.g., by passing them as type parameters themselves. It's tradeoffs all the way down. :P – Jon Purdy Feb 13 '17 at 3:57
The OP's question contains perfectly valid TCS problems, like how to do type inference when Damas-Hindley-Milner is extended with higher rank types. In general rank-2 polymorphism has decidable type-inference but for rank k>2 type-inference is undecidable. Whether the Damas-Hindley-Milner restriction changes this, I don't know. Finally just about everything modern compilers do should be part of TCS, but usually isn't because the compiler implementors are ahead of theoreticians. – Martin Berger Feb 13 '17 at 9:15
Enhancing biogas plant production using pig manure and corn silage by adding wheat straw processed with liquid hot water and steam explosion
Michał Gaworski1,
Sławomir Jabłoński1,
Izabela Pawlaczyk-Graja2,
Rafał Ziewiecki2,
Piotr Rutkowski3,
Anna Wieczyńska2,
Roman Gancarz2 &
Marcin Łukaszewicz ORCID: orcid.org/0000-0002-1453-83761
Biotechnology for Biofuels, volume 10, Article number: 259 (2017)
Pig manure utilization and valorization is an important topic, with tightening regulations focused on ecological and safety issues. By itself, pig manure is a poor substrate for biogas production because of its excessive nitrogen content relative to available organic carbon. Such a substrate is alkaline, methanogenesis can be suppressed, and so additional substrates with high organic carbon must be added. The most promising is straw, which is available from fields adjacent to the biogas plant. However, the abundant lignocellulosic biomass of wheat straw undergoes slow decomposition, and only a fraction of its chemical energy can be converted into biogas; thus, economical pretreatment methods that increase bioavailability are sought.
A method was investigated to increase the methane yield in a full-scale plant co-fermenting pig manure with corn silage, which was the default substrate in the original source reactors. Increased lignocellulosic bioavailability of wheat straw was achieved by combining liquid hot water (LHW) and steam explosion (SE). According to FT-IR analysis, the treatment resulted in hemicellulose hydrolysis, partial cellulose depolymerization, and lignin bond destruction. Low-mass polysaccharides (0.6 × 103 g mol−1) had a significantly higher concentration in the leachate of LHW-SE wheat straw than in that of raw wheat straw. The methanogenic potential was evaluated using inocula from two different biogas plants to study the influence of the microorganism consortia. The yield was 24–34% higher after the pretreatment process. In the full-scale biogas plant, the optimal conditions were ~ 165 °C, ~ 2.33 MPa, and 10 min for LHW and ~ 65 °C and ~ 0.1 MPa for SE. The processes did not generate detectable inhibitors, such as furfural and 5-hydroxymethylfurfural, according to GC–MS analysis.
The LHW-SE combined pretreatment process increases the bioavailability of carbohydrates from wheat straw. LHW-SE treated wheat straw gave biogas yields similar to corn silage, thus enabling at least partial replacement of corn silage, and it is good for diversification of substrates. Surprisingly, a microbial consortium from another biogas plant fed with other substrates may utilize the tested substrate more efficiently. Thus, the methanogenic consortium may be considered in the optimization process at industrial scale. The efficiency was calculated, and the LHW-SE process may be profitable at full industrial scale; further optimization is proposed.
The profitability of biogas plants in the European Union using biomass could be compromised in the absence of preferential regulations or by market fluctuations, such as low prices of green certificates [1]. Environmentally, the most advantageous option is processing organic waste in biogas plants instead of dedicated biomass grown on fields. However, the production capacity of biogas from waste may be too low for a biogas plant to be profitable. Therefore, there is a need for process optimization and the use of additional substrates [2, 3].
Biogas substrates vary in their decomposition rate and methane production yield. Therefore, a combination of feed additives and pretreatment methods should give the highest efficiency of biogas production while reducing the decomposition time required for a substrate [4]. There are several methods of substrate pretreatment to improve the decomposition and methane yield [5]. However, for application on an industrial scale, these methods must be evaluated in terms of net energy gain and economic viability.
Pig manure alone is a poor substrate for biogas production because of its excessive nitrogen content relative to available organic carbon. In addition, high nitrogen content may result in toxic levels of ammonia. Thus, additional substrates with high organic carbon must be added. The most promising is straw, which is available from fields adjacent to the biogas plant [6]. However, the abundant lignocellulosic biomass of wheat straw undergoes slow decomposition, and only a fraction of the chemical energy can be converted into biogas. Increased lignocellulosic biomass conversion may be achieved by pretreatment methods such as liquid hot water (LHW) and steam explosion (SE) [7, 8]. The aim of this study is to find alternative, more economical methods of methane production for the Koczała full-scale biogas plant (POLDANOR; Poland) using pig manure and corn silage. For this purpose, the structural changes and the methanogenic potential of treated and untreated materials were investigated.
The novelty of our research lies in the analysis of the impact of LHW-SE pretreatment of wheat straw on its real biogas yield potential in a full-scale biogas plant, together with a comprehensive analysis of the process. To date, many studies have tried to predict theoretically how a particular substrate would behave in a full-scale biogas plant after pretreatment [9, 10]; these were conducted in small-scale plants and laboratory studies, and they point out the need to confront their assumptions with full-scale plant results [11]. As shown in our research, the theoretical and real estimations are not consistent, and the theoretical biogas yield proved to be underestimated.
Wheat straw (Triticum aestivum L.) was kindly supplied by the farms of Poldanor S. A. (Człuchów County, Pomorskie Voivodeship, Poland). The straw was dried in the field under the atmospheric conditions of a hot, dry summer and then stored in a warehouse until use. The dry matter content of the wheat straw was 93.30% ± 0.20%. For LHW-SE pretreatment, light yellow, non-moldy wheat straw was chopped into approximately 10-mm pieces by a crop chopper ("DOZAMECH", Odolanów, Poland). Recycled water was used in the LHW-SE pretreatment, obtained by mechanically squeezing post-fermentation sludge from a biogas plant.
Liquid hot water–steam explosion pretreatment of wheat straw
LHW-SE pretreatment of the wheat straw was carried out in an industrial-scale combined installation (Koczała agricultural biogas plant, Poldanor S. A., Przechlewo, Poland). The plant construction concept is based on the general principles of the LHW and SE processes [12]. Briefly, the ground, dry wheat straw and recycled water were moved through a pipe reactor by a set of high-pressure pumps (2.33 MPa), with the temperature maintained under the boiling point (~ 165 °C). The retention time in the pipe reactor was about 10 min, keeping the severity factor below the point at which inhibitors of the methane fermentation process, such as furfural and 5-hydroxymethylfurfural (HMF), are produced [7].
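The text controls the process through a severity factor but does not give its formula; the commonly used Overend–Chornet definition is sketched below as an assumption, to show roughly where the conditions above sit.

```python
# Severity factor log10(R0), with R0 = t * exp((T - 100) / 14.75);
# t in minutes, T in degrees Celsius (Overend-Chornet form, assumed).
import math

def severity_factor(t_min: float, temp_c: float) -> float:
    return math.log10(t_min * math.exp((temp_c - 100.0) / 14.75))

print(f"SF at 165 C for 10 min: {severity_factor(10.0, 165.0):.2f}")  # ~2.9
```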
The wheat straw pulp then enters the decompression tank, where a rapid phase transition occurs. After expansion at 65 °C in the decompression tank, the wheat straw pulp is directly fed to the biogas plant. The liquid effluent (recycled water) from the biogas plant was used as a reaction medium in the LHW-SE process. The ratio of wheat straw to recycled water was between 20:1 and 23:1. The daily continuous LHW-SE processing plant processes 2300–3800 kg of wheat straw using 100–160 m3 of recycled water.
The total solids (TS), volatile solids (VS), and ash contents were estimated according to the standard methods of the American Public Health Association [13] for preliminary characterization of the wheat straw, LHW-SE wheat straw, recycled water, and inocula used for biogas production. Fourier transform infrared (FT-IR) spectra of dry raw and LHW-SE pretreated wheat straw blades were obtained in the range of 400–4000 cm−1 on an FT-IR spectrophotometer (Bruker Vector 22 FT-IR) with a DTGS detector (Bruker, Germany) using a KBr disc containing 1% of the analyzed sample. The spectra were used to determine the changes in the functional groups that may have been caused by the pretreatment.
Size-exclusion chromatography (SEC) with an HPLC system was used to estimate the molecular mass of the water-soluble wheat straw products, compare them to the LHW-SE-processed pulp, and eliminate possible impurities from the recycled water. Water-soluble compounds from the wheat straw were isolated by maceration of 200 g of the ground wheat straw with 1000 mL of deionized water at room temperature for 7 days in the dark. The extract was then filtered through the Whatman filter paper to remove solids and evaporated under reduced pressure until dry.
The LHW-SE wheat straw pulp was centrifuged at 15,000×g for 10 min (Eppendorf Centrifuge 5804, Germany). The supernatant was collected and evaporated under reduced pressure until dry. The recycled water was also filtered through Whatman filter paper to remove some solid impurities and evaporated under reduced pressure until dry using a rotary evaporator. Each dry sample was dissolved in deionized water to obtain a concentration of 3 mg mL−1 and then centrifuged at 2000×g for 5 min. Each supernatant was filtered through a syringe filter with 0.45-µm pore size (Costar, Corning, NY, USA) and degassed before analysis.
For the chromatographic separation, tandem columns consisting of a Hema-Bio 300 and a Hema-Bio 100 (Tessek, Czech Republic) were used, with a total mass resolving power in the range of 8 × 104–6 × 105 g mol−1. Deionized water was used as the eluent with a flow rate of 0.6 mL min−1. The injection volume was maintained at 100 μL. The molecular mass and its distribution among the samples were analyzed based on saccharides and phenolics using an HPLC system (Gilson, Poland) equipped with a GX-271 Liquid Handler, a UV/VIS-152 detector at a wavelength of 270 nm, and a prepELS II evaporative light scattering detector. The temperatures in the drift tube and the spray chamber were set at 45 and 10 °C, respectively. The molecular mass of the samples was estimated using a calibration curve of dextran standards (7 × 104, 2 × 105, 5 × 105 and 1 × 106 g mol−1) (Sigma-Aldrich, Germany). The results were analyzed using Trilution LC software v2.1.
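For readers unfamiliar with SEC calibration, the estimation step works roughly as follows; this is a generic sketch with invented retention times, using only the dextran standard masses listed above.

```python
# SEC calibration: log10(M) is approximately linear in retention time
# over the column's working range; fit on standards, invert for samples.
import numpy as np

std_mass = np.array([7e4, 2e5, 5e5, 1e6])       # dextran standards [g/mol]
std_rt   = np.array([14.2, 12.8, 11.5, 10.4])   # retention times [min], assumed

slope, intercept = np.polyfit(std_rt, np.log10(std_mass), deg=1)

def estimate_mass(rt_min: float) -> float:
    """Peak retention time -> estimated molecular mass [g/mol]."""
    return 10.0 ** (slope * rt_min + intercept)

print(f"peak at 12.0 min: ~{estimate_mass(12.0):.3g} g/mol")
```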
Some of the volatile products in the LHW-SE wheat straw were analyzed using GC–MS to verify that the LHW-SE process was carried out under conditions where inhibitors of the methane fermentation process are absent. A pulp sample of LHW-SE wheat straw was centrifuged at 8000×g for 10 min to separate the straw blade fraction from the liquid suspension. The straw blades were dried at 37 °C for 14 days and then extracted according to a previously described method [14]. The drying was performed under vacuum in a vacuum dryer [Binder VD 23 (E2.1), Germany] to avoid potential microbial degradation. The dry wheat straw blades (10.0 g) were macerated with 200 mL of chloroform and then 200 mL of methanol for 72 h each at room temperature. Each extract was filtered through Whatman filter paper and evaporated under reduced pressure until dry.
GC–MS analysis was performed according to Rutkowski and Kubacki [15]. In brief, each collected extract was dissolved in its previous solvent and analyzed using an HP6890 gas chromatograph equipped with an HP5973 mass selective detector and HP-5 ms column (25 m × 0.25 mm i.d., 0.25-µm film thickness, cross-linked 5% PH ME siloxane). The oven temperature program was 50–280 °C (4 °C min−1) after an initial 1 min isothermal period. The final temperature was kept for 10 min, and the flow rate of helium was 1 mL min−1. The inlet temperature was set at 260 °C. The sample injection was done in split mode (1:5). The mass spectrometer was set at an ionizing voltage of 70 eV with a mass range of m/z 15–450. Organic compounds were identified by comparing the mass spectra of the resolved components using NIST electronic-library search routines.
Inocula and substrates
LHW-SE wheat straw and raw wheat straw were used as the initial substrates in laboratory-scale biogas production. Both were stored for 14 days before use in the dark in sterile, anaerobic, dry conditions in high-density polyethylene (HDPE) bags. The inocula of the methane reactors were taken from the Koczała biogas plant (KB) (Poldanor S. A. Przechlewo Poland), which processes pig manure and corn silage. A positive control was obtained from the Strzelin agricultural biogas plant (SB) (Südzucker Polska S.A. Strzelin Poland), which processes beet pulp. Both samples of inocula were taken 4 days before the experiment and stored at 20–37 °C in polyethylene jars.
Experimental design of LHW-SE pretreated wheat straw methanogenic potential
Methanogenic potential tests were conducted similarly to Jabłoński et al. [16] with modifications. In the experiment, 30 batch glass reactors with volumes of 1000 mL were used for measurements: five reactors for each experiment with dry wheat straw, with LHW-SE pretreated wheat straw, and without substrate as a reference sample. The reactors were loaded with inocula and operated for 28 days at a constant temperature corresponding to the process carried out in the source biogas plant, which was 50 °C for KB and 39 °C for SB. At the beginning, 500 mL of inoculum was added into each bioreactor. Half of the bioreactors received the inoculum from SB and the other half received inoculum from KB. Next, 100 mL of LHW-SE wheat straw substrate was added to the first group of bioreactors containing SB inoculum (SB1). Similarly, 100 mL of LHW-SE wheat straw was added to bioreactors containing KB inoculum (KB1). The third group of bioreactors, with SB inoculum, received 3.2 g of dry wheat straw and 100 mL of recycled water (SB2), and the fourth group, with KB inoculum, received 3.2 g of dry wheat straw and 100 mL of recycled water (KB2). The fifth group, with SB inoculum, was used as control probes and received 100 mL of distilled water (SBc). The sixth group of bioreactors, with KB inoculum, served as control mixtures and received 100 mL of distilled water (KBc).
The mass of dry wheat straw added to the digestate was chosen so that the initial amount of VS from the substrates would be equal. The samples were stirred manually just before the gas measurements. The amount of biogas produced from the biomass was calculated as the difference between the production in the sample bottles and the production in the blank bottles (without the addition of substrate). PVC urine bags of 2000 mL (Cezal, Poland) with drain valves connected to the reactors' outlets were used to collect the biogas. The volumes of biogas produced were measured at established time intervals after 1, 2, 3, 4, 5, 7, 10, 12, 14, 17, 21 and 28 days. The gas samples were taken from the collecting containers through the dedicated outlet port using a 100-mL PVC syringe. The same operation was repeated for each reactor. Biogas volumes were calculated for the standard state (0.1 MPa).
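The bookkeeping just described is simple to state in code; the sketch below uses dummy volumes, not the study's measurements.

```python
# Net biogas attributable to the substrate: sample minus blank control,
# accumulated over the sampling days listed above.
import numpy as np

days    = np.array([1, 2, 3, 4, 5, 7, 10, 12, 14, 17, 21, 28])
sample  = np.array([120, 150, 140, 110, 95, 160, 180, 90, 70, 80, 60, 40])  # mL
control = np.array([ 30,  35,  30,  25, 20,  35,  40, 20, 15, 20, 15, 10])  # mL

net_cumulative = np.cumsum(sample - control)   # mL at standard state
for d, v in zip(days, net_cumulative):
    print(f"day {d:2d}: {v:4d} mL")
```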
Koczała biogas plant basic characteristics
KB contains three fermentation tanks with a capacity of approximately 3010 m3 and two digestion tanks with a capacity of approximately 3990 m3. The temperature of the biogas production in the digesters is 50 °C, and the pH value is in the range of 7.45–7.60. The organic loading rate (OLR) is 5.6 kg VS/(m3 day), and the hydraulic retention time (HRT) is 31 days [17]. The maximum energy efficiency of the cogeneration engines (electric energy/thermal energy) of KB is 2126/2206 kWh, and the average methane concentration in the biogas is 51.5%. The engine efficiency is assumed to be 40%, and the methane energy value is assumed to be 5.15 kWh m−3.
The theoretical biogas production was predicted as:
$$V_{\text{b}} = m_{\text{dm}} \times \;V_{\text{teo}} ,$$
where $V_{\text{b}}$ is the theoretical biogas volume obtained from a substrate's dry mass [m3], $m_{\text{dm}}$ is the added mass of the particular substrate [t], and $V_{\text{teo}}$ is the assumed biogas volume that should be obtained from the substrate's dry mass, taken from experiments or published sources. The total theoretical biogas volume over all substrates was calculated as:
$$V_{\text{c}} = V_{x} + V_{y} + V_{z} + V_{i},$$
where $V_{x}$, $V_{y}$, $V_{z}$, and $V_{i}$ are the theoretical biogas volumes calculated according to Eq. (1). The substrates used in the production of biogas were pig manure, corn silage, and LHW-SE pretreated wheat straw; recirculate was counted as the fourth substrate.
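In code, Eqs. (1)–(2) amount to the following; the masses and unit yields are placeholders for illustration, not the paper's data.

```python
# Per-substrate theoretical volume and the plant total: V_b = m_dm * V_teo.
def v_b(m_dm_t: float, v_teo_m3_per_t: float) -> float:
    return m_dm_t * v_teo_m3_per_t

substrates = {                         # (m_dm [t dm], V_teo [m3 / t dm])
    "pig manure":   (10.0, 250.0),
    "corn silage":  ( 8.0, 600.0),
    "LHW-SE straw": ( 3.0, 600.0),
    "recirculate":  ( 5.0, 100.0),
}
v_total = sum(v_b(m, v) for m, v in substrates.values())
print(f"theoretical daily biogas: {v_total:.0f} m3")
```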
Energy balance calculation
The theoretical average daily energy gain [G] from LHW-SE pretreated WS for period III was predicted as:
$$G = M \cdot V \cdot P \cdot \text{En},$$
where [M] is the average mass input of LHW-SE pretreated WS; [V] is the estimated actual biogas yield potential of LHW-SE pretreated WS; [P] is the average methane concentration in the biogas; and [En] is the methane energy value.
The final electrical energy [Ee] and thermal energy [Et] values were estimated from the engine efficiency [Ef]:
$${\text{Ee}} = G \cdot {\text{Ef}}$$
$${\text{Et}} = G - {\text{Ee}} .$$
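Put together, the balance is a few lines of arithmetic. The engine efficiency (40%) and the methane energy value (5.15 kWh m−3) are the plant figures quoted earlier; the daily straw input and yield below are placeholders, not reported values.

```python
# Daily energy gain and its electric/thermal split from the formulas above.
M  = 3.0      # t dm/day of LHW-SE pretreated wheat straw (assumed)
V  = 600.0    # m3 biogas per t dm (estimated actual potential, period III)
P  = 0.515    # average methane fraction of the biogas
En = 5.15     # kWh per m3 of methane
Ef = 0.40     # cogeneration engine electrical efficiency

G  = M * V * P * En     # average daily energy gain [kWh]
Ee = G * Ef             # electrical energy [kWh]
Et = G - Ee             # thermal energy [kWh]
print(f"G = {G:.0f} kWh/day (electric {Ee:.0f}, thermal {Et:.0f})")
```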
Experimental data were statistically analyzed with Student's t test at a significance level of 0.05, implemented in Microsoft Office 2007.
Raw, dry wheat straw (Triticum aestivum L.) was used as the material for LHW-SE pretreatment to obtain a better substrate for biogas production. To assess the usefulness of the LHW-SE process, it was necessary to verify the laboratory results of the model biogas production process with inoculum received from KB (Fig. 1). HPLC, FT-IR, and GC–MS analyses were conducted to explain the influence of the pretreatment process on the wheat straw structure.
Schematic representation of the processes involved in the experimental setup
Chemical characterization of wheat straw, LHW-SE wheat straw, and inocula
TS, ash, and VS amounts were estimated in the substrates, inocula, and recycled water (Table 1). Dry wheat straw contained high TS (93.3% w/w) and VS content (90.4% w/w), whereas the LHW-SE wheat straw had only 6.4% TS (w/w) and 5.4% VS (w/w). Both the wheat straw and LHW-SE wheat straw contained low amounts of ash (2.9 and 1.0% w/w, respectively). The Koczała inoculum used in the laboratory-scale experiments had 4.5% TS (w/w) and 3.6% VS (w/w), whereas the inoculum from the Strzelin plant used as a control contained less TS (3.1% w/w) and less VS (2.3% w/w). This difference might result from a higher amount of microorganisms in the KB inoculum or a higher content of undigested organic compounds. Both the KB inoculum and SB inoculum contained only 0.9% ash (w/w), while the recycled water had only 3.4% TS (w/w) and 0.8% ash (w/w). The recycled water contained 2.6% (w/w) VS. This may suggest a possible influence of using the recycled water as a medium in the KB biogas production process. To account for this possible impact on the results, recycled water was used in further analyses.
Table 1 TS, ash and VS contents of the materials used in the biogas production experiments
The finely ground samples of the wheat straw and LHW-SE wheat straw were analyzed using FT-IR spectroscopy (Fig. 2) to confirm that the pretreatment process caused structural changes in the plant material. The spectra indicated some similarities and differences. In the frequency region above 3000 cm−1, both spectra showed two wide bands. The first of them occurred with a maximum at 3272 cm−1 in both spectra, and the second occurred around 3094 and 3117 cm−1 for the wheat straw and the LHW-SE wheat straw, respectively. The second bands might correspond to the stretching vibrations ν(O–H) of phenolic groups of lignocellulosic structures as well as hydroxyl bonds from other saccharides in the cell walls of the wheat straw [18, 19]. Interestingly, after the pretreatment of the plant tissues, the second band became more intense, which might indicate a higher concentration of free –OH from other carbohydrate compounds.
FT-IR spectra of wheat straw (WS) and its pretreated solids product (LHW-SE WS)
Another two bands of high intensity, with maxima at 2918 cm−1 and 2852 cm−1, were observed in both spectra due to stretching vibrations ν(C–H); only the first of them was characteristic of −CH3 bonds, which are often present in the lignin network and as methyl esters of uronic acids in hemicelluloses. The second band confirmed the presence of (−CH2−) fragments from saccharide units [18, 19]. The stronger intensity of the first band in the wheat straw spectrum, due to the symmetric stretching vibrations of (C–H), indicated a dominant amount of methyl bonds, which are probably present as the terminal ether bonds of the branched lignin structure [20]. These were almost absent in the LHW-SE wheat straw product, which might suggest that de-esterification and perhaps even some delignification occurred during the pretreatment process [18].
Two sharp peaks centered around 2360 cm−1 were assigned to the characteristic vibration of the aromatic rings present in the lignocellulosic material [21]. In the range of 1800–2000 cm−1, a group of less intense signals was observed in both spectra. These peaks were overtones of aromatic rings, confirming the presence of a rich phenolic lignin structures. Guaiacyl–syringyl lignins (GS) in grass (including cereals) contain major amounts of structural elements derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol [20], as well as some polyphenolic acids as ferulic and p-coumaric acids [22, 23]. They create lignocellulosic macromolecular conglomerates with cellulose and with hemicelluloses full of carboxylic functional groups in esterified and free forms.
The presence of (C=O) bonds of the esterified types [19, 24, 25] was observed in both spectra as stretching vibration signals at 1748 and 1732 cm−1, but they were less intense in the LHW-SE wheat straw spectrum. In contrast, a group of signals in the range of 1697–1634 cm−1 was more intense in the spectrum of the pretreated liquor product compared to the untreated wheat straw. This region typically shows stretching vibration signals of carbonyl groups from carboxylic bonds that are not esterified [19, 24, 25]. An additional band of the symmetric ν(C=O) stretching vibrations with a maximum at 1418 cm−1 was detected in both spectra. Another group of peaks in the range of 1580–1480 cm−1 was ascribed to the skeletal interactions of aromatic rings in lignin [26]. It became more intense after pretreatment of the wheat straw with the LHW-SE process. In summary, the characteristic features of delignification might be the increase in intensity of the general carbonyl absorbances in the range of 1770–1630 and around 1260 cm−1 [26].
Further bands indicated the presence of (C–H) bonds located at 1456 and 1373 cm−1, responsible for asymmetric and symmetric stretching interactions, respectively [23]. Moreover, the 1456/1506 cm−1 ratio is representative of the ratio of syringyl to guaiacyl (S/G) units in lignin [27]. The lower S/G ratio in the LHW-SE wheat straw in comparison to the untreated wheat straw might suggest a loss of S monomers in the process of delignification. Other peaks might confirm this theory: there was a less intense band of the ν(C–O–C) stretching vibrations typical for the syringyl rings at 1317 cm−1 in the spectrum of the pretreated liquor product. The band of the ν(C–O–C) stretching vibrations of the guaiacyl rings detected at 1261 cm−1 was shifted to 1258 cm−1 [23]. There was also lower intensity of the band of the ν(C–O–C) stretching interactions of p-coumaric ester groups typical for p-hydroxyphenyl guaiacyl syringyl (GSH) lignins detected at 1163 cm−1 [28].
FT-IR bands indicating the presence of polysaccharides were found at about 1074, 1038 and 1103 cm−1 [ν(C–O–C), ν(C–OH), and ν(C–C) of the saccharide rings], which were derived from cellulose and hemicelluloses [19, 29, 30]. After the LHW-SE process, the intensity of the signals in this region significantly decreased and shifted. In the spectrum of the LHW-SE wheat straw, signals were detected with maximums at 1126, 1092–1080, and 1047–1016 cm−1 [ν(C–O–C), ν(C–OH), and ν(C–C) of the saccharide rings]. This change might suggest a degradation of the polysaccharide network to shorter saccharide chains and monosaccharides.
In the anomeric regions of the FT-IR spectra of untreated wheat straw and LHW-SE wheat straw, clear bands of low intensity at 897–893 cm−1 were attributed to the β-glycosidic linkages (1 → 4), which are especially characteristic of the cellulose structure. For the α form, the bands typically occur at 837–840 cm−1 [30, 31]. The band of β bonds was much smaller in the spectrum of the pretreated liquor product. There might be a few reasons for this, e.g., weaker bonding dynamics of cellulose fibers due to disruption of the ordered crystal structure during the pretreatment process. Another reason for the lower intensity of this band might be the degradation of β-glycosidic bonds with polyphenolic compounds; some loss of polyphenols in the pretreated liquor product is confirmed by the SEC analysis.
SEC analysis of the water-soluble components was also performed using HPLC (Fig. 3a, b). Using a dual detection system comprising UV–Vis and evaporative light scattering (ELS) detectors, it was possible to detect compounds with and without chromophore groups in their structures, i.e., polyphenolic glycoconjugates and pure saccharides. The water-soluble extracts of both wheat straw and LHW-SE wheat straw contained three fractions of polyphenolic–polysaccharide or oligosaccharide nature (Fig. 3, Table 2). SEC analysis of the wheat straw extract indicated peaks with molecular masses (Mp) of ~ 1500 × 103 g mol−1 (22.0% of the analyzed mixture), ~ 30 × 103 g mol−1 (39.1%), and ~ 1–10 × 103 g mol−1 (37.7%). In the chromatogram of the LHW-SE wheat straw extract, peaks with the following Mp were detected: ~ 2300 × 103 g mol−1 (7.9%), ~ 30 × 103 g mol−1 (16.3%), and ~ 0.2–1 × 103 g mol−1. The last value is notable in that it represents as much as 75.6% of the analyzed mixture. Both chromatograms suggested a conjugate nature of the separated fractions, where saccharides were detected at retention times similar to the polyphenolics, but the values were lower for the LHW-SE wheat straw. The water-soluble fractions of LHW-SE wheat straw contained much less polyphenolic compounds, and the average molecular mass of the last fraction suggested an oligo- or even monosaccharide nature.
Size exclusion chromatography (SEC) analysis of water-soluble compounds. a Saccharide profile of wheat straw (WS) and of its pretreated liquor product (LHW-SE WS), identified by evaporative light scattering (ELS) detection method; b polyphenolic profile of wheat straw (WS) and of its pretreated liquor product (LHW-SE WS), where they were detected using UV–Vis detection system (λ = 270 nm)
Table 2 Results of the SEC-HPLC analysis of the water-soluble components of the LHW-SE WS pulp and recycled water
After the pretreatment process, the monosaccharide peak with the longest retention time (Fig. 3a) increased significantly, while the two peaks of macromolecular structures decreased, as is well visible in the SEC chromatogram (Fig. 3b). In conclusion, the LHW-SE WS product contains much more low-molecular-weight saccharides, which are the most readily absorbed carbon source for the growth and development of the microorganisms; this is inseparable from the increase in the productivity of biogas formation.
It was also necessary to check for possible impurities in the recycled water used in the pretreatment process. The results (Table 2) indicated that the use of compost water obtained from the post-fermentation process may have some influence on the LHW-SE pretreatment process and methanogenesis in the biogas plant. Small amounts of polysaccharides and polyphenolics such as glycoconjugates were found, but there were no monosaccharides. This suggests that the recycled water might be as good as tap water. In summary, the SEC analysis confirmed that the LHW-SE process leads to the hydrolysis of the polysaccharides contained in wheat straw into oligo- and monosaccharides.
Literature data indicate that inhibitors of the methanogenesis process, such as furfural and its derivatives (i.e., HMF), might be produced in the LHW-SE pretreatment process [17]. GC–MS analysis was performed on extracts of the LHW-SE wheat straw, which were obtained using chloroform (extracted mass 4.7% w/w) and methanol (extracted mass 12.2% w/w). The concentrations of the chloroform and methanol extracts used in GC–MS analysis were 0.0237 and 0.0610 g mL−1, respectively. A group of compounds was detected (Table 3), and some of them, such as carboxylic acids, may have a positive influence on the methanogenesis process. These compounds are intermediate substrates and lead to the formation of acetate, carbon dioxide, and hydrogen, which are crucial substrates for methanogenic archaea and may contribute to the overall high amounts of methane produced within the process. No common methanogenesis inhibitors were found [32], including furfural and its derivatives. Extracts of pretreated wheat straw solids did not show any typical inhibitors of the methane fermentation process, such as furfural and HMF, which indicates that the process was carried out under appropriate conditions, although there is still room for improvement. As we did not detect inhibitory compounds (e.g., furfural), it should be possible to increase the severity factor (SF) of the LHW-SE process [7].
Table 3 GC-MS analysis of the LHW-SE WS chloroform extract and methanol extract compounds
Laboratory-scale production of biogas from LHW-SE wheat straw
The influence of the LHW-SE pretreatment of wheat straw was evaluated by biological tests of the methanogenic potential using two different methanogenic consortia: the inocula from the methane reactor of KB and SB. The SB inoculum was used as a control to see whether the different compositions of microorganisms have a significant effect on the amount of gas produced from the processed straw. The cumulative daily biogas production is presented in Fig. 4.
Biogas yield potential measurement of wheat straw (WS) and its pretreated product (LHW-SE WS)
The experiments were carried out in a laboratory. Data are expressed as the cumulative average daily production after deduction of the control samples (C).
The data show that the biogas production increased when using pretreated wheat straw as a raw material in comparison to raw wheat straw. The LHW-SE pretreatment improved the wheat straw decomposition and the methane yield by 24% when using inoculum from KB and by 35% when using inoculum from SB. Notably, the biogas production with the methanogenic consortium from SB was significantly higher than that obtained with the KB consortium. This may result from differences in the anaerobic digestion temperature, which was 50 °C for KB and 39 °C for SB. These temperatures could affect the species composition. Methanogenic species have a broad spectrum of metabolic capabilities [33]. Thus, modification of the consortium could potentially be a good target for further increasing the process efficiency. Independently of the methanogenic consortium, the higher biogas yield after pretreatment suggests a change in the structure of the wheat straw, which contributed to the accelerated and increased production of biogas.
Biogas and LHW-SE plants processing data
To estimate the actual impact of the LHW-SE pretreatment on methane fermentation in a biogas plant, the theoretical and real biogas yields were compared. Processing data from the KB biogas plant are presented for a span of over 5 years, including the averages of 10 days of sampling and standard deviations. The data presented in Figs. 5 and 6 include the average biogas yield in reference to the total organic dry mass (TOC) (Fig. 5a), the type and quantity of organic mass input (Fig. 5b), and actual and theoretically estimated biogas yield (Fig. 6).
Processing data in Koczała biogas plant (KB) presented in the span of over 5 years. a The average biogas production on the total organic dry mass input. b Raw materials contribution in the total organic dry mass
Correlation of real and theoretical biogas yield charts in Koczała plant over 1570 days of work
The theoretical biogas yield was estimated by assigning a methanogenic potential to each independent substrate (Table 4). Observations started on the 400th day because many malfunctions occurred and the methanogenesis was not stable in the first year of operation, mainly as a result of pump failures, unsealing of high-pressure installations, and clogging of pipes. The LHW-SE pretreatment plant was launched on the 1040th day, when observation period I ends and period II begins. During period I, a major malfunction occurred on the 750th day and lasted 200 days, during which the biogas yield dropped by half. This period enabled confirmation of the correlation between the theoretical biogas yield (for each biomass substrate except the pretreated wheat straw) and the real biogas yield. During period II, the LHW-SE pretreatment plant for wheat straw began operation and was stabilized.
Table 4 The average biogas yield produced from the dry organic mass
The biogas plant aims for an average biogas yield of ≈ 500 m3 t dm−1 in reference to TOC in methane fermentation, except in periods when the plant is overfed (Fig. 5a). Overfeeding occurs due to organic overload [16], when the amount of organic matter fed to the biogas plant exceeds the total degradation capacity of the microbes producing biogas. In this case, Fig. 5b shows that the ratio of substrates changes because the corn silage input increases together with the TOC. Despite the high input, the biogas yield drops due to overfeeding.
Figure 6 shows that the highest and most stable biogas production occurred in observation period III (between the 1300th and 1700th days), despite the lowest overall TOC and a significant drop in corn silage input. The drop was slightly compensated by increasing the addition of LHW-SE wheat straw. The theoretical and real biogas yields in period III (Fig. 6) show that the actual long-term biogas yield was higher than the theoretically estimated yield for the first time, despite the low TOC and corn silage input. The only parameters that changed significantly during this time were the quantity and quality of the pretreated wheat straw. It was concluded that the laboratory data on the theoretical methanogenic potential underestimated the actual performance. The biogas potential estimated in the experiments was 350 m3 t dm−1, but the value estimated from the theoretical and actual yields revealed 600 m3 t dm−1 of biogas yield from LHW-SE pretreated wheat straw. The difference between the laboratory-estimated and experimentally measured methanogenic potential of wheat straw may have resulted from the positive impact of the different substrates used in co-digestion [4].
Observation period IV includes an attempt to increase the biogas yield by increasing the input of corn silage and recirculated mass. However, the attempt failed and ended in overfeed conditions. The biogas yield suddenly decreased on the 1700th day, and the decrease lasted 270 days until the end of the experiment. The point with the biggest deviation occurred on day 1040, which corresponds to the launch of the LHW-SE pretreatment plant. This big deviation appeared because the digesters were not fed for a few days before the launch. The big deviations on the 950th, 1260th, and 1320th days resulted from malfunctions in the biogas plant, mainly pump failures, unsealing of high-pressure installations, and clogging of pipes.
In period III, the biogas yield of 600 m3 t dm−1 corresponds to the methanogenic potential of corn silage. This observation suggests that LHW-SE wheat straw, an easily accessible and cheap waste biomass material with the same methanogenic potential, could be a good substitute for corn silage.
Theoretical profitability of LHW-SE pretreatment process
The theoretical profitability was estimated based on the average plant energy consumption presented in Table 5. Combining these data with the methanogenic potential of LHW-SE pretreated WS (Table 4) and the input and output data from Figs. 5 and 6, we estimated the theoretical net energy profit presented in Table 6.
Table 5 Average daily energy consumption in liquid hot water–steam explosion plant
Table 6 Average theoretical daily energy net profitability from liquid hot water–steam explosion plant
Although the theoretical profit seems large, unfortunately its potential has not been exploited. This was due to repeated failures of the installation, the lack of potential buyers for the heat energy, and the unstable methanogenesis in the biogas plant caused by overfeeding. As a result, the gross profit from using this type of plant was sometimes negative.
This study confirmed the hypothesis that the LHW-SE combined pretreatment process increases the bioavailability of carbohydrates in wheat straw for methane fermentation microorganism consortia. The KB inoculum fed with pretreated wheat straw increased the methane yield by 24% in comparison to raw straw. Surprisingly, the SB inoculum produced biogas more efficiently, with 34% higher performance in comparison to the KB inoculum. The data obtained from the KB biogas plant before and after using the LHW-SE pretreated wheat straw suggest that it is good to diversify the substrates, as the pretreated straw gives biogas yields similar to corn silage. According to Jabłoński et al. [33], continuous-flow reactors are favored in one-step pretreatment processes because of their continuous operation. However, they have major drawbacks: relatively low substrate concentrations and a high energy demand for processing (due to pressure and heating in our case). The batch autoclave has an advantage in that no substrate processing is necessary and high solid-to-water ratios can be used. However, the decomposition of sugars can lead to undesired degradation products (furfural, HMF), insufficient lignin removal, and poor enzymatic digestibility. Rogalinski [34] proposed using a fixed-bed reactor that minimizes the disadvantages and enhances the benefits of these two types of reactors. Combined processes of liquid hot water (LHW) and steam explosion (SE) could be considered a good option for the green pretreatment of biomass. However, a new kind of plant should be developed that minimizes heat and processing costs as well as undesired degradation products, while achieving higher lignin degradation rates and enzymatic availability. The latest research indicates that the hydrothermal pretreatment of lignocellulosic biomass continues to be developed [35], and its profitability could still be increased.
ELS:
evaporative light scattering
FT-IR:
Fourier transform infrared
KB:
the Koczała biogas plant
LHW-SE:
liquid hot water–steam explosion
SB:
the Strzelin biogas plant
SEC:
size exclusion chromatography
TOC:
total organic dry mass
TS:
total solids
WS:
wheat straw
Darmani A, Rickne A, Hidalgo A, Arvidsson N. When outcomes are the reflection of the analysis criteria: a review of the tradable green certificate assessments. Renew Sustain Energy Rev. 2016;62:372–81.
Jabłoński SJ, Biernacki P, Steinigeweg S, Lukaszewicz M. Continuous mesophilic anaerobic digestion of manure and rape oilcake—experimental and modelling study. Waste Manag. 2015;35:105–10.
Jabłoński S, Krasowska A, Januszewicz J, Vogt A, Łukaszewicz M. Cascade reactor system for methanogenic fermentation. Chall Mod Technol. 2011;2:37–41.
Szlachta J, Fugol M, Prask H, Kordasz L, Luberański A, Kułażyński M. Analiza i przygotowanie wsadu zawierającego organiczne odpady rolnicze, hodowlane i przemysłowe oraz odchody. Wrocław: Modelowe Kompleksy Agroenergetyczne; 2014.
Montgomery L, Bochmann G. Pretreatment of feedstock for enhanced biogas production. IEA Bioenergy. 2014. Available at: http://www.build-a-biogas-plant.com/PDF/pretreatment_iea2014.pdf. Accessed 30 June 2017.
Madej A. Straw balance in Poland in the years 2010–2014 and forecast to the year 2030. Stow Ekon Rol i Agrobiznesu Rocz Nauk. 2014;18/1:163–8.
Hendriks ATWM, Zeeman G. Pretreatments to enhance the digestibility of lignocellulosic biomass. Bioresour Technol. 2009;100:10–8.
Shaw MD, Karunakaran C, Tabil LG. Physicochemical characteristics of densified untreated and steam exploded poplar wood and wheat straw grinds. Biosyst Eng. 2009;103:198–207.
Alvira P, Negro MJ, Ballesteros I, González A, Ballesteros M. Steam explosion for wheat straw pretreatment for sugars production. Bioethanol. 2016;2:66–75. http://www.degruyter.com/view/j/bioeth.2015.2.issue-1/bioeth-2016-0003/bioeth-2016-0003.xml.
Shafiei M, Kabir MM, Zilouei H, Sárvári Horváth I, Karimi K. Techno-economical study of biogas production improved by steam explosion pretreatment. Bioresour Technol. 2013;148:53–60.
Janzon R, Schütt F, Oldenburg S, Fischer E, Körner I, Saake B. Steam pretreatment of spruce forest residues: optimal conditions for biogas production and enzymatic hydrolysis. Carbohydr Polym. 2014;100:202–10. http://dx.doi.org/10.1016/j.carbpol.2013.04.093.
Kumar P, Barrett DM, Delwiche MJ, Stroeve P. Methods for pretreatment of lignocellulosic biomass for efficient hydrolysis and biofuel production. Ind Eng Chem Res. 2009;48:3713–29.
APHA/AWWA/WEF. Standard methods for the examination of water and wastewater. 20th edn. Washington, 1999. Available at: http://www.standardmethods.org. Accessed 30 June 2017.
Durmaz G, Gökmen V. Determination of 5-hydroxymethyl-2-furfural and 2-furfural in oils as indicators of heat pre-treatment. Food Chem. 2010;123:912–6.
Rutkowski P, Kubacki A. Influence of polystyrene addition to cellulose on chemical structure and properties of bio-oil obtained during pyrolysis. Energy Convers Manag. 2006;47:716–31.
Jabłoński S, Kułażynski M, Sikora I, Łukaszewicz M. The influence of different pretreatment methods on biogas production from Jatropha curcas oil cake. J Environ Manage. 2016;203:714–9.
Drosg B. Process monitoring in biogas plants. IEA Bioenergy. 2013. Available at: http://www.iea-biogas.net/files/daten-redaktion/download/Technical Brochures/Technical Brochure process_montoring.pdf. Accessed 30 June 2017.
Sun RC, Tomkinson J. Comparative study of lignins isolated by alkali and ultrasound-assisted alkali extractions from wheat straw. Ultrason Sonochem. 2002;9:85–93.
Kačuráková M, Capek P, Sasinková V, Wellner N, Ebringerová A. FT-IR study of plant cell wall model compounds: pectic polysaccharides and hemicelluloses. Carbohydr Polym. 2000;43:195–203.
Upton BM, Kasko AM. Strategies for the conversion of lignin to high-value polymeric materials: review and perspective. Chem Rev. 2016;116:2275–306.
Ruiz HA, Ruzene DS, Silva DP, Macieira da Silva FF, Vicente AA, Teixeira JA. Development and characterization of an environmentally friendly process sequence (autohydrolysis and organosolv) for wheat straw delignification. Appl Biochem Biotechnol. 2011;164:629–41.
Sain M, Panthapulakkal S. Bioprocess preparation of wheat straw fibers and their characterization. Ind Crop Prod. 2006;23:1–8.
Sun XF, Sun RC, Fowler P, Baird MS. Extraction and characterization of original lignin and hemicelluloses from wheat straw. J Agric Food Chem. 2005;53:860–70.
Bijak M, Saluk-Juszczak J, Tsirigotis-Maniecka M, Komorowska H, Wachowicz B, Zaczyńska E, Czarny A, Czechowski F, Nowak P, Pawlaczyk I. The influence of conjugates isolated from Matricaria chamomilla L. on platelets activity and cytotoxicity. Intern J Biol Macromol. 2013;61:218–29.
Šutovská M, Capek P, Fraňová S, Pawlaczyk I, Gancarz R. Antitussive and bronchodilatory effects of Lythrum salicaria polysaccharide–polyphenolic conjugate. Int J Biol Macromol. 2012;51:794–9.
Stewart D, Wilson HM, Hendra PJ, Morrison IM. Fourier-transform infrared and Raman-spectroscopic study of biochemical and chemical treatments of oak wood (Quercus rubra) and barley (Hordeum vulgare) straw. J Agric Food Chem. 1995;43:2219–25.
Faix O. Classification of lignins from different botanical origins by FT-IR spectroscopy. Holzforschung. 1991;45:21–7.
Iskalieva A, Yimmou BM, Gogate PR, Horvath M, Horvath PG, Csoka L. Cavitation assisted delignification of wheat straw: a review. Ultrason Sonochem. 2012;19:984–93.
Šutovská M, Capek P, Kocmálová M, Fraňová S, Pawlaczyk I, Gancarz R. Characterization and biological activity of Solidago canadensis complex. Intern J Biol Macromol. 2013;52:192–7.
Zhbankov RG, Adrianov VM, Marchewka MK. Fourier transform IR and Raman spectroscopy and structure of carbohydrates. J Mol Struct. 1997;436(437):637–54.
Pawlaczyk-Graja I, Balicki S, Ziewiecki R, Matulova M, Capek P, Gancarz R. Polyphenolic–polysaccharide conjugates of Sanguisorba officinalis L. with anticoagulant activity mediated mainly by heparin cofactor II. Int J Biol Macromol. 2016;93:1019–29.
Zhou Z, Meng Q, Yu Z. Effects of methanogenic inhibitors on methane production and abundances of methanogens and cellulolytic bacteria in in vitro ruminal cultures. Appl Environ Microbiol. 2011;77:2634–9.
Jabłoński S, Rodowicz P, Łukaszewicz M. Methanogenic archaea database containing physiological and biochemical characteristics. Int J Syst Evol Microbiol. 2015;65:1360–8.
Rogalinski T, Ingram T, Brunner G. Hydrolysis of lignocellulosic biomass in water under elevated temperatures and pressures. J Supercrit Fluids. 2008;47:54–63.
Veluchamy C, Kalamdhad AS. Enhanced methane production and its kinetics model of thermally pretreated lignocellulose waste material. Bioresour Technol. 2017;241:1–9.
MG, SJ, RG and MŁ conceived the work. MG, IP-G and MŁ wrote the manuscript. IP-G and MG performed the FT-IR analysis and description. IP-G and RZ performed the SEC-HPLC analysis and description. MG, AW and PR performed the GC–MS analysis. MG and SJ evaluated methanogenic potential. All authors read and approved the final manuscript.
The authors would like to thank the POLDANOR S.A. company, especially Mr. Grzegorz Brodziak, Mr. Łukasz Majewski, and Mr. Beny Laursen, for their assistance and the use of their facility. We would also like to thank the Südzucker Polska S.A. company for access to the inoculum from the Strzelin biogas plant used in the experiments. Dr. Katarzyna Pstrowska and Dr. Marek Kułażyński are acknowledged for their assistance with the FT-IR analysis.
All data generated or analyzed during this study are included in this published article.
This work was financially supported by the Wroclaw Center of Biotechnology, programme The Leading National Research Center (KNOW) for the years 2014–2018; by the Polish National Centre for Research and Development (NCBiR) as part of the project KompUtyl (BIOSTRATEG2/298357/8/NCBR/2016); and by statutory activity subsidies from the Polish Ministry of Science and Higher Education for the Faculty of Chemistry of Wrocław University of Science and Technology. Part of the analyses was performed on the HPLC system purchased through the project "WroVasc—Integrated Cardiovascular Center", co-financed by the European Regional Development Fund within the Innovative Economy Operational Program 2007–2013, realized in the Regional Specialist Hospital, Research and Development Center in Wroclaw ("European Funds—for the development of innovative economy").
Department of Biotransformation, Faculty of Biotechnology, University of Wrocław, Fryderyka Joliot-Curie 14a, 50-383, Wrocław, Poland: Michał Gaworski, Sławomir Jabłoński & Marcin Łukaszewicz
Department of Organic and Pharmaceutical Technology, Faculty of Chemistry, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370, Wrocław, Poland: Izabela Pawlaczyk-Graja, Rafał Ziewiecki, Anna Wieczyńska & Roman Gancarz
Department of Polymer and Carbonaceous Materials, Faculty of Chemistry, Wrocław University of Science and Technology, Gdańska 7/9, 50-344, Wrocław, Poland: Piotr Rutkowski
Correspondence to Marcin Łukaszewicz.
Lignocellulosic biomass pretreatment
Steam explosion
Liquid hot water extraction
Methane production
The patellofemoral morphology and the normal predicted value of tibial tuberosity-trochlear groove distance in the Chinese population
Zhe Li1,
Guanzhi Liu1,
Run Tian1,
Ning Kong1,
Yue Li1,
Yiyang Li1,
Kunzheng Wang1 &
Pei Yang1
Our objective was to obtain normal patellofemoral measurements to analyse sex and individual differences. In addition, the absolute values and indices of tibial tuberosity-trochlear groove (TT-TG) distances are still controversial in clinical application. A better method to enable precise prediction is still needed.
Seventy-eight knees of 78 participants without knee pathologies were included in this cross-sectional study. A CT scan was conducted for all participants and three-dimensional knee models were constructed using Mimics and SolidWorks software. We measured and analysed 19 parameters including the TT-TG distance and dimensions and shapes of the patella, femur, tibia, and trochlea. LASSO regression was used to predict the normal TT-TG distances.
The dimensional parameters, TT-TG distance, and femoral aspect ratio of the men were significantly larger than those of women (all p values < 0.05). However, after controlling for the bias from age, height, and weight, there were no significant differences in TT-TG distances and anterior-posterior dimensions between the sexes (all p values > 0.05). The Pearson correlation coefficients between the anterior femoral offset and other indexes were consistently below 0.3, indicating no relationship or a weak relationship. Similar results were observed for the sulcus angle and the Wiberg index. Using LASSO regression, we obtained four parameters to predict the TT-TG distance (R2 = 0.5612, p < 0.01) to achieve the optimal accuracy and convenience.
Normative data of patellofemoral morphology were provided for the Chinese population. The anterior-posterior dimensions of the women were thicker than those of men for the same medial-lateral dimensions. More attention should be paid to not only sex differences but also individual differences, especially the anterior condyle and trochlea. In addition, this study provided a new method to predict TT-TG distances accurately.
Although total knee arthroplasty (TKA) has proven to be a successful surgical procedure for alleviating pain and improving function in patients with knee osteoarthritis, patient satisfaction rates after TKA vary between 75 and 89 % [1]. Anterior knee pain is a major reason for dissatisfaction, which may be caused by a variety of abnormalities, including patellofemoral pathologies [2]. An increasing number of researchers have observed mismatches of the patellofemoral joints after TKA. Matz et al. reported that the probabilities of changes in anterior femoral offset, anteroposterior size of the femur and anterior patellar offset after TKA were 40 %, 60 %, and 71 %, respectively, compared with those before TKA [3]. Kalichman et al. suggested that increased trochlear angles were associated with exacerbated functional impairments [4]. Moreover, Jan et al. highlighted that the patellofemoral geometry was of great importance in TKA, but was often overlooked [5]. Thus, increasing attention has been devoted to the modified design of patellofemoral joint prostheses in the field of TKA [6], among which the study of patellofemoral morphology is the basic research focus.
In the last few decades, researchers have reported shape differences of knees between ethnicities and sexes, although there are relatively few studies on the patellofemoral joint. An extensive study by Mahfouz et al. analysed 1000 normal adult knees to identify differences in three-dimensional knee morphology among white American, African American, and East Asian populations, calculating 11 femoral and 9 tibial measurements [7]. Asseln et al. comprehensively analysed 412 pathological knees following TKA using 33 femoral and 21 tibial features to investigate sex differences, and they indicated that large interindividual variations, beyond sex differences, should also be important for specific implant design [8]. Yue et al. investigated the morphologic measurements of the femur and tibia in healthy Chinese and white participants. They found that Chinese women have a narrower distal femur and described the differences in knee anthropometry between sexes and ethnicities [9]. Although most of the literature affirmed sex differences in knee morphology, several studies suggested no sex differences regarding the anterior and posterior condylar regions of the distal femur [10, 11]. Additionally, relatively few studies on sex differences in patellofemoral morphology have been conducted, and most of them analysed the patella but not the anterior and posterior condylar regions of the femur or the entire patellofemoral joint [12,13,14]. Thus, the sexual dimorphism of the entire patellofemoral morphology is still unclear. However, many studies have shown that anatomical differences of the knees, especially the distal femurs, are not only sex differences but also individual differences [8, 15,16,17]. Taken together, studies on the sex differences and individual differences of the entire patellofemoral morphology are still needed.
The tibial tubercle-trochlear groove (TT-TG) distance is a well-established, reliable index for evaluating tibial tubercle lateralization and patellofemoral instability [18, 19]. A TT-TG value of > 15 mm is considered abnormal, and a value of > 20 mm is the threshold for performing tibial tubercle osteotomy [20, 21]. However, this absolute value does not account for the anatomic differences between sexes and ethnicities, and it has been hypothesized that the TT-TG distance is highly correlated with knee size [22,23,24]. An increasing number of researchers have disputed the accuracy of these absolute-value thresholds [25, 26]. Thus, Cao et al. described the application of TT-TG indices (the ratio of the TT-TG distance to the tibial maximal mediolateral axis), although the result still needs further confirmation [27]. Hernigou et al. were the first to predict normal TT-TG distances, in a Belgian population, using the femoral width and tibial width. However, the mediolateral widths of the femur and tibia might not be the best parameters to describe knee size; similar candidate parameters also include the height and weight of the patient [28]. Thus, it is still necessary to identify appropriate parameters to predict the normal TT-TG distance and to establish a reference criterion for Chinese individuals.
Therefore, we hypothesized that (a) most parameters of the entire patellofemoral morphology, including the dimensions of the patellofemoral joints and the shape of the distal femur, exhibit sexual dimorphism, except the anterior and posterior condylar regions of the distal femur; (b) patellofemoral morphology exhibits individual differences; and (c) the TT-TG distance is moderately correlated with knee size. By screening relevant parameters, the TT-TG distance can be predicted by more sophisticated and accurate methods than those proposed in the literature above.
Participant demographics
Seventy-eight participants (38 women) were recruited from the neighbouring communities of the Second Affiliated Hospital of Xi'an Jiaotong University from May 2017 to October 2017. The local ethics committee of the hospital approved the project. Informed consent was obtained from all included participants. Only one knee per participant was studied, and the left or right knee was chosen at random to maintain the independence of the data.
The inclusion criteria were as follows: (1) age ≥ 18 years; (2) height greater than 155 cm and less than 190 cm; (3) only asymptomatic and nonpathological joints were included in this research, which were verified through clinical examination and CT images.
The exclusion criteria were as follows: (1) pregnant women or those who plan to conceive within the next year; (2) history of poliomyelitis, rickets, dwarfism, rheumatoid arthritis, or other diseases that affect lower limbs; (3) a congenital deformity in either lower limb; (4) history of patellar instability, patellar dislocation, or the patellar apprehension test was positive on physical examination; (5) osteoarthritis of either hip or knee confirmed by previous imaging or by current CT scan; (6) fracture caused by injury to femur, tibia, hip, or knee; (7) joint arthroplasty, arthroscopic surgery or other surgery on the hip or knee.
CT Protocol and creation of 3-dimensional knee model
A CT scan of the lower limb was obtained using a helical CT scanner (120 kV, 200 mA, reconstruction thickness 0.6 mm, reconstruction spacing 0.4 mm, GE Revolution CT, General Electric Company, Milwaukee, Wis). All participants were supine and non-weightbearing per the protocol, with all knees extended. All DICOM images were imported and segmented in Mimics software (version 17.0, Materialise Inc., Leuven, Belgium), which exported the 3-dimensional reconstructions of the tibia, femur, and patella. The reconstructed models were then processed with SolidWorks engineering software (version 2017, Dassault Systemes Company, Concord, Massachusetts). Using SolidWorks software, we adjusted the position of the models, obtained the standard radiographic views, constructed appropriate coordinate systems, and finally transformed the 3D models to planar images to complete the measurements accurately. Six standard radiographic views, including lower and lateral views of the patella and femur, axial views simulating 30° knee flexion of the femur, and an upper view of the tibia, were obtained. We defined the lower view of the patella as the view observed from the lower pole of the patella with the central ridge of the patella completely upward. The lateral view of the patella was defined as the view observed from the side of the patella with the central ridge of the patella completely rightward. We defined the lower view of the femur as the view observed below the distal femur with the maximum mediolateral and anteroposterior sizes of the femur obtained. The lateral view of the femur was defined as the view observed from the side of the femur with the medial and lateral condyles overlapping completely. We defined the upper view of the tibia as the view observed above the tibia with the maximum mediolateral and anteroposterior sizes of the tibia obtained. In addition, we adjusted the position of the femur and the angle of our observation to mimic the axial views simulating 30° knee flexion of the femur. The specific operation was as follows: we first placed the femur in the lateral view, rotated the femur along the axis perpendicular to the lateral view so that the long axis of the femur formed an angle of 30° with the vertical line, then established the coordinate system, switched the femur to the lower view, and finally mimicked the axial views of the femur in 30° knee flexion.
Definitions and measurements of parameters
In the lower view of the patella, we measured the following four parameters as described by Muhamed et al. [13] (Fig. 1a):
Patella width (PW): the distance between the tangent line of the medial margin and the tangent line of the lateral margin of the patella.
Patella lateral facet width (PLFW): the distance between the most prominent point of the central ridge and the tangent line of the lateral margin of the patella.
Patella thickness (PT): the distance between the most prominent point of the central ridge and the tangent line of the anterior margin of the patella.
Patella facet thickness (PFT): the distance between the most prominent point of the central ridge and the tangent line of the deepest margin of the patella facet.
a Patella thickness (PT), patella facet thickness (PFT), patella width (PW), patella lateral facet width (PLFW); b longitudinal length of the whole patella (PLL) and longitudinal length of the articulating surface of the patella (PAL)
Additionally, the Wiberg index (PLFW/PW) was calculated [29], and the morphology of the patella was determined by the Wiberg classification as modified by Baumgartl and Ficat [30].
In the lateral view of the patella, we measured the following two parameters as described by Yoo et al. [31] (Fig. 1b):
Longitudinal length of the whole patella (PLL): the distance between the most prominent point in the upper pole and the most prominent point in the lower pole of the patella.
Longitudinal length of the articular surface of the patella (PAL): the distance between the upper margin and the lower margin of the articular surface of the patella.
In the lower view of the femur, we measured the following two parameters as described by Yue et al. [9] (Fig. 2a):
a The mediolateral (fML) and anteroposterior (fAP) sizes of the femur; b the mediolateral (tML) and anteroposterior (tAP) sizes of the tibia
The mediolateral (fML) and anteroposterior (fAP) sizes of the femur: taking the posterior condylar line (PCL, the line along the most posterior margins on each condyle) as a reference, a rectangular bounding box that fitted the distal femur was created. The fML and fAP values were measured using the bounding box. Additionally, the femoral aspect ratio (fML/fAP) was calculated.
In the upper view of the tibia, we measured the following two parameters as described by Mahfouz et al. [7] (Fig. 2b):
The mediolateral width of the tibia (tML): the maximum width of the tibial plateau in the mediolateral direction.
The anteroposterior size of the tibia (tAP): the maximum length of the tibial plateau in the anteroposterior direction, through the midpoint of the intercondylar eminence.
We measured the TT-TG distance as described by Hernigou et al. [28]. The TT-TG distance was defined as the distance between the most anterior point of the tibial tuberosity and the deepest point of the trochlear groove, measured parallel to the PCL and referenced to it (Fig. 3). CT images were reviewed in Mimics software, and the transverse image containing the most anterior point of the tibial tuberosity was first selected. Next, the transverse image of the proximal trochlea at the level of the "roman arch" was selected. The two images were processed and merged using ImageJ software (the National Institutes of Health, Bethesda, MD, USA), and the TT-TG distance was measured in the final merged image. All transverse images were kept perpendicular to the vertical axis of the lower limbs, implemented through the online reslice function of Mimics software, to maintain the accuracy and consistency of the obtained data. The specific operation was as follows: we first determined the femoral mechanical axis as described by previous studies [32, 33] and obtained a new axis by rotating the mechanical axis inwards by 3° on the coronal plane, which was defined as the vertical axis. Then, we resliced the CT images along the vertical axis to obtain the transverse images perpendicular to the vertical axis. Finally, we measured the TT-TG distances accurately using the new transverse images.
TT-TG distance
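To make the projection step concrete, here is a minimal geometric sketch in Python (our illustration, not the authors' code); the landmark coordinates are hypothetical, and the PCL is given by two digitised points:

```python
import numpy as np

def tt_tg_distance(tt, tg, pcl_a, pcl_b):
    """TT-TG distance: component of the groove-to-tuberosity vector along the PCL."""
    pcl_dir = np.asarray(pcl_b, float) - np.asarray(pcl_a, float)
    pcl_dir /= np.linalg.norm(pcl_dir)           # unit vector along the PCL
    offset = np.asarray(tt, float) - np.asarray(tg, float)
    return abs(float(np.dot(offset, pcl_dir)))   # projection onto the PCL

# Hypothetical (x, y) landmarks, in mm, on the merged transverse image:
print(tt_tg_distance(tt=(41.0, 12.0), tg=(27.5, 9.0),
                     pcl_a=(0.0, 0.0), pcl_b=(60.0, 2.0)))  # ~13.6 mm
```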
In the lateral view of the femur, we measured the anterior femoral offset and the posterior femoral offset as described by Matz et al. and Voleti et al., respectively [3, 11] (Fig. 4):
Anterior femoral offset (AFO): the distance between the anterior edge of the femoral cortex and the anterior aspect of the anterior femoral condyle.
Posterior femoral offset (PFO): the distance between the posterior edge of the femoral cortex and the posterior aspect of the posterior femoral condyle.
Anterior femoral offset (AFO) and posterior femoral offset (PFO)
In the axial views simulated at 30° knee flexion of the femur, we measured the following four parameters as described by Stefanik et al. [34] (Fig. 5a-c):
Sulcus angle (SA): the angle between the two lines connecting the highest points of the medial and lateral condyles to the lowest point of the femoral sulcus.
Lateral and medial trochlear inclination (LTI, MTI): the angle between the PCL and the line connecting from the highest points of the lateral and medial condyles to the lowest point of the femoral sulcus, respectively. In addition, the SA, LTI, and MTI add up to 180°.
Trochlear angle (TA): the angle between the PCL and the line passing along the most anterior edge of the medial and lateral trochlear facets.
a Sulcus angle (SA); b lateral trochlear inclination (LTI), medial trochlear inclination (MTI); (c) trochlear angle (TA)
Two authors who had 12 and 8 years of experience with radiography took the measurements and repeated the measurements after 2 weeks.
All statistical analyses were performed using R (version 3.5.1, R Foundation for Statistical Computing, Vienna, Austria), and p values less than 0.05 were considered significant. The intraclass correlation coefficient (ICC) was calculated to determine intrarater and interrater reliability, and both were good to excellent (all ICCs > 0.85, Table 1).
Table 1 Reliability assessment
For normally distributed data, the two-sample Student's t-test was performed to determine the significance of the difference between the sexes; otherwise, the Mann–Whitney U test was used. Next, multiple variable linear regression analysis was used to re-test the significance of the sex difference after controlling for the bias from age, height, and weight. Pearson correlation coefficients were calculated to explore the relationships among all parameters. We defined r < 0.3 as weak correlation, 0.3 < r < 0.8 as moderate correlation, and r > 0.8 as high correlation.
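For illustration, a hedged Python sketch of this test-selection and covariate-adjustment logic (the authors worked in R; the column names "sex", "age", "height", and "weight" and the "F"/"M" coding are our assumptions):

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

def sex_difference_p(df: pd.DataFrame, param: str, alpha: float = 0.05) -> float:
    """t-test if both groups pass Shapiro normality, else Mann-Whitney U."""
    men = df.loc[df["sex"] == "M", param]
    women = df.loc[df["sex"] == "F", param]
    normal = (stats.shapiro(men).pvalue > alpha and
              stats.shapiro(women).pvalue > alpha)
    if normal:
        return stats.ttest_ind(men, women).pvalue
    return stats.mannwhitneyu(men, women, alternative="two-sided").pvalue

def adjusted_sex_p(df: pd.DataFrame, param: str) -> float:
    """Sex effect after controlling for age, height and weight (linear model)."""
    fit = smf.ols(f"{param} ~ C(sex) + age + height + weight", data=df).fit()
    return fit.pvalues["C(sex)[T.M]"]  # assumes sex coded "F"/"M", "F" as reference
```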
Least Absolute Shrinkage and Selection Operator (LASSO) regression models were constructed to predict the normal TT-TG distances. First, a boxplot was generated to identify outliers in the distributions of TT-TG distances in men and women. Among women, the outliers were 15.02 mm and 14.80 mm, both greater than Q3 + 1.5*IQR. Among men, the outliers were 8.36 mm and 6.50 mm, both less than Q1 - 1.5*IQR (Additional file 2). To make the predicted TT-TG distances representative of most individuals, we removed the four outliers. The LASSO regression models were created using the "glmnet" package of R software. We selected the directly measured parameters with at least a moderate correlation coefficient with the TT-TG distance to enter the model for coefficient progression. In addition, sex also entered the initial model. LASSO regression introduces λ as a tuning parameter on the basis of linear regression, which controls the overall strength of the penalty. The greater the penalty is, the fewer parameters are retained in the model. In this way, the independent variables with a strong influence on the dependent variable are selected and a relatively simplified model can be obtained. We used the mean squared error (MSE) as the selection criterion to describe the performance of the model. Tenfold cross-validation was automatically performed to calculate the λ value and MSE for a varying number of independent variables. We used the λ at which the minimal MSE is achieved (lambda.min) and the largest λ at which the MSE is within one standard error of the minimal MSE (lambda.1se) to select the optimal model.
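The authors worked in R with glmnet; the sketch below is a rough Python analogue (an assumption for illustration, not their code) of the same pipeline: IQR-based outlier removal, 10-fold cross-validated LASSO, and a hand-rolled lambda.1se rule. For simplicity it pools the sexes when screening outliers, whereas the paper screened men and women separately, and it assumes the feature columns are already standardised (glmnet standardises internally).

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

def iqr_mask(y):
    # Keep values inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (the boxplot rule above).
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    return (y >= q1 - 1.5 * iqr) & (y <= q3 + 1.5 * iqr)

def fit_lasso_1se(X, y, cv=10):
    keep = iqr_mask(y)
    X, y = X[keep], y[keep]
    cv_model = LassoCV(cv=cv, random_state=0).fit(X, y)
    mse = cv_model.mse_path_.mean(axis=1)              # mean CV error per alpha
    se = cv_model.mse_path_.std(axis=1) / np.sqrt(cv)  # its standard error
    # lambda.1se: the largest penalty whose CV error stays within one
    # standard error of the minimum (glmnet's rule, reproduced by hand).
    alpha_1se = cv_model.alphas_[mse <= mse.min() + se[mse.argmin()]].max()
    return Lasso(alpha=alpha_1se).fit(X, y)            # nonzero coefs = retained parameters
```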
After screening 102 participants, a total of 78 participants meeting the inclusion and exclusion criteria were included in the study. Women were younger (29 ± 5 years vs. 34 ± 3 years, P < 0.01), shorter (165 ± 3 cm vs. 177 ± 5 cm, P < 0.01), and weighed significantly less than men (57 ± 6 kg vs. 70 ± 9 kg, P < 0.01). The means and standard deviations for all parameters, as well as age, height, weight, and BMI, are shown in Table 2. Regarding the Wiberg classification as modified by Baumgartl and Ficat, types I, II and III accounted for 7.5 % (n = 3), 90 % (n = 36) and 2.5 % (n = 1) in men; and for 10.5 % (n = 4), 89.5 % (n = 34) and 0 (n = 0) in women, respectively. No other type was found. This indicates that most of the selected patellas were classified as type I and II, which were considered to be stable [35, 36].
Table 2 Demographic statistics and sex differences (n = 78 knees)
Sex differences of patellofemoral measurements
When the two-sample t-test for independent samples was used for comparison between the sexes, the six dimensions of the patella, the mediolateral and anteroposterior sizes of the femur, the femoral aspect ratio, the mediolateral and anteroposterior sizes of the tibia, the TT-TG distance, the anterior femoral offset, and the posterior femoral offset of men were all significantly larger than those of women (all p values < 0.05). After controlling for the bias from age, height, and weight, there were no significant differences in TT-TG distances, longitudinal length of the articular surface of the patella, or anterior-posterior dimensions including patella thickness, patella facet thickness, anteroposterior size of the femur, and posterior femoral offset between the sexes (all p values > 0.05). Additionally, the other dimensions and the femoral aspect ratio of men were still significantly larger than those of women (all p values < 0.05). No significant differences between the sexes were identified for the Wiberg index and the angles (all p values > 0.05). (Table 2)
Pearson correlation coefficient analysis
As shown in Fig. 6, the height, weight, dimensions of the patella, dimensions of the femur, dimensions of the tibia, and TT-TG distances were moderately to highly positively correlated with each other (r: 0.32 ~ 0.96, all p values < 0.05). In addition, the angles exhibited no or weak correlation with the dimensional parameters (r < 0.3), except that the sulcus angle was moderately correlated with patella lateral facet width (r = 0.33, p < 0.05). A moderate correlation was found among the angles (r: -0.49 ~ -0.73, all p values < 0.05), except that there was no correlation between lateral and medial trochlear inclination (r = 0.03, p = 0.79) or between the sulcus angle and trochlear angle (r = -0.07, p = 0.57).
The Pearson correlation coefficients of all variables
Interestingly, the Pearson correlation coefficients between the anterior femoral offset and other parameters were consistently below 0.3, indicating no or weak relationship between anterior femoral offset and all other parameters. Similar results were observed for the sulcus angle and the Wiberg index.
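For readers who want to reproduce this kind of screen, here is a minimal sketch (Python rather than the authors' R code, with hypothetical column names) that applies the weak/moderate/high thresholds defined in the statistical-analysis section to a full correlation matrix:

```python
import pandas as pd

def correlation_strength(r: float) -> str:
    """Map a Pearson r onto the bands used in this study."""
    r = abs(r)
    if r < 0.3:
        return "none/weak"
    return "moderate" if r < 0.8 else "high"

def classified_corr(df: pd.DataFrame) -> pd.DataFrame:
    # Pearson matrix of all measured parameters, mapped onto the bands above.
    return df.corr(method="pearson").applymap(correlation_strength)
```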
LASSO regression to predict normal TT-TG distance
A LASSO regression model was constructed to predict the normal TT-TG distances. Thirteen parameters were entered into the model. Coefficient progression is shown in Fig. 7. Taking lambda.min as a reference, 11 parameters were included and only PW and PLL were excluded (R2 = 0.7052, p < 0.01), which was inconvenient to calculate. Thus, we took lambda.1se as a reference, and height, fML, tML, and tAP were included in the final model (R2 = 0.5612, p < 0.01). The formula is defined as:
$$\mathrm{TT\text{-}TG\ distance}=\mathrm{height}\times 0.029+\mathrm{fML}\times 0.069+\mathrm{tML}\times 0.005+\mathrm{tAP}\times 0.010+2.307$$
The height is expressed in cm, while fML, tML, and tAP are expressed in mm. In this study, only data with heights ranging from 160 to 185 cm, fML ranging from 67.51 to 94.36 mm, tML ranging from 64.37 to 87.73 mm, and tAP ranging from 44.27 to 63.15 mm were included in the LASSO regression model. Thus, this formula might not be applicable to populations beyond those ranges. Sex was excluded from this model, which indicated that sex had little influence on the predictive performance of the model. Taken together, while maintaining high model quality, we reduced the parameters to the minimum and established a formula that is convenient to calculate.
Coefficient progression with LASSO. a As the parameters shrink, the mean-squared error flattens at first and increases rapidly after four parameters are retained. b The change in the coefficient of each parameter as the parameters shrink. The first dotted line indicates the λ at which the minimal mean squared error (MSE) is achieved (lambda.min), used to select the optimal model. The second dotted line indicates the largest λ at which the MSE is within one standard error of the minimal MSE (lambda.1se), used to select the optimal model. The latter was selected as the final model in this study
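The fitted formula can be transcribed directly into a small helper that refuses inputs outside the fitted ranges quoted above (a convenience sketch only, not a clinically validated tool):

```python
def predict_tt_tg(height_cm, fml_mm, tml_mm, tap_mm):
    """Predicted normal TT-TG distance (mm) from the four-parameter model."""
    ranges = {"height": (160.0, 185.0), "fML": (67.51, 94.36),
              "tML": (64.37, 87.73), "tAP": (44.27, 63.15)}
    values = {"height": height_cm, "fML": fml_mm, "tML": tml_mm, "tAP": tap_mm}
    for name, (lo, hi) in ranges.items():
        if not lo <= values[name] <= hi:
            raise ValueError(f"{name}={values[name]} is outside the fitted range {lo}-{hi}")
    return 0.029 * height_cm + 0.069 * fml_mm + 0.005 * tml_mm + 0.010 * tap_mm + 2.307

print(predict_tt_tg(170, 80.0, 75.0, 52.0))  # hypothetical knee -> ~13.7 mm
```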
In the past few decades, the development of TKA has been considerable, but there is a high rate of dissatisfaction, often due to patellofemoral pathologies. In addition to surgical techniques, inappropriate prosthetic design often results in mismatches of the patellofemoral joints [37, 38]. In this study, we performed comprehensive measurements of the patellofemoral joint as a whole and found dimensional and shape differences between the sexes. Next, exploratory analysis showed that the anterior femoral offset, sulcus angle, and Wiberg index all varied greatly among individuals. Finally, we found that the TT-TG distances were moderately correlated with the height, weight, and dimensional parameters. Applying the LASSO regression model, we used four parameters, namely height, fML, tML, and tAP, to predict the normal values of the TT-TG distances with the best accuracy and convenience.
Many studies have reported measurements of patella thickness. Comparisons of patellar thickness with values measured in other studies are shown in Table 3. The results indicated that the patella thickness of Chinese individuals tended to be smaller than that of white individuals, comparable to that of Koreans, and greater than that of Indians [13, 14, 31, 39,40,41]. This finding is consistent with the results of previous studies [12, 13, 31]. The re-establishment of original thickness and adequate residual bone thickness is considered a key surgical guideline in TKA [42]. However, due to the mismatch of the patellar implants, surgeons have had to choose between re-establishing the original thickness and preserving adequate residual bone thickness. By choosing the former, the low residual bone thickness is likely to cause fracture and instability; by choosing the latter, the increased thickness of the patella causes overstuffing of the patellofemoral joint and leads to anterior knee pain [43, 44]. Although several studies have shown that adverse clinical outcomes were unlikely to occur if the overall and residual bone thickness of the patella was maintained in a reasonable range (postoperative thickness within 3 mm of the original thickness of the patella, and residual thickness between 10 and 15 mm), the changes in the patella might affect the patellofemoral contact pressures, thus leading to complications of the patellofemoral joint [12]. Therefore, patellar prostheses with more available choices should be designed according to patellar characteristics in the Chinese population.
Table 3 Comparison of patella thickness (mm) with the data from the literature
This study revealed several sex differences. The results showed that the dimensional parameters of men were generally larger than those of women, which was consistent with previous studies [7, 14]. In terms of shape, after controlling for the bias from age, height, and weight, there were no significant differences in anterior-posterior dimensions, including patella thickness, patella facet thickness, the anteroposterior size of the femur, and posterior femoral offset, between the sexes, while the other dimensions and the femoral aspect ratio of men were still significantly larger than those of women. This indicated that the patella and femur of women were thicker than those of men in the anterior-posterior direction for the same medial-lateral dimensions, which was consistent with the relatively small femoral aspect ratio in women. Therefore, the shape of the distal femur of men was "flatter" than that of women, while women had a "narrower" distal femur than men. These results were comparable to those reported in previous studies [7, 9, 15, 16]. We found that the Wiberg index and the shape of the trochlea exhibited no sexual dimorphism. Gillespie et al. reported that no significant difference between the sexes was found in the medial and lateral flanges, which was similar to our results [16]. Based on these features, sex-specific prostheses should be designed in consideration of sex characteristics. However, an increasing number of studies have focused on not only sex differences but also individual differences [8, 15]. Taking this issue into account, we explored the correlation coefficients between all parameters.
This study found that the anterior femoral offset, sulcus angle, and Wiberg index, as the primary descriptors of patellofemoral shape and thickness, all exhibited no or weak relationships with the other parameters, which indicated that these three parameters varied greatly regardless of the sizes and shapes of the knees. Further analysis indicated that these three parameters varied greatly among individuals, which might need to be considered in the design of joint prostheses. To avoid overstuffing and notching of the patellofemoral compartment, the AFO should be treated appropriately. Matz et al. reported that the probability of changes in AFO after TKA was 40 % compared with that before TKA [3]. Although some previous studies showed no significant differences between AFO restoration and clinical outcomes, there was a trend towards improved outcomes [3, 45]. Other studies showed that if the AFO increased after TKA and there was a risk of overstuffing due to the mismatch of the prosthesis, the pressure of the patellofemoral joint would increase, and then there would be complications such as anterior knee pain and decreased knee motion [46]. Taking these issues into account, an increasing number of studies have analysed the shape and variance of the distal femur. Lonner et al. and Gillespie et al. indicated that the overall variability of the anatomy of the distal femur, rather than sex differences alone, should be taken into account [16, 17]. Regarding individual differences, Everhart et al. proposed a binary classification system to describe the shape of the distal femur, in which five binary categories were selected based on the aspect ratio, trochlear width, trochlear tilt, the ratio of medial and lateral trochlear width, and trochlear groove angle [47]. In addition, Varadarajan et al. reported that the laterally oriented proximal part and medially oriented distal part formed the intact trochlear groove, and there was a turning point distinguishing these two parts [48]. Moreover, Chen et al. proposed a quaternary system based on the position of the turning point [49]. Due to the great individual variance of the distal femur, more attention should be focused on different shapes of the femoral components, and prosthetic implants with greater variety in the sizes and shapes of the anterior femoral condyles need to be designed.
The TT-TG distance had a significant positive correlation with the tubercle sulcus angle (TSA) and Q-angle and was considered to be objective and reliable in the quantification of extensor mechanism malalignment and patellar instability [25, 28]. In previous studies, the measurement of the TT-TG distance was mainly used in image overlapping technology based on CT and MRI. However, several studies have reported the inaccuracy of the current measurement [25, 50], and we found that mild adduction or abduction of the lower extremities resulted in a greater change in this value. In this study, we took this issue into account, and used the online reslice of the Mimics software to standardize the selection of images, so that the collected transverse picture was as perpendicular to the vertical axis of the lower limb as possible, which greatly ensured the accuracy of measurement.
This study reported the average CT-based TT-TG distance to be 13.62 ± 1.76 mm. The average TT-TG distance from the research of Hernigou et al. was 13 mm, which was measured based on CT data and was similar to our results [28]. Tse et al. showed by MRI that the average TT-TG distance was 10.1 mm in Chinese individuals [51]. In a study conducted in New Zealand, Pandit et al. reported the average MRI-based values to be 9.91 mm for men and 10.04 mm for women [21]. Hinckel et al. reported that the MRI-based TT-TG distance was 3.1–3.6 mm smaller than the CT-based TT-TG distance, which explained the inconsistency of the above results [52]. At present, an increasing number of studies have recognized the limitation of the absolute threshold of the TT-TG distance. Although 20 mm was the main diagnostic threshold for surgical application, there are some disputes about its value. Franciozi et al. reported that tibial tubercle osteotomy combined with medial patellofemoral ligament reconstruction (MPFLR) resulted in better outcomes than MPFLR alone in the treatment of recurrent patellar instabilities in patients with a TT-TG distance of 17 to 20 mm [26]. Graf et al. reported the inaccuracy of surgical intervention and demonstrated the need for combining the TSA and TT-TG distances to avoid overcorrection during medial tibial tubercle osteotomy [25]. Our results reported that the TT-TG distance had a positive correlation with height and knee size, which was comparable to other studies [28, 53, 54]. Moreover, several studies have described that the application of TT-TG indices (the ratio of the TT-TG distance to the tibial maximal mediolateral axis) obtained more reliable and standardized results, but the results needed to be further confirmed [27, 54]. Hernigou et al. used fML and tML to establish normal TT-TG distances in Belgium. However, they also raised doubts about whether the two parameters were applied as the best predictors [28]. Taking these questions into account, the present study applied the LASSO regression model to analyse the best predictors of normal TT-TG distances. LASSO regression is a machine learning method that can shrink the coefficients of variables that do not contribute information to the model to zero and is well suited to feature selection for high-dimensional data [55]. Using this method, we obtained four parameters to predict the normal TT-TG distance, namely, height, fML, tML and tAP, to achieve the best accuracy and convenience. The prediction formula obtained by us might provide a more accurate reference for the clinical determination of patellar instability, rather than the absolute values or TT-TG indices, which needs further study to validate the results. Additionally, as Hernigou et al. described, they predicted the restored location of the tibial tuberosity using the mediolateral distances of the femur and the tibia when performing medial transfer of the tibial tuberosity. The formula could play a guiding role in the accurate restored localization of the tibial tubercle during tibial tubercle osteotomy, but this needs to be validated by further research.
Limitations of the present study include the relatively small sample size. We are continuing to recruit more participants to increase the validity of the anatomical data. Another limitation of the present study was that the formula for predicting TT-TG distance has not been clinically verified, and more studies on the clinical effectiveness of the formula need to be performed.
Normative data of patellofemoral morphology were provided for the Chinese population. In summary, the dimensional indexes of men were generally larger than those of women. In terms of shape, the patella and femur of women were thicker than those of men in the anterior-posterior direction for the same medial-lateral dimensions. Moreover, the anterior femoral offset, sulcus angle, and Wiberg index all varied greatly among individuals. More attention should be devoted to not only sex differences but also individual differences. In addition, using LASSO regression, we obtained four parameters to predict normal TT-TG distances, namely, height, mediolateral size of the femur, and mediolateral and anteroposterior sizes of the tibia, to achieve the best accuracy and convenience. This study provided a reference for prosthetic design and a new method to predict TT-TG distances accurately.
The dataset supporting the conclusions of this article is included within the article and its additional file (Additional file 1).
AFO: Anterior femoral offset
CT: Computed tomography
fAP: Anteroposterior size of the femur
fML: Mediolateral size of the femur
ICC: Intraclass correlation coefficient
lambda.min: The λ at which the minimal MSE is achieved
lambda.1se: The largest λ at which the MSE is within one standard error of the minimal MSE
LASSO: Least Absolute Shrinkage and Selection Operator
LTI: Lateral trochlear inclination
MSE: Mean squared error
MTI: Medial trochlear inclination
PAL: Longitudinal length of the articulating surface of the patella
PCL: Posterior condylar line
PFO: Posterior femoral offset
PFT: Patella facet thickness
PLFW: Patella lateral facet width
PLL: Longitudinal length of the whole patella
PT: Patella thickness
PW: Patella width
SA: Sulcus angle
TA: Trochlear angle
tAP: Anteroposterior size of the tibia
TKA: Total knee arthroplasty
tML: Mediolateral size of the tibia
TSA: Tubercle sulcus angle
TT-TG distance: The tibial tubercle-trochlear groove distance
Petersen W, Rembitzki IV, Bruggemann GP, Ellermann A, Best R, Koppenburg AG, et al. Anterior knee pain after total knee arthroplasty: a narrative review. Int Orthop. 2014;38(2):319–28.
Thomas S, Rupiper D, Stacy GS. Imaging of the patellofemoral joint. Clin Sports Med. 2014;33(3):413–36.
Matz J, Howard JL, Morden DJ, MacDonald SJ, Teeter MG, Lanting BA. Do Changes in Patellofemoral Joint Offset Lead to Adverse Outcomes in Total Knee Arthroplasty With Patellar Resurfacing? A Radiographic Review. J Arthroplasty. 2017;32(3):783–7.
Kalichman L, Zhu Y, Zhang Y, Niu J, Gale D, Felson DT, et al. The association between patella alignment and knee pain and function: an MRI study in persons with symptomatic knee osteoarthritis. Osteoarthritis Cartilage. 2007;15(11):1235–40.
Jan N, Fontaine C, Migaud H, Pasquier G, Valluy J, Saffarini M, et al. Patellofemoral design enhancements reduce long-term complications of postero-stabilized total knee arthroplasty. Knee Surg Sports Traumatol Arthrosc. 2019;27(4):1241–50.
Dubin JA, Muskat A, Westrich GH. Design Modifications of the Posterior-Stabilized Knee System May Reduce Anterior Knee Pain and Complications Following Total Knee Replacement. HSS J. 2020;16(Suppl 2):344–8.
Mahfouz M, Abdel Fatah EE, Bowers LS, Scuderi G. Three-dimensional morphology of the knee reveals ethnic differences. Clin Orthop Relat Res. 2012;470(1):172–85.
Asseln M, Hänisch C, Schick F, Radermacher K. Gender differences in knee morphology and the prospects for implant design in total knee replacement. Knee. 2018;25(4):545–58.
Yue B, Varadarajan KM, Ai S, Tang T, Rubash HE, Li G. Differences of knee anthropometry between Chinese and white men and women. J Arthroplasty. 2011;26(1):124–30.
Fehring TK, Odum SM, Hughes J, Springer BD, Beaver WB. Jr. Differences between the sexes in the anatomy of the anterior condyle of the knee. J Bone Joint Surg Am. 2009;91(10):2335–41.
Voleti PB, Stephenson JW, Lotke PA, Lee GC. No sex differences exist in posterior condylar offsets of the knee. Clin Orthop Relat Res. 2015;473(4):1425–31.
Kim TK, Chung BJ, Kang YG, Chang CB, Seong SC. Clinical implications of anthropometric patellar dimensions for TKA in Asians. Clin Orthop Relat Res. 2009;467(4):1007–14.
Muhamed R, Saralaya VV, Murlimanju BV, Chettiar GK. In vivo magnetic resonance imaging morphometry of the patella bone in South Indian population. Anat Cell Biol. 2017;50(2):99–103.
Rooney N, Fitzpatrick DP, Beverland DE. Intraoperative knee anthropometrics: correlation with cartilage wear. Proc Inst Mech Eng H. 2006;220(6):671–5.
Bellemans J, Carpentier K, Vandenneucker H, Vanlauwe J, Victor J. The John Insall Award: Both morphotype and gender influence the shape of the knee in patients undergoing TKA. Clin Orthop Relat Res. 2010;468(1):29–36.
Gillespie RJ, Levine A, Fitzgerald SJ, Kolaczko J, DeMaio M, Marcus RE, et al. Gender differences in the anatomy of the distal femur. J Bone Joint Surg Br. 2011;93(3):357–63.
Lonner JH, Jasko JG, Thomas BS. Anthropomorphic differences between the distal femora of men and women. Clin Orthop Relat Res. 2008;466(11):2724–9.
Tjoumakaris FP, Forsythe B, Bradley JP. Patellofemoral instability in athletes: treatment via modified Fulkerson osteotomy and lateral release. Am J Sports Med. 2010;38(5):992–9.
Hochreiter B, Hirschmann MT, Amsler F, Behrend H. Highly variable tibial tubercle-trochlear groove distance (TT-TG) in osteoarthritic knees should be considered when performing TKA. Knee Surg Sports Traumatol Arthrosc. 2019;27(5):1403–9.
Diederichs G, Issever AS, Scheffler S. MR imaging of patellar instability: injury patterns and assessment of risk factors. Radiographics. 2010;30(4):961–81.
Pandit S, Frampton C, Stoddart J, Lynskey T. Magnetic resonance imaging assessment of tibial tuberosity-trochlear groove distance: normal values for males and females. Int Orthop. 2011;35(12):1799–803.
Dornacher D, Reichel H, Kappe T. Does tibial tuberosity-trochlear groove distance (TT-TG) correlate with knee size or body height? Knee Surg Sports Traumatol Arthrosc. 2016;24(9):2861–7.
Pennock AT, Alam M, Bastrom T. Variation in tibial tubercle-trochlear groove measurement as a function of age, sex, size, and patellar instability. Am J Sports Med. 2014;42(2):389–93.
Hingelbaum S, Best R, Huth J, Wagner D, Bauer G, Mauch F. The TT-TG Index: a new knee size adjusted measure method to determine the TT-TG distance. Knee Surg Sports Traumatol Arthrosc. 2014;22(10):2388–95.
Graf KH, Tompkins MA, Agel J, Arendt EA. Q-vector measurements: physical examination versus magnetic resonance imaging measurements and their relationship with tibial tubercle-trochlear groove distance. Knee Surg Sports Traumatol Arthrosc. 2018;26(3):697–704.
Franciozi CE, Ambra LF, Albertoni LJB, Debieux P, Granata GSM Jr, Kubota MS, et al. Anteromedial Tibial Tubercle Osteotomy Improves Results of Medial Patellofemoral Ligament Reconstruction for Recurrent Patellar Instability in Patients With Tibial Tuberosity-Trochlear Groove Distance of 17 to 20 mm. Arthroscopy. 2019;35(2):566–74.
Cao P, Niu Y, Liu C, Wang X, Duan G, Mu Q, et al. Ratio of the tibial tuberosity-trochlear groove distance to the tibial maximal mediolateral axis: A more reliable and standardized way to measure the tibial tuberosity-trochlear groove distance. Knee. 2018;25(1):59–65.
Hernigou J, Chahidi E, Bouaboula M, Moest E, Callewier A, Kyriakydis T, et al. Knee size chart nomogram for evaluation of tibial tuberosity-trochlear groove distance in knees with or without history of patellofemoral instability. Int Orthop. 2018;42(12):2797–806.
Fick CN, Grant C, Sheehan FT. Patellofemoral Pain in Adolescents: Understanding Patellofemoral Morphology and Its Relationship to Maltracking. Am J Sports Med. 2020;48(2):341–50.
Tigchelaar S, Rooy J, Hannink G, Koëter S, van Kampen A, Bongers E. Radiological characteristics of the knee joint in nail patella syndrome. Bone Joint J. 2016;98-b(4):483–9.
Yoo JH, Yi SR, Kim JH. The geometry of patella and patellar tendon measured on knee MRI. Surg Radiol Anat. 2007;29(8):623–8.
Zhang YZ, Lu S, Zhang HQ, Jin ZM, Zhao JM, Huang J, et al. Alignment of the lower extremity mechanical axis by computer-aided design and application in total knee arthroplasty. Int J Comput Assist Radiol Surg. 2016;11(10):1881–90.
Cooke TD, Sled EA, Scudamore RA. Frontal plane knee alignment: a call for standardized measurement. J Rheumatol. 2007;34(9):1796–801.
Stefanik JJ, Zumwalt AC, Segal NA, Lynch JA, Powers CM. Association between measures of patella height, morphologic features of the trochlea, and patellofemoral joint alignment: the MOST study. Clin Orthop Relat Res. 2013;471(8):2641–8.
Insall JN, Scott WN. Surgery of the Knee (Third Edition). Philadelphia: Health Science Asia, Elsevier Science. 2001;13–16.
Reider B, Marshall JI. The anterior aspect of the knee joint, an anatomy study. J Bone Joint Surg Am. 1981;63:351.
Dejour D, Ntagiopoulos PG, Saffarini M. Evidence of trochlear dysplasia in femoral component designs. Knee Surg Sports Traumatol Arthrosc. 2014;22(11):2599–607.
Huang CH, Hsu LI, Chang TK, Chuang TY, Shih SL, Lu YC, et al. Stress distribution of the patellofemoral joint in the anatomic V-shape and curved dome-shape femoral component: a comparison of resurfaced and unresurfaced patellae. Knee Surg Sports Traumatol Arthrosc. 2017;25(1):263–71.
Baldwin JL, House CK. Anatomic dimensions of the patella measured during total knee arthroplasty. J Arthroplasty. 2005;20(2):250–7.
Chmell MJ, McManus J, Scott RD. Thickness of the patella in men and women with osteoarthritis. Knee. 1995;2(4):239–41.
Hitt K, Shurman JR 2nd, Greene K, McCarthy J, Moskal J, Hoeman T, et al. Anthropometric measurements of the human knee: correlation to the sizing of current knee arthroplasty systems. J Bone Joint Surg Am. 2003;85-A(Suppl 4):115–22.
Hamilton WG, Ammeen DJ, Parks NL, Goyal N, Engh GA, Engh CA. Jr. Patellar Cut and Composite Thickness: The Influence on Postoperative Motion and Complications in Total Knee Arthroplasty. J Arthroplasty. 2017;32(6):1803–7.
Alcerro JC, Rossi MD, Lavernia CJ. Primary Total Knee Arthroplasty: How Does Residual Patellar Thickness Affect Patient-Oriented Outcomes? J Arthroplasty. 2017;32(12):3621–5.
Slevin O, Schmid FA, Schiapparelli F, Rasch H, Hirschmann MT. Increased in vivo patellofemoral loading after total knee arthroplasty in resurfaced patellae. Knee Surg Sports Traumatol Arthrosc. 2018;26(6):1805–10.
Stryker LS, Odum SM, Springer BD, Fehring TK. Role of Patellofemoral Offset in Total Knee Arthroplasty: A Randomized Trial. Orthop Clin North Am. 2017;48(1):1–7.
Glogaza A, Schroder C, Woiczinski M, Muller P, Jansson V, Steinbruck A. Medial stabilized and posterior stabilized TKA affect patellofemoral kinematics and retropatellar pressure distribution differently. Knee Surg Sports Traumatol Arthrosc. 2018;26(6):1743–50.
Everhart JS, Chaudhari AM, Flanigan DC. Creation of a simple distal femur morphology classification system. J Orthop Res. 2016;34(6):924–31.
Varadarajan KM, Gill TJ, Freiberg AA, Rubash HE, Li G. Gender differences in trochlear groove orientation and rotational kinematics of human knees. J Orthop Res. 2009;27(7):871–8.
Chen S, Du Z, Yan M, Yue B, Wang Y. Morphological classification of the femoral trochlear groove based on a quantitative measurement of computed tomographic models. Knee Surg Sports Traumatol Arthrosc. 2017;25(10):3163–70.
Arendt EA. Editorial Commentary: Reducing the Tibial Tuberosity-Trochlear Groove Distance in Patella Stabilization Procedure. Too Much of a (Good). Thing? Arthroscopy. 2018;34(8):2427–8.
Tse MS, Lie CW, Pan NY, Chan CH, Chow HL, Chan WL. Tibial tuberosity-trochlear groove distance in Chinese patients with or without recurrent patellar dislocation. J Orthop Surg (Hong Kong). 2015;23(2):180–1.
Hinckel BB, Gobbi RG, Filho EN, Pécora JR, Camanho GL, Rodrigues MB, et al. Are the osseous and tendinous-cartilaginous tibial tuberosity-trochlear groove distances the same on CT and MRI? Skeletal Radiol. 2015;44(8):1085–93.
Dickschas J, Harrer J, Bayer T, Schwitulla J, Strecker W. Correlation of the tibial tuberosity-trochlear groove distance with the Q-angle. Knee Surg Sports Traumatol Arthrosc. 2016;24(3):915–20.
Ferlic PW, Runer A, Dirisamer F, Balcarek P, Giesinger J, Biedermann R, et al. The use of tibial tuberosity-trochlear groove indices based on joint size in lower limb evaluation. Int Orthop. 2018;42(5):995–1000.
Odgers DJ, Tellis N, Hall H, Dumontier M. Using LASSO Regression to Predict Rheumatoid Arthritis Treatment Efficacy. AMIA Jt Summits Transl Sci Proc. 2016;2016:176–83.
We especially thank Chunli Zhengda Medical Equipment Co., Ltd. for their technical and financial support.
This study was funded by the National Natural Science Foundation of China (No. 81672173). The funding played a role in enrolling the participants and paying for the CT scans.
Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Shaanxi, 710004, Xi'an, People's Republic of China
Zhe Li, Guanzhi Liu, Run Tian, Ning Kong, Yue Li, Yiyang Li, Kunzheng Wang & Pei Yang
All authors contributed to the study conception and design, especially KZW, PY, and ZL. Material preparation was performed by NK, YL, and YYL. Data collection was performed by RT and PY. Data analysis was performed by ZL and GZL. The first draft of the manuscript was written by ZL, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Correspondence to Pei Yang.
The ethics committee of the Second Affiliated Hospital of Xi'an Jiaotong University approved the project. Written informed consent was obtained from all included participants.
Li, Z., Liu, G., Tian, R. et al. The patellofemoral morphology and the normal predicted value of tibial tuberosity-trochlear groove distance in the Chinese population. BMC Musculoskelet Disord 22, 575 (2021). https://doi.org/10.1186/s12891-021-04454-8
Patellofemoral joint
Tibial tuberosity-trochlear groove distance
Knee morphology
LASSO regression
International Journal of Interdisciplinary Research
Wearable capacitive pressure sensor using interdigitated capacitor printed on fabric
TranThuyNga Truong1,
Ji-Seon Kim1,
Eunji Yeun1 &
Jooyong Kim ORCID: orcid.org/0000-0003-1969-57302
Fashion and Textiles volume 9, Article number: 46 (2022)
This paper presents a systematic approach to electro-textile pressure sensors based on interdigitated capacitors (IDCs) printed on fabric. In this study, we propose a highly sensitive, broad-range pressure sensor based on the combination of porous Ecoflex, carbon nanotubes (CNTs), and interdigitated electrodes. Firstly, the interdigitated capacitors printed with silver ink on Cotton and Polyester fabric were characterized with a precision LCR meter across the frequency range from 1 to 300 kHz, including the effect of the fabric on sensor sensitivity. Secondly, the volume fraction of CNTs and the air gaps were estimated and optimized with respect to the properties of the composites. The CNT filler enhanced the bond strength of the composites and improved sensor deformability. The robustness of the presented sensor was demonstrated by testing under high pressure at 400 kPa for more than 20,000 cycles. Thirdly, the combination of CNTs and a porous dielectric achieved a broad detection range (400 kPa) with a sensitivity ranging from 0.035 \({\mathrm{kPa}}^{-1}\) (at 400 kPa) to 0.15 \({\mathrm{kPa}}^{-1}\) (at 50 kPa). Finally, the comparison between Cotton and Polyester substrates demonstrates that the choice of dielectric substrate affects sensor sensitivity and signal output.
Nowadays, wearable sensors, particularly textile sensors, have become an active research area and have attracted significant interest from researchers. Among these sensors, the exceptional properties of pressure sensors make them a promising component of the next generation of flexible electronics. They have been developed for commercial purposes as well as for scientific fields such as healthcare monitoring, aeronautics, and robotics (Castano & Flatau, 2014; Huang et al., 2019; Seyedin et al., 2019). In addition, they can be attached to the skin or to clothing to monitor physiological signals or external pressure under continuous working conditions without disrupting or limiting the individual's day-to-day activities. Many efforts have been made to develop flexible pressure sensors. There are numerous approaches for measuring pressure using piezocapacitive, piezoelectric, triboelectric, and piezoresistive effects. Among them, capacitive pressure sensors based on a parallel plate capacitor are widely used due to their lower power consumption, faster response times, and simple structure. In theory, the capacitance of a parallel plate capacitor is given by Formula (1):
$$C=\frac{{\varepsilon }_{r}{\varepsilon }_{0}A}{d}$$
where \({\varepsilon }_{r}\) represents the dielectric constant of the material, \({\varepsilon }_{0}\) is the vacuum permittivity, A is the effective area of the upper and lower plates, and d is the thickness of, or spacing between, the two electrodes. By changing \({\varepsilon }_{r}\), A, or d, capacitive sensors can be divided into three types: variable dielectric, variable area (Guo et al., 2019; Wan et al., 2017), and variable spacing distance (Mahata et al., 2020; Ruth et al., 2020). In the latter approach, the thickness of the dielectric layer changes under external force, leading to a variation in the capacitance of the sensor. Because of the dependence on the parameters A and d in Formula (1), changing the area or thickness affects the pressure sensitivity ("One-Rupee Ultrasensitive Wearable Flexible Low-Pressure Sensor | ACS Omega" n.d.). Therefore, the sensitivity that this method can achieve is typically very low (Zang et al., 2015). Most methodologies for flexible sensors have focused on improving the sensitivity and flexibility of capacitive pressure sensors. These depend on the dielectric layer's deformability ("Flexible Capacitive Pressure Sensor Enhanced by Tilted Micropillar Arrays | ACS Applied Materials & Interfaces" n.d.; Ruth & Bao, 2020; Wang et al., 2020; Xiong et al., 2020) or on increasing the effective area and thickness ("One-Rupee Ultrasensitive Wearable Flexible Low-Pressure Sensor | ACS Omega" n.d.). However, these methods have slow recovery times, high cost, and complicated fabrication of the microstructure. In addition, a high density of porosity in the dielectric layer may create noise and affect the stability and durability of the sensors.
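As a quick numeric illustration of Formula (1) and of the three sensing modes just listed (all values hypothetical):

```python
EPS0 = 8.85e-12  # vacuum permittivity, F/m

def parallel_plate_c(eps_r, area_m2, gap_m):
    return eps_r * EPS0 * area_m2 / gap_m  # Formula (1)

base = parallel_plate_c(3.0, 1e-4, 1e-3)           # 10 mm x 10 mm plates, 1 mm gap -> ~2.7 pF
print(parallel_plate_c(3.3, 1e-4, 1e-3) / base)    # variable dielectric: +10% eps_r -> +10% C
print(parallel_plate_c(3.0, 1.1e-4, 1e-3) / base)  # variable area: +10% A -> +10% C
print(parallel_plate_c(3.0, 1e-4, 0.9e-3) / base)  # variable spacing: gap -10% -> C up ~11%
```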
In this work, we propose the design and implementation of a textile pressure sensor based on the calculation of interdigital capacitance. We focused only on modification of the dielectric layer to improve sensitivity. This method uses only one electrode layer, so the sensor is not affected by the spacing of the dielectric layer but instead detects a variable capacitance arising from changes in the relative permittivity of the porous polymer layer under compression. The electrode was fabricated using silver paste printed on cotton fabric, and it resembles a comb with multiple interdigitated fingers. The efforts to increase the sensitivity in this paper are grouped into two major studies: the dielectric change of the elastic layer and the generation of microparticles inside the dielectric layer. Finally, the experimental results showed that the proposed sensors can be advantageous in size, sensitivity, cost, durability, and power consumption, and will have many potential applications in the next generation of wearable electronics.
The rest of the paper is organized as follows. Firstly, the experiment section introduces the novel pressure-sensing technique based on changing the relative permittivity. Secondly, "fabrication" illustrates the fabrication process of the proposed capacitive pressure sensor. Next, "measurement results and discussion" compares the characteristics of the Polyester and Cotton substrates, including the effects on sensitivity, cost, and durability. Finally, conclusions are drawn in the last section.
Conductive tracks and principle of the transducer
The interdigitated capacitor uses lumped circuit elements and is known as a multi-finger periodic structure. Unlike the parallel plate capacitor, the interdigitated capacitor requires only one side to detect variations of the material under test (MUT). This design has a higher quality factor than a parallel plate capacitor (Aparicio & Hajimiri, 2002). The interdigitated sensor has the same principle of operation as two parallel plate capacitors. In this structure, the capacitance occurs across the narrow gaps between fingers. When the gaps decrease, the capacitance increases accordingly. The shape of the sensor is described by the parameters shown in Fig. 1.
a Top view and b cross-section view of the integrated capacitor
We chose the gap between fingers (G) and the space at the ends of the fingers to be the same. The capacitor design with eight fingers shown in Fig. 1 has the parameters given in Table 1. Because of its potential for high conductivity and low cost, the structure of the presented sensor uses silver ink as the conductive material. This technique can achieve high geometrical precision with a resolution below 100 µm. The characteristics and curing conditions of the DM-SIP-2001 silver paste applied in this study are shown in Table 2.
Table 1 Final structural dimensions of the proposed sensor
Table 2 Characteristics of silver ink printed on fabric
When the MUT (material under test) is placed onto the interdigital capacitor electrodes, the capacitance across the finger electrodes changes with frequency and with the dielectric variations. In this way, the presented sensor transforms dielectric changes of the MUT into a pressure reading. The capacitance of the interdigital microstrip structure is determined by summing the unit cell capacitances (Fig. 1b). Each unit cell is calculated as in Formula (2) (Ong & Grimes, 2000):
$${C}_{Cell}={C}_{MUT}+{C}_{Sub}+{C}_{G}$$
$${C}_{MUT}+{C}_{Sub}={\varepsilon }_{0}\frac{({\varepsilon }_{\mathrm{MUT}}+{\varepsilon }_{\mathrm{Sub}})K(\sqrt{1-{\delta }^{2}})}{2K(\delta )}$$
$${C}_{G}={\varepsilon }_{0}{\varepsilon }_{\mathrm{MUT}}\frac{\mathrm{h}}{\mathrm{a}}$$
$$\updelta =\frac{\mathrm{h}}{\mathrm{a}}$$
where \({\varepsilon }_{0}=8.85\times {10}^{-12}\,F/m\) is the permittivity of free space, \({C}_{Sub}\) is the capacitance of the substrate, and \({\varepsilon }_{MUT}\) is the relative permittivity of the MUT. \({C}_{MUT}\) is the capacitance of the material under test, \({C}_{G}\) is the capacitance between the electrodes, K(x) is the complete elliptic integral of the first kind, h is the thickness of the metal layer, and a is the dimension of one unit cell. Formula (3) shows that \(({\varepsilon }_{\mathrm{MUT}}+{\varepsilon }_{\mathrm{Sub}})\) governs the capacitance changes of the sensor. This capacitance is far larger than the capacitance between the electrodes (\({C}_{G}\)), so the effect of \({C}_{G}\) is not considered. In formula (3), the capacitance increases with the dielectric of the MUT and substrate, with a gain equal to \(({\varepsilon }_{\mathrm{MUT}}+{\varepsilon }_{\mathrm{Sub}})\). In this sum, \({\varepsilon }_{\mathrm{MUT}}\) is assumed to change under pressure, whereas \({\varepsilon }_{\mathrm{Sub}}\) is considered constant. To evaluate the effect of the dielectric substrate, we used two kinds of fabric, one with a low dielectric constant and one with a high dielectric constant. Moreover, in terms of loss tangent, Polyester has the lowest loss tangent of the fabrics while Cotton has the highest (Cerovic et al., 2009), so the two fabrics make a good pair for comparison. Note that because the sensing capacitance depends on the change in permittivity of the MUT (\(\Delta \varepsilon\)), the variation in capacitance \(\Delta C\) can be obtained from (6):
$$\Delta C=\Delta \varepsilon {C}_{0}$$
where \({C}_{0}\) is the baseline or initial capacitance. Capacitance values of interdigitated electrodes are typically low, around several femtofarads. In addition, parasitic capacitances are unwanted elements that can degrade the signal-to-noise ratio of the readout circuitry. Therefore, from (6), a higher baseline capacitance is needed to obtain larger changes in capacitance. However, the sensitivity of capacitive pressure sensors is calculated as \(S=\frac{\Delta C/{C}_{0}}{P}\), where P represents the applied pressure. Increasing the sensor's initial capacitance therefore decreases the sensitivity. This discussion is taken further in the next section. Before that, it is necessary to note that the main challenge in transducing dielectric changes into pressure readings lies in \(\Delta \varepsilon\): the higher \(\Delta \varepsilon\), the higher the sensitivity that can be achieved. In this paper, we used two methodologies to develop the deformability of the dielectric layer.
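To make the capacitance model concrete, the unit-cell capacitance of formulas (2)-(5) can be evaluated numerically. The sketch below computes K(x) with the arithmetic-geometric mean; all parameter values are hypothetical and serve only to show how the quantities combine:

```scala
// Numerical sketch of formulas (2)-(5); all parameter values are hypothetical.
object UnitCellCapacitance extends App {
  val eps0 = 8.85e-12 // vacuum permittivity [F/m]

  // Complete elliptic integral of the first kind via the arithmetic-geometric
  // mean: K(k) = pi / (2 * AGM(1, sqrt(1 - k^2))).
  def ellipticK(k: Double): Double = {
    var a = 1.0
    var b = math.sqrt(1.0 - k * k)
    while (math.abs(a - b) > 1e-15) {
      val an = (a + b) / 2.0
      b = math.sqrt(a * b)
      a = an
    }
    math.Pi / (2.0 * a)
  }

  // Unit-cell capacitance (per unit finger length, as in the formulas above).
  def cCell(epsMUT: Double, epsSub: Double, h: Double, a: Double): Double = {
    val delta = h / a                                                    // (5)
    val cMutSub = eps0 * (epsMUT + epsSub) *
      ellipticK(math.sqrt(1.0 - delta * delta)) / (2.0 * ellipticK(delta)) // (3)
    val cG = eps0 * epsMUT * delta                                       // (4)
    cMutSub + cG                                                         // (2)
  }

  // hypothetical values: eps_MUT = 3.0, eps_Sub = 1.6, h = 10 um, a = 1 mm
  println(f"C_cell = ${cCell(3.0, 1.6, 10e-6, 1e-3)}%.3e")
}
```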
The first method is based on percolation theory, using CNTs (carbon nanotubes) as a filler to improve the polymer properties. In this method, two main issues should be considered, since they affect the selection of suitable particles: first, the interfacial interaction between the CNTs (the reinforcement) and the polymer, known as the percolation path; and second, the proper distribution and dispersion of the CNTs inside the polymer, i.e., the concentration. The reason for choosing CNTs is their aspect ratio: the ratio between the length and diameter of CNTs is much larger than 1, whereas spherical particles have an aspect ratio of approximately 1. The large aspect ratio of CNTs is a great advantage for achieving low critical volumes, since the percolation threshold decreases with increasing filler aspect ratio (Bauhofer & Kovacs, 2009). In the next section, we describe the relationship between the concentration and the percolation path of CNT composites more clearly. According to percolation theory, the relationship between relative permittivity (\(\varepsilon\)) and the volume fraction of the composite filler can be described by the following power law (Shang et al., 2021; Wang et al., 2015):
$$\varepsilon \propto {\varepsilon }_{p }{({f}_{c }- {f}_{CNTs })}^{-t}\quad \mathrm{for}\ {f}_{CNTs}<{f}_{c}$$
where \(\varepsilon\) is the dielectric permittivity of the composite mixture, \({\varepsilon }_{p}\) is the relative permittivity of the polymer, \({f}_{c}\) is the percolation threshold or critical volume, \({f}_{CNTs}\) is the volume fraction of the filler, and t is the dielectric critical exponent.
In addition, to increase sensitivity, we studied combining the composite with a microporous structure. As explained in earlier works, the capacitance changes when the dielectric of the elastomer changes (Kwon et al., 2016; Yoon et al., 2017). Under pressure, the thickness change in a microporous dielectric layer leads to a larger permittivity change than in a non-porous one. The variation in the effective relative permittivity of the microporous layer (\({\varepsilon }_{\mathrm{e}}\)) under external pressure can be determined as follows (Atalay et al., 2018):
$${\varepsilon }_{\mathrm{e}}={\varepsilon }_{\mathrm{air}}{V}_{\mathrm{air}}+{\varepsilon }_{\mathrm{mixer}}{V}_{\mathrm{mixer}}$$
where \({\varepsilon }_{\mathrm{air}}=1\), \({\varepsilon }_{\mathrm{mixer}}\) is the dielectric constant of the porous CNT/polymer mixture, and \({V}_{\mathrm{air}}\) and \({V}_{\mathrm{mixer}}\) represent the volume fractions of air and composite, respectively. Under pressure, the volume of the air gaps steadily decreases, increasing the effective relative permittivity of the mixture. In our case, once the air gaps have collapsed, we boost the capacitance change further by increasing the dispersion ratio of CNTs inside the elastomeric layer. Note that increasing the amount of CNTs inside the polymer changes both the relative permittivity and the elastomeric properties. Therefore, choosing a suitable dielectric elastomer is essential for enhancing sensitivity, especially under high pressure. Owing to its excellent flexibility, elasticity, and stability, Ecoflex (Smooth-On Inc., Macungie, PA, USA) with hardness Shore 00-30 is suitable for this study.
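The interplay of formulas (7) and (8) can be sketched in a few lines of Scala; the numbers below are placeholders chosen only to show the trend that shrinking air gaps raise the effective permittivity:

```scala
// Sketch of formulas (7) and (8): percolation scaling of the mixer
// permittivity and the effective permittivity of the porous composite.
// All numeric values are hypothetical placeholders.
object DielectricModel extends App {
  // formula (7): eps ~ eps_p * (f_c - f_CNTs)^(-t), valid for f_CNTs < f_c
  def mixerPermittivity(epsP: Double, fc: Double, fCNTs: Double, t: Double): Double = {
    require(fCNTs < fc, "power law only holds below the percolation threshold")
    epsP * math.pow(fc - fCNTs, -t)
  }

  // formula (8): eps_e = eps_air * V_air + eps_mixer * V_mixer, with
  // eps_air = 1 and V_air + V_mixer = 1
  def effectivePermittivity(epsMixer: Double, vAir: Double): Double =
    1.0 * vAir + epsMixer * (1.0 - vAir)

  val epsMix = mixerPermittivity(epsP = 2.8, fc = 0.01, fCNTs = 0.0025, t = 1.0)
  // under compression the air fraction shrinks, raising eps_e:
  for (vAir <- Seq(0.5, 0.25, 0.0))
    println(f"V_air = $vAir%.2f -> eps_e = ${effectivePermittivity(epsMix, vAir)}%.2f")
}
```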
A screen-printing process was used to fabricate the electrodes of the presented sensor. The structure of the proposed sensor is shown in Fig. 2; it consists of the fabric serving as the substrate layer, the Ag electrodes, and the pressure-sensing layer. Two kinds of fabric were used as the sensor substrate: 100% polyester (nonwoven, 0.3 mm thick) and 100% cotton (woven, 0.22 mm thick).
Schematic diagram of the fabricated pressure sensor. CNTs Carbon nanotubes
The microporous dielectric layer, i.e., the pressure-sensing layer, was fabricated by the process shown schematically in Fig. 3. The Ecoflex solution was obtained by mixing the base (Part A) and the curing agent (Part B) at a weight ratio of 1:1. Then, single-wall carbon nanotubes (SWCNTs) (TUBALL, diameter smaller than 2 nm, produced by OCSiAl) were dispersed in the Ecoflex solution. Stirring at 120 rpm for 15 min helps break up the CNT bundles and distribute the carbon fibers evenly in the composite. After that, granulated brown sugar was added to the solution, which was stirred again. Different dielectric films were obtained by changing the weight ratios of CNTs and sugar distributed in the Ecoflex, as shown in Table 3.
A fabrication process of the Porous Composite Dielectric Layer
Table 3 Dielectric films with different weight ratios of CNTs inside the Ecoflex polymer
The mixture was cured at room temperature (30 °C) for 3 h. To form the shape of the dielectric films, we used a 3D mold with a length, width, and height of 2 cm, 2 cm, and 0.7 cm, respectively. After curing, the sample was removed from the mold, and the sugar was dissolved twice in boiling water with magnetic stirring at 200 rpm, lasting at least 24 h in total. One reason for working with weight concentrations is that they make it straightforward to account for the amount of sugar removed in the dissolving process; after dissolution, the weight of the composite was reduced by half. Finally, the dielectric films were dried in a conventional mini dryer for 2 h at 100 °C to remove moisture. Figure 4 shows the structure of the composites with and without pores. As can be seen in Fig. 5, the porous dielectric film exhibits high mechanical elasticity and flexibility under compression and release. The obtained capacitors are highly flexible, allowing bending angles smaller than 20 degrees (Fig. 5b).
Schematic illustrations of the capacitive pressure sensor a CNTs + Ecoflex; b CNTs + Sugar + Ecoflex; c Sugar + Ecoflex; d 3D view of the dielectric film
a High flexibility of the dielectric layer; b High flexibility of the silver layer
It is known that, due to their high aspect ratios and long-range van der Waals interactions, SWCNTs tend to form ropes or bundles with a highly complicated structure, as shown in Fig. 6a. The CNTs were therefore spread inside the polymer in the form of bundles, as can be seen in Fig. 6b, which develops the bond strength and enhances the durability of the polymer; the robustness of the presented sensor was demonstrated by testing under high pressure for more than 20,000 cycles. In addition, the nanotube bundles tend to form a continuous cross-network under pressure, effectively increasing the volume of CNTs dispersed in the composite as the inter-filler distances shorten, as shown in Fig. 4a. From formula (7), the dielectric permittivity increases with the filler volume fraction, here that of the CNTs, so the volume fraction is one of the essential parameters of the composite. During preparation in the laboratory, we chose to work with weight concentrations (wt%) because they are easily converted to volume fractions. Moreover, due to the high aspect ratio of CNTs, a considerable enhancement of the composite can be achieved with a small filler concentration; the lower the CNT content, the lower the cost. In this study, CNT loadings as low as 0.25% were used.
SEM images of a the CNTs; b the CNTs Composite Dielectric Layer (CNTs + Ecoflex)
To study the effect of frequency, we measured the capacitance and dielectric loss of the silver interdigitated electrodes on both polyester and cotton substrates using a Keysight E4980AL precision LCR meter. Measurements below 1 kHz were not stable; therefore, Fig. 7 shows results over the frequency range from 1 to 300 kHz. From formula (3), the higher the substrate permittivity \({\varepsilon }_{\mathrm{Sub}}\), the higher the capacitance; accordingly, as can be seen in Fig. 7a, the capacitance of the Cotton fabric is higher than that of the Polyester one. Moreover, Fig. 7b shows that the dielectric loss of the materials ranges from 0.00113 to 0.02, which is not negligible in electro-textiles. To minimize the dielectric loss, we chose the frequency at the point of lowest dielectric loss. Note that the lowest dielectric loss of Polyester, at 8.43 kHz, is much lower than that of Cotton; however, around the 8.43 kHz point the loss changes dramatically for both fabrics. Therefore, to evaluate the effect of the non-conductive fabric on sensor sensitivity, we selected the frequency of lowest dielectric loss of the cotton fabric, 45 kHz, for all measurements.
Frequency dispersion of a capacitance; b loss tangent
The schematic diagram for evaluating the performance of the presented sensor is shown in Fig. 8a. Pressure was applied using a universal testing machine with a force gauge (Dacell Co., Seoul, Korea), and the capacitance was recorded with a Keysight LCR meter (E4980AL); both were connected to a computer to collect the data (Fig. 8b).
a Schematic of the universal testing machine; b Measurement setup for presented pressure sensor with Porous Composite Dielectric Film
Figure 9 shows the pressure response of the different proposed dielectric films on Polyester (Fig. 9a) and Cotton (Fig. 9b) fabric. The experimental results show that, for both substrates, the porous CNT dielectric film has the highest pressure-sensing performance from 0 to 400 kPa, while the porous film without CNTs has the lowest. To explain this behaviour, we recall formula (8): under pressure, the air gaps that lower the dielectric of the porous composite are progressively squeezed out, so its dielectric approaches the higher dielectric constant of the non-porous material, consequently increasing the effective dielectric constant of the porous composite.
a Pressure-response curves of capacitive pressure sensors with Polyester fabric; b Pressure-response curves of capacitive pressure sensors with cotton fabric
Moreover, the difference in sensitivity between Cotton and Polyester is significant even when the same kind of dielectric film is used. As mentioned for formula (6), a large baseline capacitance yields a larger absolute change in capacitance, making the output signal much easier to detect. The sensitivity formula, however, points the other way. In our experiment, the baseline capacitance of the porous CNT film is around 7 pF on Polyester and 13 pF on Cotton. Even though the capacitance of the Cotton sample remains higher than that of the Polyester one under compression, the subtraction \(\Delta C\) and division \(\frac{\Delta C}{{C}_{0}}\) roughly halve the sensitivity; that is why the sensitivity of the sensor printed on Polyester fabric is higher than that of the Cotton one. Note that the baseline (initial) capacitance of the non-porous CNT sample is higher than that of the porous one; hence, the sensitivity of the non-porous CNT samples printed on Polyester is three times higher than that of those on Cotton. In addition, when the initial capacitance of the sensor is very low, as in the porous Ecoflex (Ecoflex + sugar) samples, the sensitivity difference between the two fabrics is not significant.
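This trade-off can be replayed with the numbers above. In the following sketch, the capacitance change and pressure are hypothetical, while the 7 pF and 13 pF baselines are the measured ones; for the same \(\Delta C\), the larger baseline roughly halves the sensitivity:

```scala
// Sensitivity S = (dC / C0) / P from the formula above; dC and P are hypothetical.
object SensitivityComparison extends App {
  def sensitivity(dC: Double, c0: Double, p: Double): Double = (dC / c0) / p

  val dC = 2.0  // assumed identical capacitance change [pF] on both fabrics
  val p  = 50.0 // applied pressure [kPa]
  println(f"Polyester (C0 = 7 pF):  S = ${sensitivity(dC, 7.0, p)}%.4f kPa^-1")
  println(f"Cotton    (C0 = 13 pF): S = ${sensitivity(dC, 13.0, p)}%.4f kPa^-1")
  // same dC, larger baseline -> smaller relative change -> lower sensitivity
}
```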
The pressure response time of the presented sensor is investigated in Fig. 10a. Two different pressures, 47 kPa and 400 kPa, were applied for low and high compression, respectively; the resulting sensitivities are higher than those of several previously reported pressure sensors (Table 4). The results show that the samples were more stable and reversible under low pressure than under high pressure; nevertheless, all samples exhibited a rapid recovery time below 0.2 s. The robustness of the presented sensor was demonstrated by testing under high pressure for more than 20,000 cycles (Fig. 10b): after the first 1000 cycles the sensor remains stable, and its performance was still stable after 20,000 cycles.
a Pressure-response time of capacitive pressure sensors at 47 kPa and 400 kPa; b Capacitance response of pressure sensor during 20,000 loading and unloading cycles at an applied pressure of 400 kPa
Table 4 Comparison of the proposed sensor with reported pressure sensors
We presented a novel approach to electro-textile pressure sensors using interdigitated capacitors (IDCs) fabricated on a flexible substrate. The presented sensor achieves high sensitivity over a broad range (up to 400 kPa), with sensitivities from 0.035 \({\mathrm{kPa}}^{-1}\) (at 400 kPa) to 0.15 \({\mathrm{kPa}}^{-1}\) (at 50 kPa). The dielectric layer achieves outstanding durability thanks to the combination of the highly resilient Ecoflex and the CNTs. Furthermore, the proposed sensor exhibits fast response and recovery times over a wide detection range of more than 400 kPa. Moreover, the dielectric substrate plays a key role in the sensor's sensitivity and in the detectability of the output signal; selecting a suitable dielectric substrate should therefore be considered for practical applications.
Aparicio, R., & Hajimiri, A. (2002). Capacity limits and matching properties of integrated capacitors. IEEE Journal of Solid-State Circuits, 37(3), 384–393. https://doi.org/10.1109/4.987091
Atalay, O., Atalay, A., Gafford, J., & Walsh, C. (2018). A highly sensitive capacitive-based soft pressure sensor based on a conductive fabric and a microporous dielectric layer. Advanced Materials Technologies, 3(1), Article 1700237. https://doi.org/10.1002/admt.201700237
Bauhofer, W., & Kovacs, J. Z. (2009). A review and analysis of electrical percolation in carbon nanotube polymer composites. Composites Science and Technology, 69(10), 1486–1498. https://doi.org/10.1016/j.compscitech.2008.06.018
Castano, L. M., & Flatau, A. B. (2014). Smart fabric sensors and e-textile technologies: A review. Smart Materials and Structures, 23(5), Article 053001. https://doi.org/10.1088/0964-1726/23/5/053001
Cerovic, D. D., Dojcilovic, J. R., Asanovic, K. A., & Mihajlidi, T. A. (2009). Dielectric investigation of some woven fabrics. Journal of Applied Physics, 106(8), Article 084101. https://doi.org/10.1063/1.3236511
Chen, S., Song, Y., & Xu, F. (2018). Flexible and highly sensitive resistive pressure sensor based on carbonized crepe paper with corrugated structure. ACS Applied Materials & Interfaces, 10(40), 34646–34654. https://doi.org/10.1021/acsami.8b13535
Flexible Capacitive Pressure Sensor Enhanced by Tilted Micropillar Arrays | ACS Applied Materials & Interfaces. (n.d.). https://doi.org/10.1021/acsami.9b03718. Accessed 2 Sept 2021
Feng, C., Yi, Z., Jin, X., Seraji, S. M., Dong, Y., Kong, L., & Salim, N. (2020). Solvent crystallization-induced porous polyurethane/graphene composite foams for pressure sensing. Composites Part b: Engineering, 194, Article 108065. https://doi.org/10.1016/j.compositesb.2020.108065
Guo, Y., Gao, S., Yue, W., Zhang, C., & Li, Y. (2019). Anodized aluminum oxide-assisted low-cost flexible capacitive pressure sensors based on double-sided nanopillars by a facile fabrication method. ACS Applied Materials & Interfaces, 11(51), 48594–48603. https://doi.org/10.1021/acsami.9b17966
Huang, Y., Fan, X., Chen, S.-C., & Zhao, N. (2019). Emerging technologies of flexible pressure sensors: Materials, modeling, devices, and manufacturing. Advanced Functional Materials, 29(12), Article 1808509. https://doi.org/10.1002/adfm.201808509
Hwang, J., Kim, Y., Yang, H., & Oh, J. H. (2021). Fabrication of hierarchically porous structured PDMS composites and their application as a flexible capacitive pressure sensor. Composites Part b: Engineering, 211, Article 108607. https://doi.org/10.1016/j.compositesb.2021.108607
Kou, H., Zhang, L., Tan, Q., Liu, G., Lv, W., Lu, F., et al. (2018). Wireless flexible pressure sensor based on micro-patterned Graphene/PDMS composite. Sensors and Actuators a: Physical, 277, 150–156. https://doi.org/10.1016/j.sna.2018.05.015
Kwon, D., Lee, T.-I., Shim, J., Ryu, S., Kim, M. S., Kim, S., et al. (2016). Highly sensitive, flexible, and wearable pressure sensor based on a giant piezocapacitive effect of three-dimensional microporous elastomeric dielectric layer. ACS Applied Materials & Interfaces, 8(26), 16922–16931. https://doi.org/10.1021/acsami.6b04225
Lei, Z., Wang, Q., Sun, S., Zhu, W., & Wu, P. (2017). A bioinspired mineral hydrogel as a self-healable, mechanically adaptable ionic skin for highly sensitive pressure sensing. Advanced Materials (Deerfield Beach, Fla.). https://doi.org/10.1002/adma.201700321
Mahata, C., Algadi, H., Lee, J., Kim, S., & Lee, T. (2020). Biomimetic-inspired micro-nano hierarchical structures for capacitive pressure sensor applications. Measurement, 151, Article 107095. https://doi.org/10.1016/j.measurement.2019.107095
Ong, K. G., & Grimes, C. A. (2000). A resonant printed-circuit sensor for remote query monitoring of environmental parameters. Smart Materials and Structures, 9(4), 421–428. https://doi.org/10.1088/0964-1726/9/4/305
Ruth, S. R. A., & Bao, Z. (2020). Designing tunable capacitive pressure sensors based on material properties and microstructure geometry. ACS Applied Materials & Interfaces, 12(52), 58301–58316. https://doi.org/10.1021/acsami.0c19196
Ruth, S. R. A., Beker, L., Tran, H., Feig, V. R., Matsuhisa, N., & Bao, Z. (2020). Rational design of capacitive pressure sensors based on pyramidal microstructures for specialized monitoring of biosignals. Advanced Functional Materials, 30(29), Article 1903100. https://doi.org/10.1002/adfm.201903100
One-Rupee Ultrasensitive Wearable Flexible Low-Pressure Sensor | ACS Omega. (n.d.). https://doi.org/10.1021/acsomega.0c02278. Accessed 27 July 2021
Seyedin, S., Zhang, P., Naebe, M., Qin, S., Chen, J., Wang, X., & Razal, J. M. (2019). Textile strain sensors: A review of the fabrication technologies, performance evaluation and applications. Materials Horizons, 6(2), 219–249. https://doi.org/10.1039/C8MH01062E
Shang, S., Tang, C., Jiang, B., Song, J., Jiang, B., Zhao, K., et al. (2021). Enhancement of dielectric permittivity in carbon nanotube/polyvinylidene fluoride composites by constructing of segregated structure. Composites Communications, 25, Article 100745. https://doi.org/10.1016/j.coco.2021.100745
Wan, S., Bi, H., Zhou, Y., Xie, X., Su, S., Yin, K., & Sun, L. (2017). Graphene oxide as high-performance dielectric materials for capacitive pressure sensors. Carbon, 114, 209–216. https://doi.org/10.1016/j.carbon.2016.12.023
Wang, J., Jiu, J., Nogi, M., Sugahara, T., Nagao, S., Koga, H., et al. (2015). A highly sensitive and flexible pressure sensor with electrodes and elastomeric interlayer containing silver nanowires. Nanoscale, 7(7), 2926–2932. https://doi.org/10.1039/C4NR06494A
Wang, X., Xia, Z., Zhao, C., Huang, P., Zhao, S., Gao, M., & Nie, J. (2020). Microstructured flexible capacitive sensor with high sensitivity based on carbon fiber-filled conductive silicon rubber. Sensors and Actuators A: Physical, 312, Article 112147. https://doi.org/10.1016/j.sna.2020.112147
Xiong, Y., Shen, Y., Tian, L., Hu, Y., Zhu, P., Sun, R., & Wong, C.-P. (2020). A flexible, ultra-highly sensitive and stable capacitive pressure sensor with convex microarrays for motion and health monitoring. Nano Energy, 70, Article 104436. https://doi.org/10.1016/j.nanoen.2019.104436
Yoon, J. I., Choi, K. S., & Chang, S. P. (2017). A novel means of fabricating microporous structures for the dielectric layers of capacitive pressure sensor. Microelectronic Engineering, 179, 60–66. https://doi.org/10.1016/j.mee.2017.04.028
Zang, Y., Zhang, F., Di, C., & Zhu, D. (2015). Advances of flexible pressure sensors toward artificial intelligence and health care applications. Materials Horizons, 2(2), 140–156. https://doi.org/10.1039/C4MH00147H
This research was partly supported by the National Research Foundation of Korea (NRF-2019R1A2C2005933) grant funded by the Korea Government (MSIT) and Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0002397, HRD program for Industrial Convergence of Wearable Smart Devices).
This research was financially supported and funded by Soongsil University, Seoul 156-743, Korea.
Department of Smart Wearables Engineering, Soongsil University, Seoul, 156-743, Korea
TranThuyNga Truong, Ji-Seon Kim & Eunji Yeun
Department of Organic Materials and Fiber Engineering, Soongsil University, Seoul, 156-743, Korea
Jooyong Kim
TranThuyNga Truong
Ji-Seon Kim
Eunji Yeun
As the corresponding author, Kim Jooyong was responsible for the overall structure and drafted the manuscript, while Truong TranThuyNga was responsible for the experiment design, data collection, data processing, and modeling. Kim Ji-Seon and Eunji Yeun were responsible for data cleaning and choosing materials. All authors read and approved the final manuscript.
Truong TranThuyNga is currently a lecturer and researcher as a Ph.D. candidate in the Department of Smart Wearables Engineering at Soongsil University, Seoul 156-743, Korea. Email address: [email protected]. Her research interest includes flexible wearable sensors and their applications in human activity monitoring or personal healthcare based on machine learning algorithms. Nga has four years of research experience with smart textiles, natural fibers, nanotechnology, and textile science, excellent communication skills, and strong collaboration ability. Her future goal is to become an academic working to advance teaching and research in institutes of education.
Kim Ji-Seon is a Ph.D. student in the Department of Smart Wearables Engineering at Soongsil University, Seoul 156-743, Korea. Email address: [email protected]. She was responsible for data cleaning and choosing materials.
Yeun Eunji is a Master's student in the Department of Smart Wearables Engineering at Soongsil University, Seoul 156-743, Korea. Email address: [email protected]. She was responsible for data cleaning and choosing materials.
Kim Jooyong is a professor in the Department of Organic Materials and Fiber Engineering at Soongsil University, Seoul 156-743, Korea. Email address: [email protected]. Prof. Jooyong Kim's research interest includes developing innovative fashion products based on electronic textiles.
Correspondence to Jooyong Kim.
Truong, T., Kim, JS., Yeun, E. et al. Wearable capacitive pressure sensor using interdigitated capacitor printed on fabric. Fash Text 9, 46 (2022). https://doi.org/10.1186/s40691-022-00320-w
Accepted: 10 October 2022
Capacitive pressure sensor
Electro-textile
Wearable sensor
Fabric sensor
Advanced Textiles and Intelligent Wearables for Comfort and Healthcare
uniform probability distribution examples and solutions
The uniform distribution is a continuous probability distribution concerned with events that are equally likely to occur. The notation is $X \sim U(a, b)$, where $a$ = the lowest value of $x$ and $b$ = the highest value of $x$; $X$ can be any real number between $a$ and $b$ (in some instances, including $a$ and $b$ themselves), the domain is a finite interval, and all values $x$ in it are equally likely. The key formulas are:

- probability density function: $f(x) = \frac{1}{b-a}$ for $a \le x \le b$
- mean: $\mu = \frac{a+b}{2}$
- standard deviation: $\sigma = \sqrt{\frac{(b-a)^2}{12}}$
- area to the left of $x$: $P(X < x) = (x - a)\,\frac{1}{b-a}$
- area to the right of $x$: $P(X > x) = (b - x)\,\frac{1}{b-a}$
- area between $c$ and $d$: $P(c < X < d) = (\text{base})(\text{height}) = (d - c)\,\frac{1}{b-a}$

Uniform distributions can be grouped into two categories based on the types of possible outcomes. A discrete uniform distribution has finitely many equally likely outcomes: each time a fair six-sided die is thrown, each side has a chance of 1/6, so the possible values are 1, 2, 3, 4, 5, or 6, and it is impossible to get a value of 1.3, 4.2, or 5.7; similarly, a flipped coin has two equally likely outcomes. The discrete uniform distribution is useful in Monte Carlo simulation (a statistical method for modeling the probability of different outcomes in problems that cannot be simply solved due to the interference of a random variable, often used to forecast scenarios and identify risks), and it can also arise in inventory management, for example when auditing inventory (cross-checking financial records with physical inventory and records). A continuous uniform distribution, by contrast, has an infinite number of equally likely measurable values; an idealized random number generator is a good example. Intuitively, if you stood on a street corner and started to randomly hand a $100 bill to any lucky person who walked by, every passerby would have an equal chance of being handed the money.

Worked examples:

1. The amount of time, in minutes, that a person must wait for a bus is uniformly distributed between zero and 15 minutes, inclusive, so $X \sim U(0, 15)$. The mean is $\mu = \frac{0+15}{2} = 7.5$ minutes, and the standard deviation is $\sigma = \sqrt{\frac{(15-0)^2}{12}} = 4.3$ minutes. The probability of waiting less than 12.5 minutes is $P(X < 12.5) = (12.5 - 0)\,\frac{1}{15} = 0.8333$. Ninety percent of the time, the time a person must wait falls below what value (the 90th percentile)?

2. The data in the table below are 55 smiling times, in seconds, of an eight-week-old baby, assumed to follow a uniform distribution between zero and 23 seconds, inclusive: $X \sim U(0, 23)$ with $f(x) = \frac{1}{23}$ for $0 \le X \le 23$. The theoretical mean and standard deviation are $\mu = \frac{0+23}{2} = 11.50$ seconds and $\sigma = \sqrt{\frac{(23-0)^2}{12}} = 6.64$ seconds, while the sample mean is 11.49 and the sample standard deviation is 6.23. Notice that the theoretical values are close to the sample values; a histogram constructed from the sample is an empirical distribution that closely matches the theoretical uniform distribution.

3. Suppose the time it takes a nine-year-old child to eat a donut is between 0.5 and 4 minutes, inclusive. Find the probability that a randomly chosen child eats a donut in more than two minutes given that the child has already been eating the donut for more than 1.5 minutes. This is a conditional probability, so you must reduce the sample space: $P(x > 2 \mid x > 1.5) = (\text{base})(\text{new height}) = (4 - 2)(\frac{2}{5}) = \frac{4}{5}$.

4. Suppose the time it takes a student to finish a quiz is uniformly distributed between six and 15 minutes, inclusive; then $X \sim U(6, 15)$. Find the probability that a randomly selected student needs at least eight minutes to complete the quiz, and then the probability that a different student needs at least eight minutes given that she has already taken more than seven minutes.

5. For uniformly distributed furnace repair times: find the probability that a randomly selected repair requires less than three hours, and find the 30th percentile of repair times. The longest 25% of furnace repairs take at least 3.375 hours (3.375 hours or longer); since 25% of repair times are 3.375 hours or longer, 75% are 3.375 hours or less, i.e. 3.375 hours is the 75th percentile of repair times.

6. The data that follow are the numbers of passengers on 35 different charter fishing boats, and they follow a uniform distribution where all values between and including zero and 14 are equally likely: $a = 0$, $b = 14$, $X \sim U(0, 14)$, $\mu = 7$ passengers, $\sigma = 4.04$ passengers. The sample mean is 7.9 and the sample standard deviation is 4.33.

7. The total duration of baseball games in the major league in the 2011 season is uniformly distributed between 447 hours and 521 hours, inclusive. State the values of $a$ and $b$, then find the mean and the standard deviation.

8. A distribution is given as $X \sim U(0, 20)$. Then $P(2 < x < 18) = (18 - 2)\,\frac{1}{20} = 0.8$, and the 90th percentile is 18. Solve the problem two different ways (see Example 3).

Other setups follow the same pattern, for example letting $X$ be the time needed to change the oil on a car. When working out problems that have a uniform distribution, be careful to note whether the data are inclusive or exclusive, and it is important to practice examples of the uniform distribution after learning its formulas.

Sources: OpenStax, Statistics, "The Uniform Distribution" (cnx.org); donut-eating data from McDougall, John A., The McDougall Program for Maximum Weight Loss, Plume, 1995.
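To answer the percentile question posed in Example 1, the 90th percentile $k$ of $X \sim U(0, 15)$ satisfies

$$P(X < k) = (k - 0)\cdot\frac{1}{15} = 0.90 \quad\Longrightarrow\quad k = 0.90 \times 15 = 13.5 \text{ minutes},$$

so ninety percent of the time, the wait falls below 13.5 minutes.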
TACAS 2018: Tools and Algorithms for the Construction and Analysis of Systems, pp 270-287
Daisy - Framework for Analysis and Optimization of Numerical Programs (Tool Paper)
Eva Darulova
Anastasiia Izycheva
Fariha Nasir
Fabian Ritter
Heiko Becker
Robert Bastian
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10805)
Automated techniques for analysis and optimization of finite-precision computations have recently garnered significant interest. Most of these were, however, developed independently. As a consequence, reuse and combination of the techniques is challenging and much of the underlying building blocks have been re-implemented several times, including in our own tools. This paper presents a new framework, called Daisy, which provides in a single tool the main building blocks for accuracy analysis of floating-point and fixed-point computations which have emerged from recent related work. Together with its modular structure and optimization methods, Daisy allows developers to easily recombine, explore and develop new techniques. Daisy's input language, a subset of Scala, and its limited dependencies make it furthermore user-friendly and portable.
Finite-precision computations · Roundoff error analysis · Affine arithmetic (AA) · Fixed-point arithmetic · Interval subdivision
Floating-point or fixed-point computations are an integral part of many embedded and scientific computing applications, as are the roundoff errors they introduce. They expose an interesting tradeoff between efficiency and accuracy: the more precision we choose, the closer the results will be to the ideal real arithmetic, but the more costly the computation becomes. Unfortunately, the unintuitive and complex nature of finite-precision arithmetic makes manual optimization infeasible such that automated tool support is indispensable.
This has been recognized previously and several tools for the analysis and optimization of finite-precision computations have been developed. For instance, the tools Fluctuat [22], Rosa [14], Gappa [17], FPTaylor [41], Real2Float [31] and PRECiSA [34] automatically provide sound error bounds on floating-point (and some also on fixed-point) roundoff errors. Such a static error analysis is a pre-requisite for any optimization technique providing rigorous results, such as recent ones which choose a mixed-precision assignment [10] or an error-minimizing rewriting of the non-associative finite-precision arithmetic [15, 37].
Many of these techniques are complementary. The static analysis techniques have different strengths, weaknesses, and accuracy/efficiency tradeoffs, and optimization techniques should ideally be combined for best results [16]. However, today's techniques are mostly developed independently, resulting in re-implementations and making re-combination and re-use challenging and time-consuming.
In this paper, we present the framework Daisy for the analysis and optimization of finite-precision computations. In contrast to previous work, we have developed Daisy from the ground up to be modular, and thus easily extensible. Daisy is being actively developed and currently already provides many of today's state-of-the-art techniques — all in one tool. In particular, it provides dataflow- as well as optimization-based sound roundoff error analysis, support for mixed-precision and transcendental functions, rewriting optimization, interfaces to several SMT solvers and code generation in Scala and C. Daisy furthermore supports both floating-point and fixed-point arithmetic (whenever the techniques do), making it generally applicable to both scientific computing and embedded applications.
Daisy is aimed at tool developers as well as non-expert users. To make it user-friendly, we adopt the input format of Rosa, which is a real-valued functional domain-specific language in Scala. Unlike other tools today, which have custom input formats [41] or use prefix notation [12], Daisy's input is easily human-readable and natural to use.
Daisy is itself written in the Scala programming language [35] and has limited and optional dependencies, making it portable and easy to install. Daisy's main design goals are code readability and extensibility, and not necessarily performance. We demonstrate with our experiments that roundoff errors computed by Daisy are nonetheless competitive with state-of-the-art tools with reasonable running times.
Daisy has replaced Rosa for our own development, and we are happy to report that simple extensions (e.g. adding support for fused multiply-add operations) were integrated quickly by MSc students previously unfamiliar with the tool.
Contributions. We present the new tool Daisy which integrates several techniques for sound analysis and optimization of finite-precision computations:
static dataflow analysis for finite-precision roundoff errors [14] with mixed-precision support and additional support for the dReal SMT solver [21],
FPTaylor's optimization-based absolute error analysis [41],
transcendental function support, for dataflow analysis following [13],
interval subdivision, used by Fluctuat [22] to obtain tighter error bounds,
rewriting optimization based on genetic programming [15].
We show in Sect. 5 that results computed by Daisy are competitive. The code is available open-source at https://github.com/malyzajko/daisy.
We focus primarily on sound verification techniques. The goal of this effort is not to develop the next even more accurate technique, but rather to consolidate existing ones and to provide a solid basis for further research. Other efforts related to Daisy, which have been described elsewhere and which we do not focus on here, are the generation and checking of formal certificates [4], relative error computation [26], and mixed-precision tuning [16].
2 User's Guide: An Overview of Daisy
We first introduce Daisy's functionality from a user's perspective, before reviewing background in roundoff error analysis (Sect. 3) and then describing the developer's view and the internals of Daisy (Sect. 4).
Installation. Daisy is set up with the simple build tool (sbt) [30], which takes care of installing all Scala-related dependencies fully automatically. This basic setup was successfully tested on Linux, macOS and Windows. Some of Daisy's functionality requires additional libraries, which are also straight-forward to install: the Z3 and dReal SMT-solvers [19, 21], and the MPFR arbitrary-precision library [20]. Z3 works on all platforms, we have tested MPFR on Linux and Mac, and dReal on Linux.
Input Specification Language. The input to Daisy is a source program written in a real-valued specification language; Fig. 1 shows an example nonlinear embedded controller [15]. The specification language is not executable (as real-valued computation is infeasible), but it is a proper subset of Scala. The Real data type is implemented with Scala's dedicated support for numerical types.
Example input program
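To illustrate the format, a Daisy input program of this shape looks roughly as follows. This is an illustrative sketch: the controller expression, variable ranges, and error bound are hypothetical, not those of Fig. 1:

```scala
import daisy.lang._
import Real._

object Controller {
  // a nonlinear controller kernel over real-valued inputs
  def out(x: Real, y: Real, z: Real): Real = {
    require(-5.0 <= x && x <= 5.0 && -5.0 <= y && y <= 5.0 &&
            -5.0 <= z && z <= 5.0)
    -0.7 * x * y - 1.3 * y * z + 0.5 * x - z
  } ensuring (res => res +/- 1.5e-13) // required absolute error bound
}
```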
Each input program consists of a number of functions which are handled by Daisy separately. In the function's precondition (the require clause), the user provides the ranges of all input variables. In addition, Daisy allows the user to specify an initial error (beyond only roundoff) on input variables with the +/- notation, as well as additional (non-interval) constraints on the inputs.
The function body consists of a numerical expression with possibly local variable declarations. Daisy supports arithmetic (\(+, -, *, /, \sqrt{}\)), the standard transcendental functions (\(\sin , \cos , \tan , \log , \exp \)) as well as fused multiply-add (FMA). Daisy currently does not support conditionals and loops; we discuss the challenges and possible future avenues in Sect. 6. The (optional) postcondition in the ensuring clause specifies the result's required accuracy in terms of worst-case absolute roundoff error. For our controller, this information may for instance be determined from the specification of the system's sensors or from an analysis of the controller's stability [32].
Main Functionality. The main mode of interaction with Daisy is through a command-line interface. Here we review Daisy's main features through the most commonly used command-line options; brackets denote a choice and curly braces optional parameters. For more options and more fine-grained settings, run Daisy with its help option.
The main feature of Daisy is the analysis of finite-precision roundoff errors, for which Daisy provides several methods, selected via command-line options.
Daisy supports forward dataflow analysis (as implemented in Rosa, Fluctuat and Gappa) and an optimization-based analysis (as implemented in FPTaylor and Real2Float). These methods compute absolute error bounds, and whenever a relative error can be computed, it is also reported. Daisy also supports a dedicated relative error computation [26] which is often more accurate, but also more expensive. All methods can be combined with interval subdivision, which can provide tighter error bounds at the expense of larger running times. We explain these analyses in more detail in Sect. 3.
Accuracy, and correspondingly the cost, of both the dataflow and the optimization-based analysis can be adjusted by choosing the method used to bound ranges: interval arithmetic, affine arithmetic, or the combination of interval arithmetic with an SMT solver described in Sect. 3.
With the solver option, the user can select between the two SMT solvers currently supported, which have to be installed separately. For dataflow analysis, one can also select the method for bounding errors (interval or affine arithmetic).
Daisy performs roundoff error analysis by default w.r.t. uniform double floating-point precision, but it also supports various other floating-point and fixed-point precisions, selected via a command-line option.
Mixed-precision, i.e. choosing different precisions for different variables, is supported by providing a mapping from variables to precisions in a separate file.
Finite-precision arithmetic is not associative: different rewritings, even though they are equivalent under a real-valued semantics, will exhibit different roundoff errors. The rewriting optimization [15] uses genetic search to find a rewriting for which it can show the smallest roundoff error.
Daisy prints the analysis result to the terminal. If a postcondition is specified, but the computed error does not satisfy it, Daisy also prints a warning. Optionally, the user can choose to generate executable code in Scala or C, which is especially useful for fixed-point arithmetic, as Daisy's code generator includes all necessary bit shifts.
Static analysis computes a sound over-approximation of roundoff errors, but an under-approximation can also be useful, e.g. to estimate how big the over-approximation of the static analysis is. This is provided by Daisy's dynamic analysis, which runs a program in the finite precision of interest and a higher-precision version side by side. For this, the MPFR library is required.
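A minimal sketch of this idea is shown below; it uses BigDecimal as a stand-in for the higher precision (Daisy itself uses MPFR, which needs a native library), and the polynomial and input range are hypothetical:

```scala
// Run a double-precision and a high-precision version of the same expression
// side by side on sampled inputs; the largest observed difference is an
// under-approximation of the true worst-case roundoff error.
object DynamicSampling extends App {
  val mc = new java.math.MathContext(50) // ~50 decimal digits as reference
  def fLow(x: Double): Double = x * x * x - 2.0 * x + 0.5
  def fHigh(x: BigDecimal): BigDecimal =
    x * x * x - BigDecimal(2) * x + BigDecimal(0.5)

  val rnd = new scala.util.Random(42)
  val errors = for (_ <- 1 to 10000) yield {
    val x = -1.0 + 2.0 * rnd.nextDouble() // sample inputs from [-1, 1]
    (BigDecimal(fLow(x), mc) - fHigh(BigDecimal(x, mc))).abs.toDouble
  }
  println(s"largest observed error: ${errors.max}")
}
```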
Online Interface. We also provide an online interface for Daisy at daisy.mpi-sws.org, which allows one to quickly try it out, although it does not yet support all the options; see the screenshot in Fig. 2.
Screenshot of Daisy's online interface
3 Theoretical Foundations
Before describing the inner architecture of Daisy, we review necessary background on finite-precision arithmetic and static analysis of their roundoff errors.
Floating-Point Arithmetic. One of the most commonly used finite-precision representations is floating-point arithmetic, which is standardized by IEEE754 [24]. The standard defines several precisions as well as rounding operators; here we will consider the most commonly used ones, i.e. single and double precision with operations in rounding-to-nearest mode. Then, arithmetic operations satisfy the following abstraction:
$$\begin{aligned} x \circ _{fl} y = (x \circ y)(1 + e) + d \;\text {, }\; |e |\le \epsilon _m, |d |\le \delta _m \end{aligned}$$
where \(\circ \in \{+, -, *, /\}\) and \(\circ _{fl}\) denotes the respective floating-point version. Square root follows similarly, and unary minus does not introduce roundoff errors. The machine epsilon \(\epsilon _m\) bounds the maximum relative error for so-called normal values. Roundoff errors of subnormal values, which provide gradual underflow, are expressed as an absolute error, bounded by \(\delta _m\). \(\epsilon _m = 2^{-24}, \delta _m = 2^{-150}\) and \(\epsilon _m = 2^{-53}, \delta _m = 2^{-1075}\) for single and double precision, respectively.
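These constants can be checked directly in Scala via the ulp of 1.0; a small sanity-check sketch, not part of Daisy:

```scala
object MachineEps extends App {
  // IEEE754 binary64: ulp(1.0) = 2^-52, so eps_m = ulp(1.0) / 2 = 2^-53
  val epsDouble = math.ulp(1.0) / 2
  println(epsDouble == math.pow(2, -53))         // prints: true
  // IEEE754 binary32: eps_m = 2^-24
  val epsSingle = math.ulp(1.0f) / 2
  println(epsSingle == math.pow(2, -24).toFloat) // prints: true
}
```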
Higher precisions are usually implemented in software libraries on top of standard double floating-point precision [2]. Daisy supports quad and quad-double precision, where we assume \(\epsilon _m = 2^{-113}\) and \(\epsilon _m = 2^{-211}\), respectively. Depending on the library, \(\delta _m\) may or may not be defined, and Daisy can be adjusted accordingly.
Static analyses usually use this abstraction of floating-point arithmetic, as bit-precise reasoning does not scale, and furthermore is unsuitable for computing roundoff errors w.r.t. continuous real-valued semantics (note that Eq. 1 is also real-valued). The abstraction furthermore only holds in the absence of not-a-number special values (NaN) and infinities. Daisy's static analysis detects such cases automatically and reports them as errors.
Fixed-Point Arithmetic. Floating-point arithmetic requires dedicated support, either in hardware or software, and depending on the application this support may be too costly. An alternative is fixed-point arithmetic which can be implemented with integers only, but which in return requires that the radix point alignments are precomputed at compile time. While no standard exists, fixed-point values are usually represented by bit vectors with an integer and a fractional part, separated by an implicit radix point. At runtime, the alignments are then performed by bit-shift operations. These shift operations can also be handled by special language extensions for fixed-point arithmetic [25]. For more details see [1], whose fixed-point semantics we follow. We use truncation as the rounding mode for arithmetic operations. The absolute roundoff error at each operation is determined by the fixed-point format, i.e. the (implicit) number of fractional bits available, which in turn can be computed from the range of possible values at that operation.
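As an illustration of how a fixed-point format follows from a range, consider the following sketch. It assumes signed two's-complement words and truncation, and glosses over boundary cases such as ranges whose maximum is an exact power of two:

```scala
// Minimal sketch of fixed-point format selection; not Daisy's actual code.
object FixedPointFormat extends App {
  // number of integer bits needed for all values in [lo, hi] (incl. sign)
  def intBits(lo: Double, hi: Double): Int = {
    val maxAbs = math.max(math.abs(lo), math.abs(hi))
    math.ceil(math.log(maxAbs) / math.log(2)).toInt + 1
  }

  // with a fixed word length, the remaining bits hold the fraction;
  // truncation loses at most one unit in the last fractional place
  def truncationError(lo: Double, hi: Double, wordLength: Int): Double = {
    val fracBits = wordLength - intBits(lo, hi)
    math.pow(2, -fracBits)
  }

  // magnitude needs 3 bits plus sign, leaving 12 fractional bits: 2^-12
  println(truncationError(-3.2, 7.5, 16))
}
```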
Range Arithmetic. The magnitude of floating-point and fixed-point roundoff errors depends on the magnitudes of possible values. Thus, in order to accurately bound roundoff errors, any static analysis first needs to be able to bound the ranges of all (intermediate) expressions accurately, i.e. tightly. Different range arithmetics have been developed and each has a different accuracy/efficiency tradeoff. Daisy supports interval [33] and affine arithmetic [18] as well as a more accurate, but also more expensive, combination of interval arithmetic and SMT [14].
Interval arithmetic (IA) [33] is an efficient choice for range estimation, which computes a bounding interval for each basic operation \(\circ \in \lbrace +, -, *, / \rbrace \) as
$$\begin{aligned}{}[x_0, x_1] \circ [y_0, y_1] = [ \min (x \circ y), \max ( x \circ y)], \text { where }x \in [x_0, x_1], y \in [y_0, y_1] \end{aligned}$$
and analogously for square root. Interval arithmetic cannot track correlations between variables (e.g. \(x - x \ne [0, 0]\)), and thus can introduce significant over-approximations of the true ranges, especially when the computations are longer.
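A minimal interval arithmetic in Scala makes the lost-correlation problem visible; note that a sound implementation must also round the interval bounds outward, which is omitted here:

```scala
// Interval arithmetic sketch following the equation above (no outward rounding).
case class Interval(lo: Double, hi: Double) {
  def +(that: Interval) = Interval(lo + that.lo, hi + that.hi)
  def -(that: Interval) = Interval(lo - that.hi, hi - that.lo)
  def *(that: Interval) = {
    val ps = Seq(lo * that.lo, lo * that.hi, hi * that.lo, hi * that.hi)
    Interval(ps.min, ps.max)
  }
}

object IntervalDemo extends App {
  val x = Interval(1.0, 2.0)
  println(x - x) // Interval(-1.0, 1.0): correlations are lost, not [0, 0]
}
```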
Affine arithmetic (AA) [18] tracks linear correlations by representing possible values of variables as affine forms:
$$\begin{aligned} \hat{x} = x_0 + \sum _{i=1}^n x_i \epsilon _i, \quad \text { where } \epsilon _i \in [-1, 1] \end{aligned}$$
where \(x_0\) denotes the central value (of the represented interval) and each noise term \(x_i\epsilon _i\) denotes a deviation from this central value. The range represented by an affine form is computed as \([\hat{x}] = [x_0 - rad(\hat{x}), x_0\,+\,rad(\hat{x})]\), \(rad(\hat{x}) = \sum _{i=1}^n |x_i|\). Linear operations are performed term-wise and are computed exactly, whereas nonlinear ones need to be approximated and thus introduce over-approximations. Overall, AA can produce tighter ranges in practice (though not universally). In particular, AA is often beneficial when the individual noise terms (\(x_i\)'s) are small, e.g. when they track roundoff errors.
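The corresponding sketch for affine forms shows how linear correlations survive; only exact linear operations are included, since nonlinear ones would introduce new noise terms and over-approximation:

```scala
// Affine-form sketch: x0 is the central value, terms maps a noise index i
// to the coefficient x_i of the noise variable eps_i in [-1, 1].
case class AffineForm(x0: Double, terms: Map[Int, Double]) {
  def +(that: AffineForm) = AffineForm(
    x0 + that.x0,
    (terms.keySet ++ that.terms.keySet).map(i =>
      i -> (terms.getOrElse(i, 0.0) + that.terms.getOrElse(i, 0.0))).toMap)
  def unary_- : AffineForm =
    AffineForm(-x0, terms.map { case (i, v) => i -> -v })
  def -(that: AffineForm) = this + (-that)
  def radius: Double = terms.values.map(math.abs).sum
  def toInterval: (Double, Double) = (x0 - radius, x0 + radius)
}

object AffineDemo extends App {
  val x = AffineForm(1.5, Map(1 -> 0.5)) // represents [1, 2] via eps_1
  println((x - x).toInterval)            // (0.0, 0.0): correlation is kept
}
```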
The over-approximation due to nonlinear arithmetic can be mitigated [14] by refining ranges computed by IA with a binary search in combination with a SMT solver which supports nonlinear arithmetic such as Z3 [19] or dReal [21].
Static Analysis for Roundoff Error Estimation. The worst-case absolute roundoff error that most static analyses approximate is:
$$\begin{aligned} \max _{x \in [a, b]}\;\;|f(x) - \tilde{f}(\tilde{x}) |\end{aligned}$$
where [a, b] is the range for x given in the precondition, and f and x are a mathematical real-valued arithmetic expression and variable, respectively, and \(\tilde{f}\) and \(\tilde{x}\) their finite-precision counterparts. This definition extends to multivariate f component-wise.
An automated and general estimation of relative errors (\(\frac{|f(x) - \tilde{f}(\tilde{x}) |}{|f(x)|}\)), though it may be more desirable, presents a significant challenge today. For instance, when the range of f(x) includes zero, relative errors are not well defined and this is often the case in practice. For a more thorough discussion, we refer the reader to [26]; the technique is also implemented within Daisy.
For bounding absolute errors, two main approaches exist today, which we review in the following.
Dataflow Analysis. One may think that just evaluating a program in interval arithmetic and interpreting the width of the resulting interval as the error bound would be sufficient. While this is certainly a sound approach, it computes too pessimistic error bounds in general. This is especially true if we consider relatively large ranges on inputs; we cannot distinguish which part of the interval width is due to the input interval or due to accumulated roundoff errors.
Thus, dataflow analysis computes roundoff error bounds in two steps, recursively over the abstract syntax tree (AST) of the arithmetic expression:
range analysis computes sound range bounds (for real semantics),
error analysis propagates errors from subexpressions and computes the new worst-case roundoffs using the previously computed ranges.
In practice, these two steps can be performed in a single pass over the AST. A side effect of this separation is that it provides us with a modular approach: we can choose different range arithmetics with different accuracy/efficiency tradeoffs for ranges and errors (and possibly for different parts of a program).
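The following sketch illustrates such a single pass for addition and multiplication, reusing the Interval class from the sketch above. The error propagation rules are simplified but in the spirit of the dataflow analysis; u denotes the unit roundoff and subnormals are ignored:

```scala
// Two-step dataflow analysis over the AST: range + error in one recursion.
sealed trait Expr
case class Variable(id: String) extends Expr
case class Plus(l: Expr, r: Expr) extends Expr
case class Times(l: Expr, r: Expr) extends Expr

object DataflowAnalysis {
  val u = math.pow(2, -53)
  def maxAbs(i: Interval) = math.max(math.abs(i.lo), math.abs(i.hi))

  // returns (range under real semantics, worst-case absolute roundoff)
  def eval(e: Expr, env: Map[String, Interval]): (Interval, Double) = e match {
    case Variable(id) =>
      val rng = env(id)
      (rng, maxAbs(rng) * u) // initial rounding of the inputs
    case Plus(l, r) =>
      val (rl, el) = eval(l, env); val (rr, er) = eval(r, env)
      val rng = rl + rr
      // propagated errors plus the new roundoff of the (finite) result
      (rng, el + er + (maxAbs(rng) + el + er) * u)
    case Times(l, r) =>
      val (rl, el) = eval(l, env); val (rr, er) = eval(r, env)
      val rng = rl * rr
      val prop = maxAbs(rl) * er + maxAbs(rr) * el + el * er
      (rng, prop + (maxAbs(rng) + prop) * u)
  }
}
```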
The main challenge of dataflow analysis is to minimize over-approximations due to nonlinear arithmetic (linear arithmetic can be handled well with AA). Previous tools chose different strategies. For instance, Rosa [14] employs the combination of interval arithmetic with a non-linear SMT-solver, which we described earlier. Fluctuat [22], which uses AA for both bounding the ranges as well as the errors, uses interval subdivision. In Fluctuat, the user can designate up to two variables whose input ranges will be subdivided into intervals of equal width. The analysis is performed separately for each subinterval, and the overall error is then the maximum error over all subintervals. Interval subdivision increases the runtime of the analysis, especially for multivariate functions, and the choice of which variables to subdivide and by how much is usually not straight-forward.
Optimization-based Analysis. FPTaylor [41], Real2Float [31] and PRECiSA [34], unlike Daisy, Rosa, Gappa and Fluctuat, formulate the roundoff error bounds computation as an optimization problem, where the absolute error expression from Eq. 2 is to be maximized, subject to interval constraints on its parameters. Due to the discrete nature of floating-point arithmetic, FPTaylor optimizes the continuous, real-valued abstraction from Eq. 1. However, this expression is still too complex and features too many variables for optimization procedures in practice.
FPTaylor introduces the Symbolic Taylor approach, where the objective function is simplified using a first order Taylor approximation with respect to e and d (the variables representing roundoff errors at each arithmetic operation). To solve the optimization problem, FPTaylor uses a rigorous branch-and-bound procedure.
4 Developer's Guide: Daisy's Internals
This section provides more details on Daisy's architecture and explains some of our design decisions. Daisy is written in the Scala programming language which provides a strong type system as well as a large collection of (parallel) libraries. While Scala supports both imperative and functional programming styles, we have written Daisy functionally as much as possible, which we found to be beneficial to ensuring correctness and readability of code.
4.1 Input Language and Frontend
Daisy's input language is implemented as a domain-specific language in Scala, and Daisy's frontend calls the Scala compiler, which performs parsing and type-checking. While designing our own simple input format and parser would certainly be more efficient in terms of Daisy's running time (and could be done in the future), we have deliberately chosen not to do this. An existing programming language provides clear semantics and feels natural to users. Using the Scala compiler furthermore helps to ensure that Daisy parses the program correctly, for instance that it indeed conforms to Scala's typing rules. Furthermore, extending the input language is usually straight-forward.
The other major design decision was to make the input program real-valued. This explicitly specifies the baseline against which roundoff errors should be computed, but it also makes it easy for the user to explore different options. For instance, changing the precision only requires changing a flag, whereas a finite-precision input program (like FPTaylor's or Fluctuat's) requires editing the source code.
Mixed precision is also supported; it respects Scala semantics and is thus transparent. The user may annotate variables, including local ones, with different precisions. To specify the precision of every individual operation, the program can be transformed into three-address form (Daisy can do this automatically), and then each arithmetic operation can be annotated via the corresponding variable, as the sketch below illustrates.
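As a small illustration of this transformation (a hypothetical Python sketch, not Daisy's actual pass), a nested expression is flattened so that every intermediate result is bound to a fresh temporary, which could then carry its own precision annotation:

```python
# Flatten a tuple-encoded expression into three-address form (illustrative).
def flatten(expr, out):
    """Emit (name, op, arg1, arg2) assignments; return the result's name."""
    if isinstance(expr, str):          # a variable reference
        return expr
    op, lhs, rhs = expr
    a, b = flatten(lhs, out), flatten(rhs, out)
    name = f"t{len(out)}"              # fresh temporary
    out.append((name, op, a, b))
    return name

stmts = []
flatten(('+', ('*', 'x', 'x'), 'y'), stmts)
print(stmts)   # [('t0', '*', 'x', 'x'), ('t1', '+', 't0', 'y')]
```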
Daisy currently does not support data structures such as arrays or lists in its input language, mainly because the static analysis of these is largely orthogonal to the analysis of the actual computation and we believe that standard strategies like unrolling computations over array elements or abstracting the array as a single variable can be employed.
4.2 Modular Architecture
Daisy is built up in a modular way by implementing its functionality in phases, which can be combined. See the overview in Fig. 3. Each phase takes as input and returns as output a context and a program, and can modify both. For instance, rewriting transforms the program and roundoff error analysis adds the analysis information to the context. This information is then re-used by later phases; for instance, the analysis information is used to generate fixed-point arithmetic programs in the code generation phase. This modularity allows, for instance, the rewriting optimization phase to be combined with any other roundoff error analysis.
Fig. 3. Overview of Daisy's phases. Phases in curly braces are optional.
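The architectural idea can be summarized in a few lines of Python (names and placeholder bodies are ours, not Daisy's API): a phase maps a (context, program) pair to a new (context, program) pair, so a pipeline is simply function composition.

```python
from functools import reduce

# Placeholder phases: in Daisy, the first would rewrite the program and the
# second would attach roundoff analysis results to the context.
def rewriting_phase(ctx, prog):
    return ctx, prog                       # would transform the program

def analysis_phase(ctx, prog):
    return dict(ctx, errors={}), prog      # would add analysis information

def run_pipeline(phases, ctx, prog):
    """Thread the (context, program) pair through all phases in order."""
    return reduce(lambda state, phase: phase(*state), phases, (ctx, prog))

ctx, prog = run_pipeline([rewriting_phase, analysis_phase], {}, "program AST")
print(ctx)   # {'errors': {}}
```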
In addition to the modular architecture, Daisy's main functionality is provided as a set of library tools, which allows for further reuse across different phases. It could also be used as a separate library in other tools. Here we highlight the main functionality provided:
A rational number type provides an implementation of rational numbers based on Java's BigInteger library. Rationals are used throughout Daisy for computations in order to avoid internal roundoff errors which could affect soundness (see the sketch after this list).
A wrapper class provides an interface to GNU's MPFR arbitrary-precision library [20].
Dedicated classes provide implementations of interval and affine arithmetic. Daisy uses no external libraries for these, in order to facilitate extensions and integration.
A range-computation module implements Rosa's combination of interval arithmetic with an SMT solver [14] for improved range bounds. Daisy uses the scala-smtlib library (https://github.com/regb/scala-smtlib) to interface with the Z3 and dReal SMT solvers. Other solvers can be added with little effort, provided they support the SMT-LIB standard [3].
Evaluator classes implement the dataflow roundoff error analysis. The analysis is parametric in the range method used and, due to its implementation as a library function, can easily be used in different contexts.
A derivatives module provides methods for computing and simplifying partial derivatives.
A search module provides a generic implementation of a (simple) genetic search, which is currently used for the rewriting optimization.
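The case for exact rational arithmetic inside the analyzer is easy to demonstrate; this sketch uses Python's fractions module in place of Daisy's BigInteger-backed rationals:

```python
from fractions import Fraction

# Binary floating-point introduces roundoff inside the analyzer itself...
print(0.1 + 0.2 == 0.3)                                      # False
# ...whereas rational arithmetic is exact, so computed bounds stay sound.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```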
The fixed-point precision class in Daisy supports any bitlength (i.e. only the frontend has a limited selection), and floating-point types can be straightforwardly added by specifying the corresponding machine epsilon and representable range.
4.3 Implementation Details
Here we provide details about Daisy's implementation of previous techniques. The dataflow analysis approach, e.g. in Rosa, only considered arithmetic operations without transcendental functions. Daisy extends this support by implementing these operations in interval and affine arithmetic. The former is straight-forward, whereas for AA Daisy computes sound linear approximations of the functions, following [13] which used this approach in a dynamic analysis. Following most libraries of mathematical functions, we assume that transcendental functions are rounded correctly to one unit in the last place. Since internal computations are performed with rational types, the operations for transcendental functions are approximated with the corresponding outward or upwards rounding to ensure soundness. To support the combination of interval arithmetic and SMT, we integrate the dReal solver in Daisy, which provides support for transcendental functions. Although dReal is only \(\delta \)-complete, this does not affect Daisy's soundness as the algorithm relies on UNSAT answers, which are always sound in dReal.
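For a monotonic function such as exp, a sound interval version can be obtained by evaluating at the endpoints and widening outward by one unit in the last place; the sketch below (our illustration, assuming Python 3.9+ and that math.exp is accurate to within one ulp) shows the idea.

```python
import math

def exp_interval(lo, hi):
    """Sound enclosure of exp over [lo, hi]: since exp is increasing and
    math.exp is assumed accurate to within 1 ulp, widening each endpoint
    outward by one ulp yields a conservative bound."""
    return (math.nextafter(math.exp(lo), -math.inf),
            math.nextafter(math.exp(hi), math.inf))

print(exp_interval(0.0, 1.0))   # an interval containing [1, e]
```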
Interval subdivision can be an effective tool to reduce overapproximations in static analysis results, which is why Daisy offers it for all its analyses. Daisy subdivides every input variable range into a fixed number of subintervals (the number can be controlled by the user) and takes the cartesian product. The analysis is then performed separately for each set of subintervals. This clearly increases the running time, but is also trivially parallelizable.
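A minimal version of this subdivision scheme, with the per-box analysis passed in as a function (an illustrative sketch, not Daisy's implementation):

```python
from itertools import product

def subdivide(lo, hi, k):
    """Split [lo, hi] into k equal-width subintervals."""
    w = (hi - lo) / k
    return [(lo + i * w, lo + (i + 1) * w) for i in range(k)]

def subdivided_bound(analyze_box, ranges, k):
    """Worst-case bound over the cartesian product of subdivided inputs.
    `analyze_box` maps {name: (lo, hi)} to an error bound; the boxes are
    independent, so this loop is trivially parallelizable."""
    names = list(ranges)
    boxes = product(*(subdivide(lo, hi, k) for lo, hi in ranges.values()))
    return max(analyze_box(dict(zip(names, box))) for box in boxes)

# Dummy per-box analysis: the largest endpoint magnitude in the box.
print(subdivided_bound(
    lambda box: max(abs(v) for iv in box.values() for v in iv),
    {'x': (-1.0, 1.0), 'y': (0.0, 2.0)}, 4))   # 2.0
```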
Daisy also includes an initial implementation of FPTaylor's optimization-based static analysis. The major difference is that Daisy does not use a branch-and-bound algorithm for solving the optimization problem, but relies on the already existing range analyses. We would like to include a proper optimization solver in the future; so far, the solvers' custom interfaces have been an obstacle.
5 Experimental Evaluation
We have experimentally evaluated Daisy's roundoff error analysis on a number of finite-precision verification benchmarks taken from related work [15, 16, 31, 41]. Benchmarks marked with a superscript \(^T\) contain transcendental functions. The goal of this evaluation is twofold. First, Daisy should be able to compute reasonably tight error bounds in a reasonable amount of time to be useful. Secondly, exploiting the fact that Daisy implements several different analysis methods within a single tool allows us to provide a direct comparison of their tradeoffs.
We compare Daisy with FPTaylor, which has been shown previously to provide tight error bounds [41]. It furthermore implements the optimization-based approach, which we re-implement in Daisy (albeit in a preliminary version). We do not compare against tools which employ dataflow static analysis, as Daisy's analyses essentially subsume those.
Table 1. Roundoff errors for uniform 64-bit double precision by dynamic analysis, FPTaylor and Daisy (subset of benchmarks). Daisy settings compared: IA - AA, Z3 - AA, Z3 - IA, dReal - AA, AA-AA+sub, opt - Z3 and Z3-AA+rw; benchmarks: bspline0, himmilbeau, invertedPendulum, kepler0, rigidBody1, sineOrder3, sqroot, train4 out1, train4 state9, turbine1, pendulum1\(^T\), analysis1\(^T\), logExp\(^T\) and sphere\(^T\). (The numeric entries of this table were not recovered.)
Table 2. Execution times of FPTaylor and Daisy for different settings (among them AA-AA with subdivision) on benchmark groups including bspline, traincar and transcendental. (Most entries were not recovered; surviving values range from about 2 s to 19 s.)
Comparison with FPTaylor. We first compare roundoff errors computed by Daisy with different methods against errors computed by FPTaylor (version from 20 Sept 2017) in Table 1. All errors are computed for uniform double floating-point precision, assuming roundoff errors on inputs. We abbreviate the settings used in Daisy by e.g. IA - AA, where IA and AA specify the methods used for computing the ranges and errors, respectively. 'sub' means subdivision, 'rw' rewriting and 'opt' denotes the optimization-based approach. We underline the lowest roundoff errors computed among the different Daisy settings (without rewriting). The column marked '%' denotes the factor by which the lowest error computed by Daisy differs from FPTaylor's computed error.
FPTaylor supports different backend solvers; we have performed experiments with the internal branch-and-bound and the Gelpia solver, but observed only minor differences. We thus report results for the Gelpia solver. We furthermore chose the lowest verbosity level in both FPTaylor and Daisy to reduce the execution time. Table 1 also shows an underapproximation of roundoff errors computed using Daisy's dynamic analysis which provides an idea of the tightness of roundoff errors.
Table 2 shows the corresponding execution times of the tools. Execution times are average real time measured by the bash time command. We have performed all experiments on a Linux desktop computer with an Intel Xeon 3.30 GHz processor and 32 GB RAM, with Scala version 2.11.11.
The focus when implementing Daisy was to provide a solid framework with modular and clear code, not to improve roundoff error bounds. Nonetheless, Daisy's roundoff error bounds are mostly competitive with FPTaylor's, with the notable exception of the jetEngine benchmark (additionally, interval arithmetic fails to bound the divisor away from zero).
Overall we observe that using an SMT solver for tightening ranges is helpful, but interval subdivision is preferable. Furthermore, using affine arithmetic for bounding errors is preferable over interval arithmetic. Finally, rewriting can often improve roundoff error bounds significantly.
Our optimization-based analysis is not yet quite as good as FPTaylor's, but acceptable for a first re-implementation. We suspect the difference is mainly due to the fact that Daisy does not use a dedicated optimization procedure, which we hope to include in the future.
Execution times of FPTaylor and Daisy are comparable. It should be noted that the times are end-to-end, and in particular for Daisy this includes the Scala compiler frontend, which takes a constant 1.3 s (irrespective of input). Clearly, with a hand-written parser this could be improved, but we do not consider this as critical. Furthermore, Daisy performs overflow checks at every intermediate subexpression; it is unclear whether FPTaylor does this as well.
Table 1 seems to suggest that one should use FPTaylor's optimization-based approach for bounding roundoff errors. We include dataflow analysis in Daisy nonetheless, for several reasons. First, dataflow analysis performs overflow checks at no extra cost. Secondly, the optimization-based approach is only applicable when errors can be specified as relative errors; this is not the case, for instance, for fixed-point arithmetic, which is important for many embedded applications.
Fixed-Point vs Floating-Point. In Table 3 we use Daisy to compare roundoff errors for 32-bit fixed-point and 32-bit floating-point arithmetic, with and without rewriting. For this comparison, we use the dataflow analysis, as the optimization-based approach is not applicable to fixed-point arithmetic. Not surprisingly, the results confirm that (at least for our examples with limited ranges) fixed-point arithmetic can provide better accuracy for the same bitlength, and furthermore that rewriting can improve the error bounds further.
Table 3. Roundoff errors for 32-bit floating-point (float 32) and fixed-point (fixed 32) arithmetic, computed with Z3 - AA, with and without rewriting, for benchmarks including invertedPendulum. (The numeric entries of this table were not recovered.)
We have already mentioned the directly related techniques and tools Gappa, Fluctuat, Rosa, FPTaylor, Real2Float and PRECiSA throughout the paper. Except for Fluctuat and Rosa, these tools also provide either a proof script or a certificate for the correctness (of certain parts) of the analysis, which can be independently checked in a theorem prover. Certificate generation and checking for Daisy has been described in a separate paper [4].
Daisy currently handles straight-line arithmetic expressions, i.e. it does not handle conditionals and loops. Abstract interpretation of floating-point programs handles conditionals by joins, however, for roundoff error analysis this approach is not sufficient. The real-valued and finite-precision computations can diverge and a simple join does not capture this 'discontinuity error'. Programs with loops are challenging, because roundoff errors in general grow with each loop iteration and thus a nontrivial fixpoint does not exist in general (loop unrolling can however be applied). Widening operators compute non-trivial bounds only for very special cases where roundoff errors decrease with each loop iteration. These challenges have been (partially) addressed [16, 23], and we plan to include those techniques in Daisy in the future. Nonetheless, conditionals and loops remain open problems.
Sound techniques have also been applied to both the range and the error analysis for bitwidth optimization of fixed-point arithmetic, for instance in [28, 29, 36, 38]; Lee et al. [29] provide a nice overview of static and dynamic techniques.
Dynamic analysis can be used to find inputs which cause large roundoff errors, e.g. by running a higher-precision floating-point program alongside the original one [5] or by using a guided search to find inputs which maximize errors [11]. In comparison, Daisy's dynamic analysis is a straightforward approach, and more advanced techniques could be integrated as well.
Optimization techniques targeting accuracy of floating-point computations, like rewriting [37] or mixed-precision tuning [10] include some form of roundoff error analysis, and any of the above approaches, including Daisy's, can be potentially used as a building block.
More broadly related are abstract interpretation-based static analyses, which are sound w.r.t. floating-point arithmetic [6, 9, 27]. These techniques can prove the absence of runtime errors, such as division-by-zero, but cannot quantify roundoff errors. Floating-point arithmetic has also been formalized in theorem provers and entire numerical programs have been proven correct and accurate within these [7, 39]. Most of these formal verification efforts are, however, to a large part manual. Floating-point arithmetic has also been formalized in an SMT-lib [40] theory and SMT solvers exist which include floating-point decision procedures [8, 19]. These are, however, not suitable for roundoff error quantification, as a combination with the theory of reals would be necessary which does not exist today.
We have presented the framework Daisy which integrates several state-of-the-art techniques for the analysis and optimization of finite-precision programs. It is actively being developed, improved and extended and we believe that it can serve as a useful building block in future optimization techniques.
We realize a preference for prefix or infix notation is personal.
The magnitude of roundoff errors depends on the magnitude of all intermediate expressions; in general, with unbounded ranges, roundoff errors are also unbounded.
1. Anta, A., Majumdar, R., Saha, I., Tabuada, P.: Automatic verification of control system implementations. In: EMSOFT (2010)
2. Bailey, D.H., Hida, Y., Li, X.S., Thompson, B.: C++/Fortran-90 double-double and quad-double package. Technical report (2015)
3. Barrett, C., Fontaine, P., Tinelli, C.: The SMT-LIB Standard: Version 2.6. Technical report, University of Iowa (2017). www.SMT-LIB.org
4. Becker, H., Darulova, E., Myreen, M.O.: A verified certificate checker for floating-point error bounds. Technical report (2017). arXiv:1707.02115
5. Benz, F., Hildebrandt, A., Hack, S.: A dynamic program analysis to find floating-point accuracy problems. In: PLDI (2012)
6. Blanchet, B., Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.: A static analyzer for large safety-critical software. In: PLDI (2003)
7. Boldo, S., Clément, F., Filliâtre, J.-C., Mayero, M., Melquiond, G., Weis, P.: Wave equation numerical resolution: a comprehensive mechanized proof of a C program. J. Autom. Reason. 50(4), 423–456 (2013)
8. Brain, M., D'Silva, V., Griggio, A., Haller, L., Kroening, D.: Deciding floating-point logic with abstract conflict driven clause learning. Form. Methods Syst. Des. 45(2), 213–245 (2013)
9. Chen, L., Miné, A., Cousot, P.: A sound floating-point polyhedra abstract domain. In: Ramalingam, G. (ed.) APLAS 2008. LNCS, vol. 5356, pp. 3–18. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89330-1_2
10. Chiang, W.-F., Gopalakrishnan, G., Rakamaric, Z., Briggs, I., Baranowski, M.S., Solovyev, A.: Rigorous floating-point mixed precision tuning. In: POPL (2017)
11. Chiang, W.-F., Gopalakrishnan, G., Rakamaric, Z., Solovyev, A.: Efficient search for inputs causing high floating-point errors. In: PPoPP (2014)
12. Damouche, N., Martel, M., Panchekha, P., Qiu, C., Sanchez-Stern, A., Tatlock, Z.: Toward a standard benchmark format and suite for floating-point analysis. In: Bogomolov, S., Martel, M., Prabhakar, P. (eds.) NSV 2016. LNCS, vol. 10152, pp. 63–77. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-54292-8_6
13. Darulova, E., Kuncak, V.: Trustworthy numerical computation in Scala. In: OOPSLA (2011)
14. Darulova, E., Kuncak, V.: Sound compilation of reals. In: POPL (2014)
15. Darulova, E., Kuncak, V., Majumdar, R., Saha, I.: Synthesis of fixed-point programs. In: EMSOFT (2013)
16. Darulova, E., Sharma, S., Horn, E.: Sound mixed-precision optimization with rewriting. Technical report (2017). arXiv:1707.02118
17. Daumas, M., Melquiond, G.: Certification of bounds on expressions involving rounded operators. ACM Trans. Math. Softw. 37(1), 2:1–2:20 (2010)
18. de Figueiredo, L.H., Stolfi, J.: Affine arithmetic: concepts and applications. Numer. Algorithms 37(1), 147–158 (2004)
19. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24
20. Fousse, L., Hanrot, G., Lefèvre, V., Pélissier, P., Zimmermann, P.: MPFR: a multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 33(2) (2007)
21. Gao, S., Kong, S., Clarke, E.M.: dReal: an SMT solver for nonlinear theories over the reals. In: Bonacina, M.P. (ed.) CADE 2013. LNCS (LNAI), vol. 7898, pp. 208–214. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38574-2_14
22. Goubault, E., Putot, S.: Static analysis of finite precision computations. In: Jhala, R., Schmidt, D. (eds.) VMCAI 2011. LNCS, vol. 6538, pp. 232–247. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-18275-4_17
23. Goubault, E., Putot, S.: Robustness analysis of finite precision implementations. In: Shan, C. (ed.) APLAS 2013. LNCS, vol. 8301, pp. 50–57. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-03542-0_4
24. IEEE Computer Society: IEEE Standard for Floating-Point Arithmetic. IEEE Std 754-2008 (2008)
25. ISO/IEC: Programming languages — C — Extensions to support embedded processors. Technical report ISO/IEC TR 18037 (2008)
26. Izycheva, A., Darulova, E.: On sound relative error bounds for floating-point arithmetic. In: FMCAD (2017)
27. Jeannet, B., Miné, A.: Apron: a library of numerical abstract domains for static analysis. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 661–667. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02658-4_52
28. Kinsman, A.B., Nicolici, N.: Finite precision bit-width allocation using SAT-modulo theory. In: DATE (2009)
29. Lee, D.U., Gaffar, A.A., Cheung, R.C.C., Mencer, O., Luk, W., Constantinides, G.A.: Accuracy-guaranteed bit-width optimization. Trans. Comp.-Aided Des. Integ. Cir. Sys. 25(10), 1990–2000 (2006)
30. Lightbend: sbt — The interactive build tool (2017). http://www.scala-sbt.org/
31. Magron, V., Constantinides, G., Donaldson, A.: Certified roundoff error bounds using semidefinite programming. ACM Trans. Math. Softw. 43(4) (2017)
32. Majumdar, R., Saha, I., Zamani, M.: Synthesis of minimal-error control software. In: EMSOFT (2012)
33. Moore, R.E.: Interval Analysis. Prentice-Hall, Englewood Cliffs (1966)
34. Moscato, M., Titolo, L., Dutle, A., Muñoz, C.A.: Automatic estimation of verified floating-point round-off errors via static analysis. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10488, pp. 213–229. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66266-4_14
35. Odersky, M., Spoon, L., Venners, B.: Programming in Scala: A Comprehensive Step-by-Step Guide. Artima Incorporation (2008)
36. Osborne, W.G., Cheung, R.C.C., Coutinho, J., Luk, W., Mencer, O.: Automatic accuracy-guaranteed bit-width optimization for fixed and floating-point systems. In: Field Programmable Logic and Applications, pp. 617–620 (2007)
37. Panchekha, P., Sanchez-Stern, A., Wilcox, J.R., Tatlock, Z.: Automatically improving accuracy for floating point expressions. In: PLDI (2015)
38. Pang, Y., Radecka, K., Zilic, Z.: An efficient hybrid engine to perform range analysis and allocate integer bit-widths for arithmetic circuits. In: ASPDAC (2011)
39. Ramananandro, T., Mountcastle, P., Meister, B., Lethin, R.: A unified Coq framework for verifying C programs with floating-point computations. In: CPP (2016)
40. Rümmer, P., Wahl, T.: An SMT-LIB theory of binary floating-point arithmetic. In: SMT (2010)
41. Solovyev, A., Jacobsen, C., Rakamarić, Z., Gopalakrishnan, G.: Rigorous estimation of floating-point round-off errors with symbolic Taylor expansions. In: Bjørner, N., de Boer, F. (eds.) FM 2015. LNCS, vol. 9109, pp. 532–550. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19249-9_33
Darulova E., Izycheva A., Nasir F., Ritter F., Becker H., Bastian R. (2018) Daisy - Framework for Analysis and Optimization of Numerical Programs (Tool Paper). In: Beyer D., Huisman M. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2018. Lecture Notes in Computer Science, vol 10805. Springer, Cham. https://doi.org/10.1007/978-3-319-89960-2_15
Technical advance
Advantages of the net benefit regression framework for trial-based economic evaluations of cancer treatments: an example from the Canadian Cancer Trials Group CO.17 trial
Jeffrey S. Hoch ORCID: orcid.org/0000-0002-4880-42811,
Annette Hay1,
Wanrudee Isaranuwatchai1,
Kednapa Thavorn1,
Natasha B. Leighl1,
Dongsheng Tu1,
Logan Trenaman1,
Carolyn S. Dewa1,
Chris O'Callaghan1,
Joseph Pater1,
Derek Jonker1,
Bingshu E. Chen1 &
Nicole Mittmann1
Economic evaluations commonly accompany trials of new treatments or interventions; however, regression methods and their corresponding advantages for the analysis of cost-effectiveness data are not widely appreciated.
To illustrate regression-based economic evaluation, we review a cost-effectiveness analysis conducted by the Canadian Cancer Trials Group's Committee on Economic Analysis and implement net benefit regression.
Net benefit regression offers a simple option for cost-effectiveness analyses of person-level data. By placing economic evaluation in a regression framework, regression-based techniques can facilitate the analysis and provide simple solutions to commonly encountered challenges (e.g., the need to adjust for potential confounders, identify key patient subgroups, and/or summarize "challenging" findings, like when a more effective regimen has the potential to be cost-saving).
Economic evaluations of patient-level data (e.g., from a clinical trial) can use net benefit regression to facilitate analysis and enhance results.
We must deal with the escalating price of cancer therapy now… We cannot ignore the cumulative costs of the tests and treatments we recommend and prescribe. As the agents of change, professional societies, including their academic and practicing oncologist members, must lead the way. The time to start is now [1].
Cancer is a costly disease; there are huge costs physically, mentally and financially. A major component of many treatment regimens is pharmaceuticals. Fiscal toxicity of cancer treatment is not unique to patients and their families; healthcare payers also experience financial distress. Without the resources to pay for all treatments for all diseases for all patients, most healthcare payers have embraced an evidence informed decision-making process involving recommendation committees. Frequently, these recommendation committees embrace other types of evidence in addition to clinical evidence. For example, in Canada, the pan-Canadian Oncology Drug Review (pCODR), a national recommendation committee for oncology drugs, uses a deliberative framework that includes clinical evidence, patient values, system feasibility as well as economic evidence [2]. In the United States, the Institute for Clinical and Economic Review considers both net clinical benefit as well as value (i.e., cost-effectiveness and budget impact). Usually, the economic evidence used by recommendation committees is in the form of a cost-effectiveness model with inputs, based in part, on patient-level trial data.
In advance of formal drug reimbursement dossier submissions, trial data are often presented at national conferences and published in scientific journals, providing an initial (and often impactful) preview of the clinical and economic evidence. Thus, cost-effectiveness analyses based entirely on patient-level trial data have the potential to play a major role in influencing clinicians' and decision makers' perceptions of whether a drug provides value (e.g., is economically attractive). The analysis of a cost-effectiveness dataset provides insight into the value of the clinical benefit over the same time horizon as the clinical study. In this way, the extra costs of the extra patient benefits accruing in the trial can be appreciated concurrently. However, there are some challenges that attend the analysis of patient-level cost-effectiveness data. For example, in cancer studies, these can involve the need to i) adjust for potential confounders, ii) identify key patient subgroups (e.g., with biomarkers), and iii) summarize the economic evidence when there is a negative cost-effectiveness ratio (e.g., when more effective treatment regimens are also potentially cost saving).
This article illustrates a regression-based method for analyzing patient-level cost-effectiveness data called net benefit regression. It has a variety of benefits that address shortcomings in conventional cost-effectiveness analysis methods. These benefits are illustrated using the Canadian Cancer Trials Group CO.17 study showing that patients with advanced colorectal cancer had improved overall survival and greater costs when cetuximab, an epidermal growth factor receptor-targeting antibody, was given in addition to best supportive care. Although the concepts of net benefit and net benefit regression have been applied in other healthcare areas, their application in oncology has not been widespread [3,4,5]. It is the goal of this article to clarify how to use and interpret the net benefit regression method, so that more authors and readers can appreciate what it offers.
Case study description
Mittmann and colleagues [6] conducted an economic evaluation of cetuximab plus best supportive care versus best supportive care alone in unselected advanced colorectal cancer patients. The initial clinical trial was conducted by the Canadian Cancer Trials Group as a multicenter, open-label, randomized phase III trial of cetuximab plus best supportive care versus best supportive care alone in patients with chemotherapy-refractory metastatic EGFR-positive colorectal cancer (ClinicalTrials.gov number NCT00079066). Survival times for the entire study population and for patients whose tumors harbored wild-type KRAS were calculated over an 18- to 19-month period [6], and the trial (hereafter referred to as CO.17) found a statistically significant overall survival advantage for cetuximab with a 1.5 month difference in median survival for cetuximab versus best supportive care [7]. In patients with wild-type KRAS tumors, there was a larger survival advantage (i.e., 4.7 months additional median survival for cetuximab) [8].
Mittmann and colleagues conducted a cost-effectiveness analysis using prospectively collected cost and quality-adjusted life year (QALY) data for patients in CO.17 [6]. For patients in the trial, cetuximab showed unattractively high incremental cost-effectiveness ratios. The incremental cost-effectiveness ratios (ICERs) were more favorable for patients whose tumors harbored wild-type KRAS but were still more than $186,000 per quality-adjusted life-year gained. Since there is no universally agreed upon cost-effectiveness threshold or willingness to pay (WTP) value, jurisdictions often adopt fuzzy thresholds that are guided by several factors [9,10,11]. Nevertheless, the likelihood of a positive funding recommendation appears inversely related to the incremental cost-effectiveness ratio (i.e., higher ICERs have a lower probability of being funded) [12, 13]. This suggests that cost-effectiveness methods that explicitly allow the WTP threshold to vary may be helpful.
In the following section, we describe net benefit regression before applying the technique to analyze the cost-effectiveness data for patients in the CO.17 study.
Net benefit regression framework
We briefly review below the key components of net benefit regression and offer additional references for the interested reader [14,15,16]. With the net benefit regression approach, analysts can use regression-based techniques to analyze cost-effectiveness data; some advantages of the net benefit regression approach include facilitating solutions to challenging statistical situations (e.g., negative cost-effectiveness ratios or when Fieller's theorem will not yield a confidence interval) [14]. The net benefit regression framework was proposed a decade ago to marry regression and cost-effectiveness methods [17]. At that time, the conventional statistic reported in most cost-effectiveness studies was the ICER.
Building from the ICER
Mathematically, the ICER estimate is defined as Extra Cost ÷ Extra Effect, where Extra Cost is defined as ΔC = Expected Cost with New Treatment - Expected Cost with Usual Care and Extra Effect is defined as ΔE = Expected Effect with New Treatment - Expected Effect with Usual Care. With a cost-effectiveness dataset, it is common to use the Average Cost and Average Effect to represent Expected values. The ICER is troublesome to estimate because it is a ratio; however, its parts—the numerator and denominator—can be estimated easily by regression.
If one defines a binary treatment indicator variable as TX = 1 for a study participant receiving the new treatment, and TX = 0 for a study participant receiving usual care, then one can use ordinary least squares (OLS) to estimate linear regressions for cost (c_i) and effect (e_i). By adding an interaction term (say, between the KRAS status and TX indicator variables), it is possible to explore hypothesis-generating questions about subgroups for whom the new intervention may be more (or less) cost-effective. For example, is a drug more cost-effective for patients with wild-type KRAS tumors?
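The following Python sketch shows this setup on synthetic data (not the CO.17 dataset; statsmodels is one of several libraries that could be used): OLS with a treatment indicator estimates ΔC and ΔE, and a KRAS × treatment interaction probes the subgroup question.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only -- these are NOT the CO.17 data.
rng = np.random.default_rng(0)
n = 400
tx = rng.integers(0, 2, n)            # 1 = new treatment, 0 = usual care
kras_wt = rng.integers(0, 2, n)       # 1 = wild-type KRAS tumor
cost = 20_000 + 25_000 * tx + rng.normal(0, 5_000, n)
qaly = 0.30 + 0.18 * tx * kras_wt + rng.normal(0, 0.05, n)

X = sm.add_constant(np.column_stack([tx, kras_wt, tx * kras_wt]))
print(sm.OLS(cost, X).fit().params)   # coefficient on tx estimates delta-C
print(sm.OLS(qaly, X).fit().params)   # tx and tx*kras_wt terms give delta-E
```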
Willingness to pay (WTP)
When a new treatment costs more (ΔC > 0) and is more effective (ΔE > 0), the ICER > 0. For decisions, an ICER must be compared with a WTP threshold value. Unfortunately, a decision maker's WTP is unknown, so methods that treat WTP as unknown are best (e.g., varying WTP and exploring how a recommendation based on the estimated ICER may change). Net benefit regression addresses the unknown nature of the "correct" WTP value within the incremental net benefit.
Incremental net benefit regression
By computing each patient's net benefit (NB) as WTP × e_i − c_i and using it as the dependent variable, one can run a simple or a multiple linear regression of the form

$$ \mathrm{NB} = b_0 + b_{TX}\,TX + \varepsilon_{NB} $$

or

$$ \mathrm{NB} = b_0 + b_{TX}\,TX + b_1 X_1 + \cdots + b_p X_p + \varepsilon_{NB}, $$

respectively.
If b_TX > 0, the new treatment is cost-effective, since b_TX equals the incremental net benefit (INB); the INB conveys by how much the value of the extra effect outweighs the extra cost (i.e., INB = WTP × ΔE − ΔC) [17]. Another way to view the INB is as the difference in the average net benefits between the new treatment and usual care: the new treatment is more cost-effective if it has higher net benefits than usual care. The linearity of the dependent variable NB means the estimate of b_TX = WTP × ΔE − ΔC. While the 95% confidence interval (CI) for the ICER cannot be constructed from the separate CIs for the estimates of ΔC and ΔE (because this process ignores the correlation between the cost and effect data) [18], the 95% CI for b_TX is the 95% CI for the INB. If there is concern about using a parametric method for the 95% CI, one can use a non-parametric method like bootstrapping [19].
By estimating net benefit regression equations with various WTP values, one can gauge the sensitivity of cost-effectiveness findings in relation to WTP assumptions. One WTP value that should always be checked in a net benefit regression is WTP = $ΔC/ΔE, since this should yield an INB estimate of zero (i.e., b_TX = 0). By setting WTP = $0, the INB should become −1 × ΔC. One can characterize uncertainty using CIs or p-values to create cost-effectiveness acceptability curves (e.g., see [20] for a step-by-step tutorial on using p-values this way). Because the INB and the ICER are related through WTP, both their estimates and uncertainty are closely connected. A graph of INB by WTP has a y-intercept equal to −ΔC, a slope of ΔE and an x-intercept of the ICER. Adding the 95% CIs for the INB to the graph illustrates, at their x-intercepts, the lower and upper 95% confidence limits (from Fieller's theorem) for the ICER (see the Results section for examples). We illustrate these points next using net benefit regression results; a code sketch of the approach follows.
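A self-contained sketch of net benefit regression (again on synthetic data, not CO.17): for each WTP value on a grid, the per-patient net benefit is regressed on the treatment indicator, and a simple bootstrap gives one point of the cost-effectiveness acceptability curve.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration only -- not the CO.17 data.
rng = np.random.default_rng(1)
n = 400
tx = rng.integers(0, 2, n)
cost = 20_000 + 22_000 * tx + rng.normal(0, 5_000, n)
qaly = 0.30 + 0.08 * tx + rng.normal(0, 0.05, n)

def inb_estimate(wtp, c, e, t):
    """INB = coefficient on the treatment indicator in a NB regression."""
    nb = wtp * e - c                          # per-patient net benefit
    return sm.OLS(nb, sm.add_constant(t)).fit().params[1]

for wtp in (0, 100_000, 300_000, 500_000):
    inb = inb_estimate(wtp, cost, qaly, tx)
    boot = [inb_estimate(wtp, cost[i], qaly[i], tx[i])
            for i in (rng.integers(0, n, n) for _ in range(500))]
    p_ce = float(np.mean(np.array(boot) > 0))  # one CEAC point
    print(f"WTP={wtp:>7,}  INB={inb:>10,.0f}  P(cost-effective)={p_ce:.2f}")
```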
Table 1 reports the results of simple linear regressions with dependent variables Effect, Cost and NB regressed on the cetuximab treatment indicator (i.e., the TX variable in the Methods section). The estimates in Table 1 represent ΔE, ΔC and ΔNB (i.e., INB), respectively. Results when WTP = $0 are reported in the NB ($0) column; results for WTP = $500,000 are reported in the NB ($500 k) column. In this economic analysis, cetuximab showed an extra cost of $22,210 and an extra effect of 0.0771 QALYs (see the row labeled "ALL" in Table 1) when compared with best supportive care for all patients in the CO.17 trial. This corresponds to an ICER over $288,000 per QALY (i.e., 22,210/0.0771), not generally considered economically attractive. However, the results differ by KRAS status. While the extra cost estimate appears larger for patients with wild-type KRAS tumors (ΔC = $30,843, p-value < 0.001) than for patients whose tumors do not express wild-type KRAS (ΔC = $13,787, p-value < 0.001), the extra effect estimates tell a much different story. Cetuximab appears more effective than best supportive care for patients with wild-type KRAS tumors (ΔE = 0.1769 QALYs, p-value < 0.001) but less effective than best supportive care for patients whose tumors do not express wild-type KRAS (ΔE = − 0.0172 QALYs, p-value > 0.40).
Table 1 Simple linear regressions producing estimates of incremental values (i.e., Δ's)
Table 2 shows the estimates for multiple linear regression; these results further support analyzing the data stratified by KRAS status. The coefficient on the cetuximab treatment indicator, which represents INB for patients with mutant status, is negative for WTP values from $0 to $500,000. However, the interaction term between cetuximab treatment and wild-type KRAS status, which represents difference of INBs between patients with KRAS wild-type and mutant statuses, switches from a negative value (− 738) to a positive value (14,762) as WTP increases from $100,000 to $200,000. This coincides with the INB estimate transitioning from a negative value (− 14,637) to a positive value (1149) over the same WTP range for patients whose tumors express wild-type KRAS (KRAS-WT). For the lowest value of WTP ($0), the interaction term between the cetuximab treatment and KRAS-WT indicator variables is statistically significantly negative (− 16,238, p-value < 0.001). This implies that ΔC is significantly higher for patients with wild-type KRAS than those with mutant KRAS. Conversely, for the highest value of WTP ($500,000), the interaction term is statistically significantly positive (61,262, p-value < 0.05); this suggests INB with this WTP is significantly higher for patients with wild-type KRAS than those with mutant KRAS.
Table 2 Multiple linear regression with Willingness to Pay (WTP) ranging from $0 to $500,000
Figure 1 plots the incremental net benefit estimate (as a solid line) and the pointwise 95% CIs (as dashed lines) in relation to WTP values varying from $0 to $500,000. Vertical values greater than zero indicate when INB is positive and cetuximab is cost-effective. The graphs for the overall sample and the KRAS-WT sub-group (the top and middle graphs in Fig. 1) show a positively sloped INB line that intersects the horizontal axis; for the overall sample this occurs near the WTP value of $300,000 and near $200,000 for the KRAS-WT subgroup. For the patients whose tumors do not express wild-type KRAS (i.e., the KRAS-MUT group), the negatively sloped INB line does not intersect any positive WTP value (on the horizontal axis). Figure 2 communicates the probability that cetuximab is cost-effective as WTP varies. There are three curves: one for KRAS-WT patients (upper solid line), one for all patients (middle dashed line) and one for KRAS-MUT patients (lower hashed line).
Fig. 1. Incremental net benefit estimate for all patients (upper graph, dashed line), KRAS-WT (middle graph, solid line), and KRAS-MUT (lower graph, hashed line) and 95% confidence intervals (dashed lines)
Fig. 2. Probability that new treatment is cost-effective for KRAS-WT (upper solid line), all patients (middle dashed line) and KRAS-MUT (lower hashed line) by Willingness to Pay threshold values
Typically, ICERs are the metrics reported in economic evaluations; however, in this case study, the ICER for the patients whose tumors do not express wild-type KRAS (KRAS-MUT) is negative. Based on expert recommendations, this means the ICER should not be calculated [21]. This makes it challenging to report the conventional cost-effectiveness statistic (which is negative in this case) and to report its 95% CI (where at least one limit will be negative as well). In contrast, Table 1's negative INB estimates, reported for all WTP values, indicate that cetuximab for KRAS-MUT patients is not economically attractive (at least for WTP values from $0 to $500,000). For KRAS-WT patients, the INB estimate becomes positive (switching from − 13,154 to 4536) as WTP increases from $100,000 to $200,000. This indicates that the ICER falls within this range (ΔC/ΔE = 30,843/0.1769 ≈ 174,350 per QALY). The overall sample demonstrates a similar pattern, switching from − 6805 to 898 as the WTP increases from $200,000 to $300,000, due to the overall ICER being ΔC/ΔE = 22,210/0.0771 ≈ $288,000 per QALY. As noted earlier, when WTP = $0, the INB estimate reduces to −ΔC; this explains the similarity between the coefficients in the Cost column and those in the NB($0) column in Table 1.
As noted earlier, the findings in Table 2 support stratifying the analysis by KRAS status. Either simple linear or multiple linear regression can be run separately stratifying on a patient's tumor's KRAS status. In this case study, we simplified matters by focusing on simple linear regressions (except for Table 2). The findings of the simple linear regression models were similar to those of the multiple linear regression models for small WTP values; they diverged more for larger WTP values. This suggests that there is important variability in the patient outcome related to the independent variables; however, the variability in cost is not as strongly associated with the independent variables since adjusting for the patient covariates (i.e., all of the Xp's) does not affect the INB estimate for small WTP values. In passing, we note that investigators interested in studying a patient subgroup, defined by a continuous variable (e.g., age, disease severity, etc.), would not be able to stratify and run separate models; a model with a treatment interaction term would be better suited to exploring this type of hypothesis generating question (involving a continuous covariate).
Figure 1 demonstrates the usefulness of an INB by WTP graph. The different shapes of the curves suggest different findings. The upper graph (for the overall sample) and the middle graph (for the KRAS-WT group) show INB lines with negative y-intercepts, positive slopes and x-intercepts at WTP values of approximately $300,000 (for the overall sample) and approximately $200,000 (for the KRAS-WT group). As noted in the Methods section, an INB by WTP graph has a y-intercept equal to −ΔC, a slope of ΔE and an x-intercept of the ICER. Thus, a negative y-intercept means cetuximab is more costly, a positive slope means that cetuximab is more effective, and the WTP value at which the INB estimate line crosses the horizontal axis is the ICER. Of the three graphs in Fig. 1, the KRAS-WT group has the steepest INB estimate line; therefore, that group enjoys the largest gain from treatment (i.e., has the biggest ΔE). For the KRAS-MUT group, the negative y-intercept means cetuximab is more costly, the slightly negative slope means that cetuximab is slightly less effective than best supportive care, and the INB line would cross the horizontal axis only at a negative WTP value, indicating a negative ICER.
Figure 1 can also be used to characterize the uncertainty associated with both the INB and the ICER. For the KRAS-WT group (in the middle graph), the upper and lower 95% confidence limits for the INB intersect the horizontal axis over the illustrated WTP range of $0 to $500,000. The two intersection points mark the Fieller's theorem 95% CI for the ICER. This 95% CI corresponds very closely to the 95% CI of $130,326 to $334,940 reported in the original economic analysis. For the overall sample, Mittmann and colleagues reported a 95% CI of $187,440 to $898,201. This is congruent with the upper graph in Fig. 1; one confidence limit intersects the horizontal axis near a WTP of $200,000 and the other intersection point appears to be greater than $500,000. The lower graph (for the KRAS-MUT group) suggests a negative ICER with one 95% confidence limit that will be negative.
The cost-effectiveness acceptability curve (CEAC) in Fig. 2 combines parts of the regression results and Fig. 1 to characterize uncertainty [20]. The CEAC shows the probability that cetuximab plus best supportive care is cost-effective compared to best supportive care alone. WTP varies along the horizontal axis, reflecting its unknown nature (to the analyst). The vertical axis communicates the portion of the INB distribution that is positive (indicating the probability that cetuximab is cost-effective). The three curves—one for KRAS-WT patients (upper solid line), one for all patients (middle dashed line) and one for KRAS-MUT patients (lower hashed line)—support the general conclusions that have been offered. While the CEACs presented in Fig. 2 were made using parametric p-values from Table 1, it is possible to create them using non-parametric bootstrapping methods [20].
We conclude our discussion by reviewing some key limitations in our example involving the analysis of person-level cost-effectiveness data. The usefulness of person-level cost-effectiveness data is diminished when either a relevant outcome is not included in the original study or when the trial is too short in duration to see activity in the outcome of interest. The original clinical trial in our example used overall survival as its primary end point with secondary outcomes that included progression-free survival as well as quality adjusted life years (QALYs). Even with the strength of the trial's design, there is still the critical question of whether "enough" study participants contributed outcome data. Of the randomly assigned 572 patients, a total of 456 deaths occurred by the date of analysis. The median survival was 6.1 months in the cetuximab group and 4.6 months in the supportive-care group. The proportions of patients surviving at 6 and 12 months were 50 and 21%, respectively, in the cetuximab group and 33 and 16%, respectively, in the supportive care group.
Typically, when time-to-event data (e.g., survival) are incomplete, methods for censored data are employed. The original economic evaluation employed two methods to calculate overall survival: the restricted mean survival method (which restricts the calculation of mean survival to the longest observed survival time) and the Kaplan–Meier method (which takes censoring into account); in contrast, we used only the restricted mean survival method. Our simplifications (e.g., ordinary least squares to estimate a simple linear regression without specific methods for censoring) did not appear to make any qualitative difference in this case study; the original economic evaluation reported ICERs of $186,761 (KRAS-WT) and $299,613 (entire study population), compared to ICERs of $174,353 and $288,067 calculated using estimates from our Table 1, respectively. While our simple illustration of net benefit regression is meant to facilitate understanding, there are situations where more advanced methods for the analysis of censored cost-effectiveness data may be desired. Advanced papers by Bang and Tsiatis [22] as well as Chen et al. [23] provide excellent direction in this area. More advanced methods for simultaneous estimation of cost and effect equations are also available [24].
Finally, analysts often do not report INB results for all willingness-to-pay (WTP) values. When a reader is interested in a WTP value occurring within the range of WTP values that are used (e.g., WTP = $123,456) or a WTP value outside the range (e.g., WTP = $600,000), this may appear to be a concern. This concern can be addressed easily because the formula for INB is linear (i.e., INB = WTP × ΔE − ΔC). For each $1 change in WTP, the INB changes by ΔE. Thus, the INB when WTP = $123,456 is $23,456 × ΔE more than the INB when WTP = $100,000. Using Table 1 and KRAS-WT as an example, INB($123,456) = INB($100,000) + ($23,456 × ΔE) = −13,154 + ($23,456 × 0.1769) = −$9005. This matches the result from a direct calculation of INB($123,456) = $123,456 × ΔE − ΔC = $123,456 × 0.1769 − $30,843 ≈ −$9005. This method can also be used for WTP values outside of the WTP ranges reported. For example, using the values in Table 1 for KRAS-WT, INB($600,000) = INB($500,000) + ($100,000 × ΔE) = 57,606 + ($100,000 × 0.1769) = $75,296. A direct calculation verifies this result: INB($600,000) = $600,000 × 0.1769 − $30,843 ≈ $75,297.
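This linearity is easy to check mechanically; the snippet below reproduces the KRAS-WT arithmetic from Table 1's estimates.

```python
# INB is linear in WTP: INB(WTP) = WTP * dE - dC (KRAS-WT estimates, Table 1).
dE, dC = 0.1769, 30_843
inb = lambda wtp: wtp * dE - dC
print(round(inb(123_456)))   # about -9004, matching the ~ -$9005 above
print(round(inb(600_000)))   # 75297
```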
This article showcases the advantages of the net benefit regression framework [17]. The framework allows incremental cost and incremental effect to be estimated either separately (i.e., using cost or effect as a dependent variable) or together (i.e., using net benefit as a dependent variable). In this paper's case study, there was a straightforward application of OLS. However, more ambitious analytical strategies with more sophisticated techniques can be used (e.g., using regression diagnostics, employing interaction terms and/or using advanced methods for non-randomized data). We were able to adjust our cost-effectiveness analysis for covariates using multiple linear regression and to explore clinically relevant patient subgroups. The incremental net benefit by WTP curve illustrated both our estimate of cost-effectiveness and the associated uncertainty. The INB by WTP graph allows the cost-effectiveness results to reflect the unknown WTP's impact on policy implications. When analyzing a cost-effectiveness dataset, net benefit regression can be a useful starting point for exploring one's data and communicating a new treatment's value.
The datasets analysed during the current study are not publicly available due to the Canadian Cancer Trials Group's policy. However, the results can be verified based on this analysis and the separate publication of the original cost-effectiveness article.
CEAC: Cost-effectiveness acceptability curve
ICER: Incremental cost-effectiveness ratio
INB: Incremental net benefit
MLR: Multiple linear regression
MUT: Mutant
NB: Net benefit
OLS: Ordinary least squares
QALY: Quality-adjusted life-year
SLR: Simple linear regression
WT: Wild type
WTP: Willingness to pay
Fojo T, Grady C. How much is life worth: cetuximab, non-small cell lung cancer, and the $440 billion question. J Natl Cancer Inst. 2009;101(15):1044–8.
Hoch JS, Sabharwal M. Informing Canada's cancer drug funding decisions with scientific evidence and patient perspectives: the Pan-Canadian oncology drug review. Curr Oncol. 2013;20(2):121–4.
Thavorn K, Coyle D, Hoch JS, Vandermeer L, Mazzarello S, Wang Z, Dranitsaris G, Fergusson D, Clemons M. A cost-utility analysis of risk model-guided versus physician's choice antiemetic prophylaxis in patients receiving chemotherapy for early-stage breast cancer: a net benefit regression approach. Support Care Cancer. 2017.
Lairson DR, Dicarlo M, Deshmuk AA, Fagan HB, Sifri R, Katurakes N, Cocroft J, Sendecki J, Swan H, Vernon SW, Myers RE. Cost-effectiveness of a standard intervention versus a navigated intervention on colorectal cancer screening use in primary care. Cancer. 2014;120(7):1042–9.
Shih YC, Pan IW, Tsai YW. Information technology facilitates cost-effectiveness analysis in developing countries: an observational study of breast cancer chemotherapy in Taiwan. Pharmacoeconomics. 2009;27(11):947–61.
Mittmann N, Au HJ, Tu D, O'Callaghan CJ, Isogai PK, Karapetis CS, Zalcberg JR, Evans WK, Moore MJ, Siddiqui J, Findlay B, Colwell B, Simes J, Gibbs P, Links M, Tebbutt NC, Jonker DJ; Working Group on Economic Analysis of National Cancer Institute of Canada Clinical Trials Group; Australasian Gastrointestinal Interest Group. Prospective cost-effectiveness analysis of cetuximab in metastatic colorectal cancer: evaluation of National Cancer Institute of Canada Clinical Trials Group CO.17 trial. J Natl Cancer Inst. 2009;101(17):1182–92.
Jonker DJ, O'Callaghan CJ, Karapetis CS, Zalcberg JR, Tu D, Au HJ, Berry SR, Krahn M, Price T, Simes RJ, Tebbutt NC, van Hazel G, Wierzbicki R, Langer C, Moore MJ. Cetuximab for the treatment of colorectal cancer. N Engl J Med. 2007 Nov 15;357(20):2040–8.
Karapetis CS, Khambata-Ford S, Jonker DJ, O'Callaghan CJ, Tu D, Tebbutt NC, Simes RJ, Chalchal H, Shapiro JD, Robitaille S, Price TJ, Shepherd L, Au HJ, Langer C, Moore MJ, Zalcberg JR. K-ras mutations and benefit from cetuximab in advanced colorectal cancer. N Engl J Med. 2008;359(17):1757–65.
Rocchi A, Menon D, Verma S, Miller E. The role of economic evidence in Canadian oncology reimbursement decision-making: to lambda and beyond. Value Health. 2008;11(4):771–83.
Eichler HG, Kong SX, Gerth WC, Mavros P, Jonsson B. Use of cost-effectiveness analysis in health-care resource allocation decision-making: how are cost-effectiveness thresholds expected to emerge? Value Health. 2004;7(5):518–28.
Rawlins MD, Culyer AJ. National Institute for clinical excellence and its value judgments. BMJ. 2004;329(7459):224–7.
Dakin H, Devlin N, Feng Y, Rice N, O'Neill P, Parkin D. The influence of cost-effectiveness and other factors on nice decisions. Health Econ. 2015;24:1256–71.
Ismail Z, Peacock SJ, Kovacic L, Hoch JS. Cost-effectiveness impacts cancer care funding decisions in British Columbia, Canada: evidence from 1998 to 2008. Int J Technol Assess Health Care. 2017;5:1–6.
Hoch JS, Dewa CS. Advantages of the net benefit regression framework for economic evaluations of interventions in the workplace: a case study of the cost-effectiveness of a collaborative mental health care program for people receiving short-term disability benefits for psychiatric disorders. J Occup Environ Med. 2014;56(4):441–5.
Hoch JS, Dewa CS. Lessons from trial-based cost-effectiveness analyses of mental health interventions: why uncertainty about the outcome, estimate and willingness to pay matters. Pharmacoeconomics. 2007;25(10):807–16.
Hoch JS. Net benefit regression. In: Kattan M, editor. Encyclopedia of Medical Decision Making, vol. 2. 2009. p. 805–11.
Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ. 2002;11(5):415–30.
O'Brien BJ, Drummond MF, Labelle RJ, Willan A. In search of power and significance: issues in the design and analysis of stochastic cost-effectiveness studies in health care. Med Care. 1994;32:150–63.
Nixon RM, Wonderling D, Grieve RD. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared. Health Econ. 2010;19(3):316–33.
Hoch JS, Rockx MA, Krahn AD. Using the net benefit regression framework to construct cost-effectiveness acceptability curves: an example using data from a trial of external loop recorders versus Holter monitoring for ambulatory monitoring of "community acquired" syncope. BMC Health Serv Res. 2006;6:68.
Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Mak. 1998 Apr-Jun;18(2 Suppl):S68–80.
Bang H, Tsiatis AA. Estimating medical costs with censored data. Biometrika. 2000;87:329–43.
Chen S, Rolfes J, Zhao H. Estimation of mean health care costs and incremental cost-effectiveness ratios with possibly censored data. Stata J. 2015;15(3):698–711.
Hoch JS, Chaussé P. Econometric considerations when using the net benefit regression framework to conduct cost-effectiveness analysis. In: Baltagi B, Moscone F, editors. Health Econometrics (Contributions to Economic Analysis). Emerald Publishing; 2018.
This work was supported by funding from the Canadian Centre for Applied Research in Cancer Control (ARCC). ARCC is funded by a grant from the Canadian Cancer Society Research Institute (CCSRI). The Canadian Cancer Trials Group is supported in part by the CCSRI. Funding for this trial used in the examples was provided by Eli Lilly Canada, Hoffman-La Roche Limited, Bristol-Myers Squibb, and ImClone Systems Incorporated. The authors take responsibility for all aspects of the study including design, data acquisition, analysis, interpretation, and drafting of the article. The study funders played no role in the design, analysis, or interpretation of the study.
Division of Health Policy and Management, Department of Public Health Sciences and Associate Director, Center for Healthcare Policy and Research, 2103 Stockton Blvd, Sacramento, CA, 95817, USA
Jeffrey S. Hoch, Annette Hay, Wanrudee Isaranuwatchai, Kednapa Thavorn, Natasha B. Leighl, Dongsheng Tu, Logan Trenaman, Carolyn S. Dewa, Chris O'Callaghan, Joseph Pater, Derek Jonker, Bingshu E. Chen & Nicole Mittmann
AH, NBL, DT, CO'C, JP, DJ, BEC, NM designed and conducted the randomized trial that served as the case study for this analysis. JSH, KT and WI led the net benefit regression analysis. JSH, AH, LT, CD and WI collaborated on drafting the manuscript. All authors provided feedback on the manuscript. All authors approved the manuscript for submission.
Correspondence to Jeffrey S. Hoch.
This was a reanalysis of the data from a published paper, so ethics approval was not sought by the research ethics boards of participating institutions.
The authors declare that they have no competing interests.
Hoch, J.S., Hay, A., Isaranuwatchai, W. et al. Advantages of the net benefit regression framework for trial-based economic evaluations of cancer treatments: an example from the Canadian Cancer Trials Group CO.17 trial. BMC Cancer 19, 552 (2019). https://doi.org/10.1186/s12885-019-5779-x
Received: 11 December 2018
Keywords: net benefit regression; epidemiology, prevention and public health
Optimal investment and reinsurance with premium control
Xin Jiang 1, Kam Chuen Yuen 1 and Mi Chen 2
Department of Statistics and Actuarial Science, University of Hong Kong, Pokfulam Road, Hong Kong
College of Mathematics and Informatics & FJKLMAA, Fujian Normal University, Fuzhou 350117, China
* Corresponding author: Mi Chen
Fund Project: The research of Xin Jiang and Kam Chuen Yuen was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. HKU17329216). The research of Mi Chen was supported by National Natural Science Foundation of China (Nos. 11701087 and 11701088), Natural Science Foundation of Fujian Province (Nos. 2018J05003, 2019J01673 and JAT160130), Program for Innovative Research Team in Science and Technology in Fujian Province University, and the grant "Probability and Statistics: Theory and Application (No. IRTL1704)" from Fujian Normal University
This paper studies the optimal investment and reinsurance problem for a risk model with premium control. It is assumed that the insurance safety loading and the time-varying claim arrival rate are connected through a monotone decreasing function, and that the insurance and reinsurance safety loadings have a linear relationship. Applying stochastic control theory, we are able to derive the optimal strategy that maximizes the expected exponential utility of terminal wealth. We also provide a few numerical examples to illustrate the impact of the model parameters on the optimal strategy.
Keywords: Exponential utility, investment, optimal strategy, premium control, reinsurance, safety loading.
Mathematics Subject Classification: Primary: 97M30, 93E20; Secondary: 91G80.
Citation: Xin Jiang, Kam Chuen Yuen, Mi Chen. Optimal investment and reinsurance with premium control. Journal of Industrial & Management Optimization, 2020, 16 (6) : 2781-2797. doi: 10.3934/jimo.2019080
Figures 1–5 (not reproduced here) illustrate the effects of $ \sigma^2 $ on $ p^\star_t $ and $ u^\star_t $, of $ \beta $ on $ \pi^\star_t $, of $ a $ on $ u^\star_t $, and of $ \eta_{min} $ on $ p^\star_t $.
Laurence Wong
Causal Inference in Python
Causal Inference in Python, or Causalinference in short, is a software package that implements various statistical and econometric methods used in the field variously known as Causal Inference, Program Evaluation, or Treatment Effect Analysis.
Through a series of blog posts on this page, I will illustrate the use of Causalinference, as well as provide high-level summaries of the underlying econometric theory with the non-specialist audience in mind. Source code for the package can be found at its GitHub page, and detailed documentation is available at causalinferenceinpython.org.
One way to overcome the problem of excessive extrapolation by least squares involves directly executing on the unconfoundedness assumption and nonparametrically matching subjects with similar covariate values together. As we shall see, least squares still plays an important role under this approach, but its scope is restricted to being a local one.
Recall that unconfoundedness says that conditional on \(X\), treatment assignment is as good as random. This means that conditional on \(X\), we should be able to estimate the conditional average treatment effect \(\mathrm{E}[Y(1)-Y(0)|X]\) by simply computing the difference between the average outcomes of the treated and control subjects that share similar covariate values. Once the conditional average treatment effects have been identified and estimated, we should then be able to recover the unconditional average treatment effect by aggregating them appropriately. This is the matching estimator of Abadie and Imbens (2006) in a nutshell.
More specifically, match each unit \(i\) in the sample with a unit \(m(i)\) in the opposite group, where $$m(i) = \mathrm{argmin}_{j: D_j \neq D_i} \|X_j - X_i\|.$$
Here \(\|X_j - X_i\|\) denotes some measure of distance between the covariate vectors \(X_j\) and \(X_i\). More precisely, it is defined as $$\|X_j - X_i\| = (X_j-X_i)' W (X_j-X_i).$$
By varying the positive-definite weighting matrix \(W\) we can obtain different measures of distance. One reasonable candidate for \(W\) is the inverse variance matrix \(\mathrm{diag}\{\hat{\sigma}_1^{-2}, \ldots, \hat{\sigma}_K^{-2}\}\), where \(\hat{\sigma}_k\) denotes the sample standard deviation of the \(k\)th covariate. Using this weighting matrix ensures that each covariate is put on a comparable scale before being aggregated.
Once the matching is complete, we can estimate the subject-level treatment effect by calculating the difference in observed outcomes between the subject and its matched twin. Averaging over these individual treatment effect estimates gives an estimate of the overall average treatment effect.
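To make this concrete, here is a minimal NumPy sketch of single-nearest-neighbor matching with the inverse-variance metric described above; the function and array names (matching_ate, Y, D, X) are illustrative and are not part of the Causalinference API.

import numpy as np

def matching_ate(Y, D, X):
    """Estimate the ATE by matching each unit to its nearest opposite-group unit."""
    X = np.asarray(X, dtype=float)
    W = 1.0 / X.var(axis=0)  # inverse-variance weights put covariates on a comparable scale
    treated, control = np.flatnonzero(D == 1), np.flatnonzero(D == 0)
    effects = np.empty(len(Y))
    for i in range(len(Y)):
        pool = control if D[i] == 1 else treated
        dist = ((X[pool] - X[i]) ** 2 * W).sum(axis=1)  # (X_j - X_i)' W (X_j - X_i)
        m = pool[np.argmin(dist)]  # matched twin in the opposite group
        # Subject-level effect: observed outcome minus matched counterfactual
        effects[i] = Y[i] - Y[m] if D[i] == 1 else Y[m] - Y[i]
    return effects.mean()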
In Causalinference, we can implement this matching estimator and display the results by
>>> causal.est_via_matching()
>>> print(causal.estimates)
Treatment Effect Estimates: Matching
Est. S.e. z P>|z| [95% Conf. int.]
ATE 14.245 1.038 13.728 0.000 12.211 16.278
ATC 10.288 1.815 5.669 0.000 6.731 13.845
ATT 16.796 0.940 17.866 0.000 14.953 18.638
While the basic matching estimator is theoretically sound, as we see above its actual performance seems to be lacking, as its ATE estimate of 14.245 still seems quite far from the true value of 10. One reason is that in practice, the matching of one subject to another is rarely perfect. To the extent that a matching discrepancy exists, i.e., that \(X_i\) and \(X_{m(i)}\) are not equal, the matching estimator of the subject-level treatment effect will generally be biased.
It turns out it is possible to correct for this bias. In particular, one can show that the unit-level bias for a treated unit is equal to $$\mathrm{E}[Y(0)|X=X_i] - \mathrm{E}[Y(0)|X=X_{m(i)}].$$
A popular way of adjusting for this bias is to assume a linear specification for the conditional expectation function of \(Y(0)\) given \(X\), and approximate the above term by the inner product of the matching discrepancy and slope coefficient from an ancillary regression. The same principle of course applies for control units.
Although it might seem like we are back to assuming a linear regression function as was the case with OLS, the role played by the linear approximation is quite different here. In the OLS case, we are using the linearity assumption to extrapolate globally across the covariate space. In the current scenario, however, the linear approximation is only applied locally, to matched units whose covariate values were already quite similar.
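Schematically, the correction looks like the following sketch (continuing the illustrative names from the snippet above, not the package's internals): fit an ancillary regression of the outcome on the covariates among control units, then subtract the estimated bias from the raw matched difference.

import numpy as np

def bias_adjusted_effect(Y, X, i, m, control):
    """Bias-corrected effect for treated unit i matched to control unit m."""
    Z = np.column_stack([np.ones(control.size), X[control]])
    beta = np.linalg.lstsq(Z, Y[control], rcond=None)[0][1:]  # ancillary OLS slopes
    # Raw matched difference minus the estimated bias (X_i - X_m)' beta
    return (Y[i] - Y[m]) - (X[i] - X[m]) @ beta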
To invoke bias adjustment in Causalinference, we simply supply True to the optional argument bias_adj, as follows:
>>> causal.est_via_matching(bias_adj=True)
ATE 9.624 0.245 39.354 0.000 9.145 10.103
ATC 9.642 0.270 35.776 0.000 9.114 10.170
ATT 9.606 0.318 30.159 0.000 8.981 10.230
As we can see above, the resulting ATE estimate is now much closer to the true ATE of 10.
In addition to bias adjustments, est_via_matching accepts two other optional parameters worth mentioning. The first is weights, which allows users to supply their own positive-definite weighting matrix to use for calculating distances between covariate vectors. The second is matches, which allows users to implement multiple matching by supplying an integer that is greater than 1. Setting matches=3, for instance, will result in having the three closest units matched to a given subject. In general, increasing this number introduces biases (since less ideal matches are being included), but lowers variance (as the counterfactual estimates are less dependent on any single unit). Typically it is advised that the number of matches be kept under 4, though there are no hard-and-fast rules.
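For instance, the two options can be combined in a single call (parameter names as documented above; output omitted here):

>>> causal.est_via_matching(bias_adj=True, matches=3)
>>> print(causal.estimates)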
Abadie, A. & Imbens, G. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica, 74, 235-267.
|
CommonCrawl
|
Ultrastructural plasticity in the plant-parasitic nematode, Bursaphelenchus xylophilus
Taisuke Ekino, Haru Kirino, Natsumi Kanzaki & Ryoji Shinya
Phenotypic plasticity is one of the most important strategies used by organisms with low mobility to survive in fluctuating environments. Phenotypic plasticity plays a vital role in nematodes because they have small bodies and lack wings or legs and thus, cannot move far by themselves. Bursaphelenchus xylophilus, the pathogenic nematode species that causes pine wilt disease, experiences fluctuating conditions throughout their life history; i.e., in both the phytophagous and mycetophagous phases. However, whether the functional morphology changes between the life phases of B. xylophilus remains unknown. Our study revealed differences in the ultrastructure of B. xylophilus between the two phases. Well-developed lateral alae and atrophied intestinal microvilli were observed in the phytophagous phase compared with the mycetophagous phase. The ultrastructure in the phytophagous phase was morphologically similar to that at the dauer stage, which enables the larvae to survive in harsh environments. It suggests that the living tree represents a harsh environment for B. xylophilus, and ultrastructural phenotypic plasticity is a key strategy for B. xylophilus to survive in a living tree. In addition, ultrastructural observations of obligate plant-parasitic species closely related to B. xylophilus revealed that B. xylophilus may be in the process of adapting to feed on plant cells.
Many environments are heterogeneous and change continuously. Most highly mobile organisms are able to move around to find optimal or better habitats. For example, winged insects or birds can fly away from environments that become unfavorable on a seasonal basis. Mobile animals can migrate into a new environment for feeding and/or reproduction. On the other hand, low mobility or sessile organisms cannot escape from their habitat when it becomes unfavorable and often alter their phenotype to adapt to their changing habitat. This ability of a single individual to develop more than one phenotype is termed "phenotypic plasticity." Phenotypic plasticity plays a key role in the survival and propagation of certain organisms1,2 in nature where large environmental fluctuations occur and could be one driver of evolution through the initiation of adaptive divergence, i.e., "plasticity-first" evolution3.
Phenotypic plasticity is well-studied in plants. Plants are sessile and cannot move even if the environment becomes unfavorable, making plasticity very important for their survival4; plants can recognize changes in their environment and alter their forms to adapt without moving. As an example, plant leaves are particularly plastic and exhibit great diversity in shape, size, and color in nature. Environmental factors, such as temperature, light quality and intensity, and humidity, all affect leaf morphology5,6,7. Phenotypic plasticity has also been reported in many animals. Many clones of Daphnia pulex develop thorns as antipredator devices in the presence of chemical signals from predators such as the insect Chaoborus americanus, and clones of D. pulex have been shown to develop thorns, known as neckteeth, until the end of the fourth juvenile instar, when they were exposed to chemical cues emitted from their predator, the Chaoborus flavicans larva8,9,10.
Nematoda is one of the most abundant and diverse groups of organisms on earth11,12. Nematodes have no wings or legs, and their body size is relatively small, resulting in a low dispersal capacity. Phenotypic plasticity is essential in nematodes because of their reduced mobility. The phenotypic plasticity of nematodes has been reported previously; for example, Caenorhabditis elegans enters a stress-resistant dauer stage in response to harsh environmental conditions13. Under optimum growth conditions, second-stage larvae (L2) develop into third-stage larvae, and then into adults. Under unfavorable growth conditions, L2 become dauer larvae, which enable survival and dispersal, and in this stage they are resistant to environmental stresses14.
The stomatal phenotypic plasticity of Diplogastridae nematodes has been well studied. Pristionchus pacificus has two feeding structures: a "wide-mouthed" eurystomatous morph with two large teeth and a "narrow-mouthed" stenostomatous morph with one small tooth. Eurystomatous animals can feed on other nematode species15, whereas stenostomatous animals have difficulty feeding on other nematodes, but are specialized as bacteria feeders when food bacteria are sufficient16.
Recently, stomatal phenotypic plasticity has been reported in another nematode clade, the genus Bursaphelenchus17. Most Bursaphelenchus feed on fungi, but a few species parasitize plants. Bursaphelenchus spp. commonly have a syringe-like feeding structure referred to as a "stylet" to pierce fungal and plant cells for nutrient uptake. However, Bursaphelenchus sinensis, which inhabits in dead pine trees, has a predatory form that has a stylet with a wider lumen than the mycetophagous form, and feeds on other nematodes.
The plant-parasitic nematode, Bursaphelenchus xylophilus, the causal pathogen of pine wilt disease, is thought to inhabit more fluctuating environments than B. sinensis. B. xylophilus mostly experiences two different environmental conditions, living pine trees and dead, moldy pine trees, whereas B. sinensis experiences only dead pine trees. Although no stomatal plasticity was observed in B. xylophilus17, B. xylophilus likely has environmentally sensitive traits. B. xylophilus uses epithelial cells and resin canal parenchyma cells as food sources in a living pine tree18. In this study, we refer to the phase in a living pine tree as the "phytophagous phase." In many cases, infection by B. xylophilus causes pine wilt disease and results in the death of the infected tree. Various bacteria and fungi grow on the dead pine tree, and B. xylophilus feeds on the fungi and propagates rapidly19. Herein, we refer to this phase as the "mycetophagous phase." In the fluctuating environment, B. xylophilus adapts its phenotype to survive. Thus, we compared the phenotypes between the phytophagous and mycetophagous phases, as this was the best method to understand the survival strategy of B. xylophilus in relation to its phenotypic plasticity. Tsai et al.20 showed that the expression of the collagen gene family differed significantly between these two phases, which indicated that the nematode exhibited morphological phenotypic plasticity. Although collagens are essential in structural formation and modification in nematodes21, compound microscopy revealed that the only morphological difference between these two phases was in the tail shape of adult females.
In this study, we used transmission electron microscopy (TEM) to investigate differences in the ultrastructure of the cuticle and intestines of B. xylophilus as these structures consist mainly of collagen22,23. Differences were examined between the phytophagous and mycetophagous phases, and the functional significance of any structural change is discussed. Furthermore, to understand the evolutionary adaptation of B. xylophilus to its host plant, we examined the internal ultrastructure and intestines of an obligate plant-parasitic species (phytophagous phase), Schistonchus sp., which is closely related to B. xylophilus, belongs to the family Aphelenchoididae, and evolved from soil-inhabiting mycetophagous species24. Findings were compared with those observed from the phytophagous phase of B. xylophilus.
Nematodes of B. xylophilus in the phytophagous phase were recovered from inoculated pine trees at 2, 3, and 4 weeks after inoculation. Throughout the sampling time, initial visible symptoms of disease onset were noted. The tips of the pine needles turned yellow, but the other needle parts remained green. No differences were observed in the ultrastructure of B. xylophilus between the three different time points (2, 3, and 4 weeks). Therefore, all the morphological information recorded during the phytophagous phase described hereafter was obtained from B. xylophilus recovered from inoculated pine trees at 2 weeks post-inoculation.
Observations of ultrastructure using TEM
We observed the cuticle, lateral alae, and intestinal ultrastructure of B. xylophilus grown on live pines, dead pines inoculated with fungus, and on fungal cultures.
Tsai et al. showed that the expression of the collagen gene family differed significantly between the mycetophagous and phytophagous phases20, which suggests that the cuticle, a structure consisting mainly of collagen, changes between the two phases. However, we observed no qualitative differences in the cuticular structure of B. xylophilus between sexes and phases (Fig. 1). The structure of the cuticle was morphologically the same as that previously reported25. It consisted of three parts: an epicuticle (EPI), cortical and median zones (CZ and MZ), and a basal zone (BZ). The EPI consisted of three layers: an electron-dense outermost layer (surface coat) and two inner layers. However, the triple layer was sometimes indistinct and appeared as a single or double layer. The CZ and MZ were electron-lucent and not distinguishable from each other, forming an amorphous zone. In comparison, the BZ was distinguishable from the MZ due to its radial striation. The thickness of the cuticle differed significantly between the phytophagous phase and the mycetophagous phase cultured on agar plates, in both females and males (P < 0.05).
Cuticular structure in Bursaphelenchus xylophilus adults. (A) Female in the mycetophagous phase cultured on agar; (B) female in the mycetophagous phase cultured on pine stem; (C) female in the phytophagous phase; (D) male in the mycetophagous phase cultured on agar; (E) male in the mycetophagous phase cultured on pine stem; (F) male in the phytophagous phase. EPI epicuticle, CZ & MZ cortical zone and median zone, BZ basal zone. Scale bar = 200 nm.
Lateral alae
Observing the cuticle structure, we found that the structures of the lateral alae differed markedly between the mycetophagous and phytophagous phases in males. The lateral alae were not conspicuous, i.e., were relatively flattened with a smooth surface, in the mycetophagous phase (Fig. 2D, E) in males and in both phases in females (Fig. 2A–C). On the other hand, the lateral alae were very well developed, with each band expanded with a mushroom-like outline in cross section in the phytophagous phase (Fig. 2F). To evaluate the degree of protrusion from cuticle, the protruding area of lateral alae was measured (Table 1). There were significant differences in the protrusion area between the phytophagous and mycetophagous phases in males cultured on agar plates (P < 0.05) and on pine stems (P < 0.01), whereas no qualitative or quantitative differences were observed in the lateral alae of females between phases.
Structure of the lateral alae in B. xylophilus adults. (A) Female in mycetophagous phase cultured on agar; (B) female in mycetophagous phase cultured on pine stem; (C) female in phytophagous phase; (D) male in mycetophagous phase cultured on agar; (E) male in mycetophagous phase cultured on pine stem; (F) male in phytophagous phase. Scale bar = 500 nm.
Table 1 Morphological characteristics associated with the phytophagous and mycetophagous phases of Bursaphelenchus xylophilus.
All individuals had some microvilli on the internal surface of the intestines (Fig. 3). The microvilli were connected to the basement membrane and supported by an axial core of actin filaments. However, morphological and quantitative characteristics differed between the mycetophagous and phytophagous phases. Microvilli on the internal surface of the intestines were longer and more numerous in the mycetophagous phase (cultured on either agar plates or pine stems) than in the phytophagous phase (Fig. 3).
Intestinal structure in B. xylophilus adults. (A) Female in mycetophagous phase cultured on agar; (B) female in mycetophagous phase cultured on pine stem; (C) female in phytophagous phase; (D) male in mycetophagous phase cultured on agar; (E) male in mycetophagous phase cultured on pine stem; (F) male in phytophagous phase. The white arrow indicates a microvillus. Scale bar = 500 nm.
The intestinal structure of Schistonchus sp., an obligate plant parasite, was also observed in this study for comparisons with the structure of the facultative plant parasite B. xylophilus. The intestinal structure of Schistonchus sp. was similar to that of B. xylophilus in the mycetophagous phase; i.e., the microvilli were long and numerous (Fig. 4).
Intestinal structure in Schistonchus sp. (undescribed species) female. The white arrow indicates a microvillus. Scale bar = 500 nm.
We studied the phenotypic plasticity of B. xylophilus to investigate its adaptation strategy in response to the changing environment in pine trees. We focused on its ultrastructure and made comparisons between the mycetophagous and phytophagous phases using TEM. As a result, we revealed that nematodes in the phytophagous phase had more developed lateral alae and more atrophic intestinal microvilli than those in the mycetophagous phase.
The lateral alae of nematodes in the phytophagous phase were mushroom-shaped, whereas those of nematodes in the mycetophagous phase were smooth. Shinya et al.26 observed the surface structure in B. xylophilus in the mycetophagous and phytophagous phases using scanning electron microscopy. Although they did not refer to the various forms of the lateral alae, Fig. 5 in their paper showed that nematodes in the phytophagous phase had well-developed lateral alae compared to nematodes raised on fungus. Alae are thickenings or projections of the cuticle that occur in the lateral or sublateral region. Lateral alae occur in both sexes, two per individual, and run longitudinally along the length of the nematode body27. They enable the nematode to change shape and the cuticle to flex during dorsoventral contractions. The exact function of the lateral alae remains unknown; however, their structure varies considerably, not only among different taxa but also across developmental stages within a species27. Therefore, we hypothesize that the lateral alae are one of the most functionally important structures of the nematode surface features, formed by elaborations of the cuticle27. The alae have a complex structure, which differs from that of the general body cuticle, and they may provide a degree of longitudinal stiffening. Furthermore, because nematodes lie and move on their sides28, the alae are in contact with the substrate, rather like the tread of a car tire27, where they probably assist in locomotion by increasing traction and preventing slipping, although their absence in some forms does not appear to inhibit movement. For example, the lateral alae in the dauer stage in Caenorhabditis elegans are different from that in other developmental stages and are mushroom-shaped29. Dauer larvae display a specific behavior known as "nictation," in which the nematodes lift and wave the anterior part of their bodies13, enabling them to attach to vectors such as insects30. The lateral alae are considered to play an important role in this specific behavior. The alae run longitudinally along the length of the nematode body and are also thought to support the body. It is likely that the alae enable nematodes to undertake three-dimensional activities, such as nictation as well as crawling.
Diagram of the lateral alae illustrating calculation of the degree of protrusion. The area of the lateral alae, indicated by dots, was measured with a polygon-section tool in ImageJ software version 1.52v (https://imagej.nih.gov/ij/)41. The width of the lateral alae, indicated by the black arrow, was measured with the straight tool in ImageJ software. The area indicated by asterisks was calculated from these two values.
Phytophagous individuals of B. xylophilus have well-developed and complex lateral alae. This suggests that B. xylophilus moves more actively in a living pine tree than on a dead pine tree and fungal mat. Nematodes were collected from fungal culture plates or pine trees, and phytophagous individuals exhibited three-dimensional activity in water, whereas mycetophagous individuals did not. From a pathological viewpoint, migration ability in a living tree is an indispensable trait for B. xylophilus to cause pine wilt disease31. Bursaphelenchus xylophilus nematodes migrate to the xylem tissues, spreading throughout the infected pine tree, and parenchymatous cell death and cavitation occur following the nematode migration. Finally, the infected pine tree dies due to a lack of water. Ichihara et al.31 compared the migration between virulent and avirulent strains of B. xylophilus in the tissues of living pine trees. They reported that the avirulent strain barely invaded the xylem resin canals and cortical tissue, whereas the virulent strain did. They concluded that the migration of B. xylophilus is an important factor that increases the severity of disease symptoms in infected pine trees. Considering these results, we hypothesize that well-developed lateral alae are necessary for B. xylophilus to actively move through tree tissues and cause disease.
The intestinal structure of B. xylophilus differed between the mycetophagous and the phytophagous phases. Although the number of microvilli could not be counted for technical reasons, the properties of the microvilli were observed. Both females and males had more developed microvilli in the mycetophagous phase than in the phytophagous phase. The microvilli in the mycetophagous nematodes were thicker and longer than those in the phytophagous nematodes. Generally, animals absorb nutrients through their intestines from their food. Previous reports showed that the intestinal and microvilli structure on the internal surface of the intestine changes according to nutritional conditions in many animal species, e.g., chicks32, pigs33, and pythons34. In terms of nematodes, the intestinal lumen in the dauer larvae of Caenorhabditis elegans and B. xylophilus was small, and the brush border was so compact that individual microvilli were difficult to discern25,35.
In the phytophagous phase, B. xylophilus most likely uses epithelial cells and the resin canal parenchyma cells as food sources in living trees. Given that the intestinal microvilli of B. xylophilus are shrunken in the phytophagous phase, epithelial cells of the living pine tree are not considered suitable food for B. xylophilus. This coincides with the fact that the reproductive rate of B. xylophilus on fungal mats (e.g., Botrytis cinerea) is higher than in pine seedlings19,36.
To understand the intestinal adaptation of B. xylophilus to the host plant in a phylogenetic context, we observed the intestinal ultrastructure of an undescribed Schistonchus sp. This species is an obligate plant parasite and closely related to B. xylophilus, i.e., both species belong to the family Aphelenchoididae24. Interestingly, Schistonchus sp. feeding on plant (fig) tissues had well-developed microvilli on the surface of the intestine. It was also reported that the L2 and L3 of the obligate plant-parasitic nematode Heterodera glycines had a moderate proliferation of microvilli37. These results suggest that plant-parasitic species, including Schistonchus sp., can ingest sufficient nutrients from plant tissues. Schistonchus shares an ancestor with B. xylophilus and is physiologically adapted as a plant parasite, indicating that nematodes are genetically plastic. Considering that the ancestral species of B. xylophilus is a mycetophagous species, it appears that B. xylophilus is in the process of adapting to being able to feed on plant cells.
In this study, we showed that B. xylophilus was able to alter its ultrastructure according to the environment. This is the first report of phenotypic plasticity in B. xylophilus at the ultrastructural level. The lateral alae were more developed in the phytophagous phase than in the mycetophagous phase. On the other hand, the intestine was less developed in the phytophagous phase than in the mycetophagous phase. The ultrastructure in the phytophagous phase was similar to that at the dauer stage, enabling the nematode to survive harsh environments. This suggests that ultrastructural phenotypic plasticity in B. xylophilus is a strategy for surviving harsh environments such as those encountered while actively migrating within the living pine tree instead of feeding. This rapid dispersion, regulated by the phenotypic plasticity, could cause the rapid development of symptoms observed in infected pine trees. To date, little research has focused on phenotypic plasticity in B. xylophilus20,38. However, investigating phenotypic plasticity is essential to aid our understanding of the survival strategy of B. xylophilus, and eventually, the survival strategy of Nematoda.
Nematode preparation
Mycetophagous and phytophagous phases were prepared. To understand the effects of growth medium, the mycetophagous phase was cultured on two grown media, i.e., agar plates and dead pine stems.
Mycetophagous phase cultured on agar plates
The B. xylophilus Ka4 C1 strain was cultured with Botrytis cinerea on malt extract agar plates (1.5% malt extract and 4.0% agar) at 25 °C. Nematodes were collected using the Baermann funnel technique overnight. Nematodes were rinsed three times with ion-exchange water (IEW), and females and males were randomly harvested.
Mycetophagous phase cultured on pine stems
Mycetophagous phase growing on pine twigs was prepared following Kanzaki et al.17. The B. xylophilus Ka4 C1 strain was cultured as described above. Ten stems (ca 7 mm in diam. and 4 cm long) of 3-year-old Japanese black pine (Pinus thunbergii) were autoclaved and individually transferred to 15-mL sterile plastic centrifuge tubes. B. cinerea was inoculated onto the stems. One week after inoculation, 500 nematodes were inoculated into the tubes. At 2 weeks after inoculation, the nematodes were extracted from the wood using the Baermann funnel technique overnight. The extracted nematodes were lumped together, transferred to a 15 mL sterile plastic centrifuge tube, and rinsed three times with IEW. Then, females and males were randomly harvested.
Phytophagous phase
The B. xylophilus Ka4 C1 strain was cultured as described above. Nematodes were washed three times with IEW and adjusted to 10,000 individuals per 500 µL of IEW. Two-year-old Japanese black pine trees were each inoculated with 10,000 nematodes on April 25, 2019. Four pine trees were inoculated in all. At 2, 3, and 4 weeks after inoculation, nematodes were extracted from two, one, and one pine tree, respectively, using the Baermann funnel technique overnight. The extracted nematodes were lumped together, and transferred to a separate 15-mL sterile plastic centrifuge tube for each week, rinsed three times with IEW, and females and males were randomly harvested.
Obligate plant-parasitic species closely-related to Bursaphelenchus xylophilus
Fruits of Ficus superba were collected in June 2017 from Ishigaki Island, Okinawa, Japan. A collection permit was not necessary for F. superba because it was not obtained inside a protected area. The fig fruits were identified visually and dissected under a light microscope (S8 Apo; Leica). Parasitic females of an undescribed Schistonchus sp. were identified under the light microscope (Eclipse 80i; Nikon; 200× or 400×) and used for the ultrastructural observations.
Observation of ultrastructure using TEM
Samples were prepared for TEM as described by Ekino et al.39 Adult nematodes were fixed in 1.25% glutaraldehyde and 1.5% picric acid (only 1% glutaraldehyde was used for fixing Schistonchus sp.) in 0.1 mol/L phosphate buffer (pH 7.4) for more than 24 h. Then, the heads and tails were excised from the fixed adults, and the mid-body regions were used; almost all positions were of equal thickness. The nematodes were arranged in a parallel array on a 2% agar pad prepared on a microscope slide40. Molten 2% agarose was dripped onto the pad containing three to five nematodes. After the agarose had solidified, we trimmed it into a cube and dripped molten 2% agarose to cover the surfaces of the agarose cube. It was then trimmed again to form a larger cube. Several cubes were prepared for each nematode species. The cubes were rinsed six times with phosphate buffer (10 min for each rinse) and post-fixed in 1% osmium tetroxide in IEW for 90 min. Then, the samples were dehydrated in a graded series of ethanol baths (one bath each of 50%, 70%, 80%, and 90% ethanol, and three baths of 99.5% ethanol), and cleaned three times with propylene oxide for 5 min. The samples were infiltrated overnight with a mixture of 50% Eponate resin and 50% propylene oxide. On the following night, they were infiltrated with undiluted resin. Finally, the samples were embedded in Epon resin.
We used a diamond knife fitted to an ultramicrotome to section the mid-body region of the nematodes. Sections were used only when the nematodes were cut perpendicular to the long axis of the body. The sections were collected on Formvar-coated copper grids for electron microscopy. The grid was stained with EM Stainer (Nissin EM, Tokyo, Japan) for 30 min, followed by lead citrate. Grid-mounted sections were photographed at 100 kV using a JEOL JEM-2010 electron microscope (Tokyo, Japan). One section in which the structures were observed clearly in each nematode was used for measurement. The thickness of the total cuticle and the total cross-section area were measured from these cross-sections using ImageJ Software version 1.52v (National Institutes of Health, Bethesda, Maryland, USA)41. The cuticle was measured once where the cuticle structure was observed clearly and outside the annuli. The body radius (r) was calculated from the total cross-section area, assuming a circular cross section (i.e., $r = \sqrt{A/\pi}$, where $A$ is the total cross-section area).
To evaluate the degree of protrusion of the lateral alae from the cuticle, the protrusion area (S; areas indicated by asterisks in Fig. 5) was calculated. First, the area of the lateral alae (S′; area indicated by dots in Fig. 5) was measured with a polygon-section tool in ImageJ, and their width (x; arrow in Fig. 5) was measured with the straight tool in ImageJ. S was calculated using the following formula:
$$ S = S^{\prime} - \left( {\theta r^{2} - \frac{xr\cos \theta }{2}} \right), \quad \theta = \sin^{ - 1} \frac{x}{2r}. $$
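The same geometry can be written as a short Python sketch (function and variable names are ours, for illustration; the circular-segment term uses the body radius r recovered from the cross-section area as above):

import math

def protrusion_area(S_prime, x, cross_section_area):
    """Protrusion area S of the lateral alae (cf. Fig. 5)."""
    r = math.sqrt(cross_section_area / math.pi)  # body radius, assuming a circular section
    theta = math.asin(x / (2 * r))               # half-angle subtended by the alae chord
    segment = theta * r ** 2 - x * r * math.cos(theta) / 2  # circular segment under the chord
    return S_prime - segment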
Significant differences in cuticle thickness and protrusion area of the lateral alae between the phytophagous and mycetophagous phases (cultured on agar plates and on pine stems) were identified using Welch's t-test. The significance level was adjusted using the Bonferroni method. The procedure was performed using Excel 2013 for Windows (Microsoft Corporation, Redmond, WA, USA). A value of P < 0.05 was considered to indicate statistical significance.
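A minimal sketch of this testing procedure in Python (SciPy's Welch option with a manual Bonferroni adjustment; variable names are illustrative):

from scipy import stats

def welch_bonferroni(phyto, myco_agar, myco_stem, alpha=0.05):
    comparisons = {"agar": myco_agar, "pine stem": myco_stem}
    for name, values in comparisons.items():
        t, p = stats.ttest_ind(phyto, values, equal_var=False)  # Welch's t-test
        p_adj = min(p * len(comparisons), 1.0)                  # Bonferroni correction
        print(name, "t =", round(t, 2), "adjusted p =", round(p_adj, 4),
              "significant:", p_adj < alpha)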
West-Eberhard, M. J. Phenotypic plasticity and the origins of diversity. Annu. Rev. Ecol. Syst. 20, 249–278 (1989).
Bateson, P. et al. Developmental plasticity and human health. Nature 430, 419–421 (2004).
Levis, N. A. & Pfennig, D. W. Evaluating 'plasticity-first' evolution in nature: Key criteria and empirical approaches. Trends Ecol. Evol. 31, 563–574 (2016).
Schlichting, C. D. The evolution of phenotypic plasticity in plants. Annu. Rev. Ecol. Syst. 17, 667–693 (1986).
Dengler, N. G. Comparative histological basis of sun and shade leaf dimorphism in Helianthus annuus. Can. J. Bot. 58, 717–730 (1980).
Sultan, S. E. Phenotypic plasticity for plant development, function and life history. Trends Plant Sci. 5, 537–542 (2000).
Rozendaal, D. M. A., Hurtado, V. H. & Poorter, L. Plasticity in leaf traits of 38 tropical tree species in response to light; relationships with light demand and adult stature. Funct. Ecol. 20, 207–216 (2006).
Dodson, S. I. The ecological role of chemical stimuli for the zooplankton: Predator-induced morphology in Daphnia. Oecologia 78, 361–367 (1989).
Schwartz, S. S. Predator-induced alterations in Daphnia morphology. J. Plankton Res. 13, 1151–1161 (1991).
Tollrian, R. Neckteeth formation in Daphnia pulex as an example of continuous phenotypic plasticity: Morphological effects of Chaoborus kairomone concentration and their quantification. J. Plankton Res. 15, 1309–1318 (1993).
Yeates, G. W. Soil nematodes in terrestrial ecosystems. J. Nematol. 11, 213–229 (1979).
Bongers, T. & Bongers, M. Functional diversity of nematodes. Appl. Soil Ecol. 10, 239–251 (1998).
Cassada, R. C. & Russell, R. L. The dauerlarva, a post-embryonic developmental variant of the nematode Caenorhabditis elegans. Dev. Biol. 46, 326–342 (1975).
Gutteling, E. W., Riksen, J. A. G., Bakker, J. & Kammenga, J. E. Mapping phenotypic plasticity and genotype–environment interactions affecting life-history traits in Caenorhabditis elegans. Heredity 98, 28–37 (2007).
Serobyan, V., Ragsdale, E. J. & Sommer, R. J. Adaptive value of a predatory mouth-form in a dimorphic nematode. Proc. R. Soc. B Biol. Sci. 281, 20141334 (2014).
Serobyan, V., Ragsdale, E. J., Müller, M. R. & Sommer, R. J. Feeding plasticity in the nematode Pristionchus pacificus is influenced by sex and social context and is linked to developmental speed. Evol. Dev. 15, 161–170 (2013).
Kanzaki, N., Ekino, T. & Giblin-Davis, R. M. Feeding dimorphism in a mycophagous nematode, Bursaphelenchus sinensis. Sci. Rep. 9, 13956 (2019).
Mamiya, Y. Pine wilting disease caused by the pine wood nematode, Bursaphelenchus lignicolus in Japan. Jpn. Agric. Res. Q. 10, 206–211 (1976).
Futai, K. Population dynamics of Bursaphelenchus lignicolus (Nematoda: Aphelenchoididae) and B. mucronatus in pine seedlings. Appl. Entomol. Zool. 15, 458–464 (1980).
Tsai, I. J. et al. Transcriptional and morphological changes in the transition from mycetophagous to phytophagous phase in the plant-parasitic nematode Bursaphelenchus xylophilus: How B. xylophilus adapts in the host environment. Mol. Plant Pathol. 17, 77–83 (2016).
Johnstone, I. L. Cuticle collagen genes. Trends Genet. 16, 21–27 (2000).
Cox, G. N., Kusch, M. & Edgar, R. S. Cuticle of Caenorhabditis elegans: Its isolation and partial characterization. J. Cell. Biol. 90, 7–17 (1981).
Graham, P. L. et al. Type IV collagen is detectable in most, but not all, basement membranes of Caenorhabditis elegans and assembles on tissues that do not express it. J. Cell. Biol. 137, 1171–1183 (1997).
Davies, K. A. et al. A review of the taxonomy, phylogeny, distribution and co-evolution of Schistonchus Cobb, 1927 with proposal of Ficophagus n. gen. and Martininema n. gen. (Nematoda: Aphelenchoididae). Nematology 17, 761–829 (2015).
Kondo, E. & Ishibashi, N. Ultrastructural differences between the propagative and dispersal forms in pine wood nematode, Bursaphelenchus lignicolus, with reference to the survival. Appl. Entomol. Zool. 13, 1–11 (1978).
Shinya, R., Morisaka, H., Takeuchi, Y., Ueda, M. & Futai, K. Comparison of the surface coat proteins of the pine wood nematode appeared during host pine infection and in vitro culture by a proteomic approach. Phytopathology 100, 1289–1297 (2010).
Bird, A. F. & Bird, J. The Structure of Nematodes (Academic Press, Cambridge, 1991).
Lee, D. L. Changes in adult Nippostrongylus brasiliensis during the development of immunity to this nematode in rats: 1. Changes in ultrastructure. Parasitology 59, 29–39 (1969).
White, J. The nematode Caenorhabditis elegans. Cold Spring Harb. Monogr. Ser. 17, 81–122 (1988).
Campbell, J. F. & Gaugler, R. Nictation behaviour and its ecological implications in the host search strategies of entomopathogenic nematodes (Heterorhabditidae and Steinernematidae). Behaviour 126, 155–169 (1993).
Ichihara, Y., Fukuda, K. & Suzuki, K. Early symptom development and histological changes associated with migration of Bursaphelenchus xylophilus in seedling tissues of Pinus thunbergii. Plant Dis. 84, 675–680 (2000).
Shamoto, K. & Yamauchi, K. Recovery responses of chick intestinal villus morphology to different refeeding procedures. Poult. Sci. 79, 718–723 (2000).
Li, D. F. et al. Effect of fat sources and combinations on starter pig performance, nutrient digestibility and intestinal morphology. J. Anim. Sci. 68, 3694–3704 (1990).
Lignot, J.-H., Helmstetter, C. & Secor, S. M. Postprandial morphological response of the intestinal epithelium of the Burmese python (Python molurus). Comp. Biochem. Physiol. A. Mol. Integr. Physiol. 141, 280–291 (2005).
Popham, J. D. & Webster, J. M. Aspects of the fine structure of the dauer larva of the nematode Caenorhabditis elegans. Can. J. Zool. 57, 794–800 (1979).
Tamura, H. & Mamiya, Y. Reproduction of Bursaphelenchus lignicolus on alfalfa callus tissues. Nematologica 21, 449–454 (1975).
Endo, B. Y. Ultrastructure of the intestine of second and third juvenile stages of the soybean cyst nematode, Heterodera glycines. Proc. Helminthol. Soc. Wash. 55, 117–131 (1988).
Shinya, R. et al. Secretome analysis of the pine wood nematode Bursaphelenchus xylophilus reveals the tangled roots of parasitism and its potential for molecular mimicry. PLoS ONE 8, e67377 (2013).
Ekino, T., Yoshiga, T., Takeuchi-Kaneko, Y., Ichihara, Y. & Kanzaki, N. Sexual dimorphism of the cuticle and body-wall muscle in free-living mycophagous nematodes. Can. J. Zool. 97, 510–515 (2019).
Bargmann, C. I. & Avery, L. Laser killing of cells in Caenorhabditis elegans. Methods Cell Biol. 48, 225–250 (1995).
Abramoff, M. D., Magalhaes, P. J. & Ram, S. J. Image processing with ImageJ. Biophoton. Int. 11, 36–42 (2004).
We sincerely thank Dr. Michio Sato, Meiji University, for his technical assistance with the TEM observations. This study was funded by grants from JSPS KAKENHI no. 19K23679 (to T.E.), JSPS Grant-in-Aid for Early-Career Scientists JP19K15853 (to R.S.), and JST PRESTO Grant no. JPMJPR17Q5 (to R.S.).
These authors contributed equally: Taisuke Ekino and Haru Kirino.
School of Agriculture, Meiji University, Kawasaki, Kanagawa, 214-8571, Japan
Taisuke Ekino, Haru Kirino & Ryoji Shinya
Kansai Research Center, Forestry and Forest Products Research Institute (FFPRI), Kyoto, Kyoto, 612-0855, Japan
Natsumi Kanzaki
JST PRESTO, Meiji University, Kawasaki, Kanagawa, 214-8571, Japan
Ryoji Shinya
T.E., H.K. and R.S. designed the study; T.E., H.K., N.K. and R.S. performed the research; T.E. and H.K. analyzed the data; and T.E., H.K., N.K. and R.S. wrote the paper.
Correspondence to Ryoji Shinya.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Ekino, T., Kirino, H., Kanzaki, N. et al. Ultrastructural plasticity in the plant-parasitic nematode, Bursaphelenchus xylophilus. Sci Rep 10, 11576 (2020). https://doi.org/10.1038/s41598-020-68503-3
Mapping the Positive and Negative Syndrome Scale scores to EQ-5D-5L and SF-6D utility scores in patients with schizophrenia
Authors: Edimansyah Abdin, Siow Ann Chong, Esmond Seow, Swapna Verma, Kelvin Bryan Tan, Mythily Subramaniam
Published in: Quality of Life Research | Issue 1/2019
The current study aims to map the Positive and Negative Syndrome Scale (PANSS) onto the five-level EuroQol five-dimensional (EQ-5D-5L) and Short Form six-dimensional (SF-6D) utility scores for patients with schizophrenia.
A total of 239 participants with schizophrenia spectrum disorder were recruited from a tertiary psychiatric hospital in Singapore. Ordinary least squares (OLS), censored least absolute deviations and Tobit regression methods were employed to estimate utility scores from the EQ-5D-5L and SF-6D. Model selection of the 18 regression models (three regression methods × six model specifications) was primarily determined by the smallest mean absolute error and mean square error, and the largest R2 and adjusted R2.
The mean age of the sample was 39.7 years (SD = 10.3). The mean EQ-5D-5L and SF-6D utility scores were 0.81 and 0.68, respectively. The EQ-5D-5L utility scores were best predicted by the OLS regression model consisting of three PANSS subscales, i.e. positive, negative and general psychopathology symptoms, and covariates including age and gender. The SF-6D was best predicted by OLS regression model consisting of five PANSS subscales, i.e. positive, negative, excitement, depression and cognitive subscales.
The current study provides important evidence to clinicians and researchers on mapping algorithms for converting PANSS scores into utility scores that can be easily applicable for cost–utility analysis when EQ-5D-5L and SF-6D data are not available for patients with schizophrenia spectrum disorder in Singapore.
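For illustration, the sketch below fits an OLS mapping model of the form selected above (three PANSS subscales plus age and gender predicting EQ-5D-5L utility) on synthetic data and computes the selection criteria; all coefficients, ranges and variable names are invented for illustration and are not the study's data or algorithm.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 239  # sample size as in the study
panss_pos = rng.uniform(7, 49, n)    # positive subscale score range
panss_neg = rng.uniform(7, 49, n)    # negative subscale score range
panss_gen = rng.uniform(16, 112, n)  # general psychopathology score range
age = rng.uniform(21, 65, n)
male = rng.integers(0, 2, n)
# Invented data-generating relationship: utility falls with symptom burden
utility = np.clip(1.0 - 0.004 * panss_neg - 0.002 * panss_gen
                  + rng.normal(0, 0.08, n), -0.2, 1.0)

X = sm.add_constant(np.column_stack([panss_pos, panss_neg, panss_gen, age, male]))
fit = sm.OLS(utility, X).fit()
pred = fit.predict(X)
mae = np.mean(np.abs(utility - pred))  # mean absolute error
mse = np.mean((utility - pred) ** 2)   # mean square error
print(fit.rsquared, fit.rsquared_adj, mae, mse)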
Schizophrenia is a severe mental disorder which is highly disabling in nature and results in substantial costs to the patient and their family members [ 1 ]. The global annual cost of the schizophrenia varies between countries and ranged from US$94 million in Puerto Rico to US$102 billion in the US in 2013 [ 2 ]. Although a wide range of interventions have been introduced for the care and treatment of people with schizophrenia, due to scarce healthcare resources, cost–utility analyses have been increasingly used to inform decision making on appropriate resource allocation for interventions for the care and treatment of people with schizophrenia [ 3 ]. The quality-adjusted life-year (QALY) is an important outcome measure in cost–utility analyses as it combines both quality and quantity of life into a single measure which allows a broader comparison not only across treatment strategies but also across patient populations [ 4 , 5 ]. Generic preference-based measures, such as the EuroQoL five-dimensional (EQ-5D) and the Short Form-6D (SF-6D) [ 5 ‐ 7 ] are often recommended to estimate QALY for cost–utility analyses.
In clinical populations, however, the generic preference-based measures are not used as often as clinical instruments. In the absence of generic preference-based instruments, mapping is a useful tool and can be used as an alternative solution to estimate utility scores from clinical instruments [ 5 ‐ 7 ]. This technique is called "mapping", or "crosswalk", as it can produce statistical formulas or algorithms that allow a disease-specific or clinical instrument to predict utility scores from generic preference-based measures and subsequently generate QALY for cost–utility analyses in clinical studies [ 5 , 8 ]. A systematic review has identified 144 studies mapping 110 different source instruments to EQ-5D, and it was suggested that the number of mapping studies will continue to increase in the future [ 9 ]. However, we found that there are few mapping studies among patients with schizophrenia. To our knowledge, only one study has been conducted so far to map Positive and Negative Syndrome Scale (PANSS) scores onto EQ-5D and Short Form six-dimensional (SF-6D) utility scores using the direct method in a schizophrenia sample [ 10 ]. Findings showed that EQ-5D scores were best predicted by age, gender, general psychopathology and depressive symptoms [ 10 ].
The PANSS [ 11 ] is one of the most widely used clinical instruments for measuring symptom severity of schizophrenia in clinical settings. It should be noted that the previous study [ 10 ] used a linear regression, or ordinary least squares (OLS), model to map utility scores from three PANSS factors (positive, negative and general psychopathology symptoms). It has been reported that an alternative factor structure of the PANSS, such as the five-factor model [ 12 ], may be more appropriate for an Asian sample. There is also a growing literature suggesting that the OLS model is unable to capture the EQ-5D score distribution, which is often skewed and has a large ceiling effect at the value of 1. Given that limited data exist on mapping studies using the PANSS in Asian schizophrenia samples, further research is needed to understand how a mapping study using a different PANSS factor structure and different statistical methods performs in this population. Singapore is an island city-state in Southeast Asia, with a multi-ethnic Asian population of approximately 5.61 million people in 2016, comprising Chinese (74.3%), Malays (13.4%), Indians (9.1%) and other ethnic groups (3.2%) [ 13 ]. A mapping study done in Singapore can therefore provide findings that may be extrapolated to other Asian populations with schizophrenia spectrum disorders. Hence, the current study aimed to map the PANSS onto the EQ-5D and SF-6D to inform future cost–utility analyses for the treatment of schizophrenia in a multi-ethnic Asian sample.
This cross-sectional study aimed to examine generic preference-based measures of health-related quality of life in patients with schizophrenia and depression. The study was conducted at the Institute of Mental Health (IMH) in Singapore between August 2016 and November 2017. IMH is the national tertiary psychiatric care provider, serving a large number of patients with diverse mental health needs in Singapore. Participants were patients recruited from outpatient clinics at IMH. Inclusion criteria comprised patients who were Singapore citizens or permanent residents, aged 21 years and above, able to understand and speak English, and having a clinical diagnosis of schizophrenia spectrum disorder. Patients who were incapable of completing the interview due to severe physical or mental illness were excluded from the study. Prior to the commencement of the study, written informed consent was obtained from all study participants. The study was approved by the relevant institutional ethics review board (National Healthcare Group Domain Specific Review Board). For the purpose of the current study, data on socio-demographic background, EQ-5D-5L, SF-36 and PANSS from 251 participants were included. After removing observations with missing values in key variables, 239 observations were retained in the final sample for analysis.
The EQ-5D-5L comprises five items/dimensions covering mobility, self-care, usual activities, pain/discomfort, and anxiety/depression, each with five response levels (1 = no problems, 2 = slight problems, 3 = moderate problems, 4 = severe problems, 5 = extreme problems), and can describe 3125 possible health states. The EQ-5D-5L utility scores were obtained using the UK value set estimated via a crosswalk approach, developed by van Hout et al. [ 14 ] using a link function between the EQ-5D-3L value sets and the new EQ-5D-5L descriptive system.
The SF-6D is a multidimensional health classification system assessing six health domains (physical functioning, role limitation, social functioning, pain, mental health and vitality), with 4–6 levels for each domain, derived from 11 items of the Short Form 36 item questionnaire (SF-36). The SF-6D utility scores were obtained using the UK scoring algorithm, which was developed using the standard gamble (SG) method from valuations of 249 SF-6D health states by a representative sample of the UK population [ 15 ]. The utility scores derived from the English and Chinese versions of the SF-6D have previously been shown to be equivalent in Singapore [ 16 ].
The PANSS [ 11 ] is a 30-item instrument designed to measure the severity of three dimensions of symptoms [positive (7 items), negative (7 items) and general psychopathology (16 items)] among those with schizophrenia spectrum disorder. The symptom severity was assessed by a trained interviewer following a semi-structured interview with the participant. Each symptom was rated on a seven-point scale representing increasing levels of psychopathology (1 = absent to 7 = extreme) with total scores ranging from 30 to 210. The PANSS total score and the three-factor scores including positive (scores ranging from 7 to 49), negative (scores ranging from 7 to 49) and general psychopathology (scores ranging from 16 to 112) dimensions were obtained by adding scores of the respective items in each subscale [ 11 ]. A previous study [ 12 ] in our local population found that PANSS could be further divided into five factors and reduced into 17 items: positive (scores ranging from 4 to 28), negative (scores ranging from 5 to 35), excitement (scores ranging from 3 to 21), depression (scores ranging from 3 to 21) and cognitive (scores ranging from 2 to 14) factors. The construct validity of five-factor structure has been validated in Singapore [ 12 ]. Hence, the five-factor structure of PANSS was also tested in the current study.
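As a concrete illustration of how the subscale scores are assembled, the Python sketch below sums item ratings into the three-factor scores. The standard three-factor item sets (P1–P7, N1–N7, G1–G16) follow Kay et al. [ 11 ]; the ratings and helper names are illustrative assumptions, and the 17-item five-factor assignment of Jiang et al. [ 12 ] is not reproduced here.

```python
# Minimal sketch: assembling PANSS three-factor scores from 1-7 item ratings.
def subscale_score(ratings, item_ids):
    """Sum the 1-7 ratings of the items belonging to one PANSS factor."""
    return sum(ratings[i] for i in item_ids)

ratings = {f"P{i}": 1 for i in range(1, 8)}            # illustrative ratings only
ratings.update({f"N{i}": 1 for i in range(1, 8)})
ratings.update({f"G{i}": 1 for i in range(1, 17)})

positive = subscale_score(ratings, [f"P{i}" for i in range(1, 8)])   # range 7-49
negative = subscale_score(ratings, [f"N{i}" for i in range(1, 8)])   # range 7-49
general = subscale_score(ratings, [f"G{i}" for i in range(1, 17)])   # range 16-112
total = positive + negative + general                                # range 30-210
```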
Statistical analyses were carried out using STATA version 13 (StataCorp LP, College Station, TX). Since the distribution of utility scores derived from generic preference-based measures such as the EQ-5D is often non-normal, with a high ceiling effect at the value of 1 [ 17 ], we used three regression methods, namely OLS, censored least absolute deviations (CLAD) [ 18 ] and Tobit [ 19 ] regression, to predict utility scores from the PANSS. The selection of these regression methods was based on their frequency of use and applicability for estimating utility scores [ 5 , 20 ‐ 23 ]. The OLS model (Eq. 1) is the most widely used regression method and can be expressed as
$$Y_{i} = \beta_{0} + \beta_{1}X_{1i} + \cdots + \beta_{k}X_{ki} + \varepsilon_{i},$$
where \(Y_{i}\) is the utility score for subject i, \(\beta_{0}\) is the intercept, \(\beta_{1},\dots,\beta_{k}\) are the regression coefficients (slopes), \(X_{1i},\dots,X_{ki}\) are the independent variables including PANSS total score, PANSS factor scores, age and gender, and \(\varepsilon_{i}\) is the error term. In the OLS model, the slopes and intercept were estimated by minimising the sum of the squares of the differences between the observed and predicted utility scores. This model assumes that the errors \(\varepsilon_{i}\) are normally distributed with mean zero and constant variance (homoscedasticity), denoted \(\varepsilon_{i}\sim N(0,\sigma^{2})\).
$$\text{Tobit: } Y_{i}^{*} = \beta_{0} + \beta_{1}X_{1i} + \cdots + \beta_{k}X_{ki} + \varepsilon_{i}.$$
The Tobit model (Eq. 2) is a regression model used in the presence of censored data. It assumes that there is a latent utility score \(Y_{i}^{*}\) representing a valuation of an individual's true health state: if a patient's observed EQ-5D utility score is 1, then \(Y_{i}^{*}\) is greater than or equal to 1 (Eq. 3). In other words, despite having the same observed score at the ceiling of 1, patients with these responses may differ, and their true health states may vary [ 19 , 24 , 25 ]. Hence, it is the latent utility score \(Y_{i}^{*}\), rather than the observed utility score \(Y_{i}\), that is modelled.
$$Y_{i}=Y_{i}^{*}\ \text{for}\ Y_{i}^{*}<1 \qquad\text{and}\qquad Y_{i}=1\ \text{for}\ Y_{i}^{*}\geqslant 1.$$
Similar to the Tobit model (Eq. 3), the CLAD model assumes that EQ-5D utility scores of 1 have been censored, and therefore the latent utility \(Y_{i}^{*}\) is modelled. However, in contrast to the OLS and Tobit models, the CLAD model regresses the median of the latent utility \(Y_{i}^{*}\) instead of the mean, and estimates the regression slopes by minimising the sum of absolute deviations rather than the sum of squared differences between the observed and predicted utility scores [ 26 ].
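To make the three estimators concrete, below is a minimal Python sketch fitting OLS, Tobit and CLAD to simulated right-censored utilities. All numbers are invented for illustration; this is not the study's code, and the Tobit/CLAD optimisers are simple generic implementations rather than STATA's routines.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 239
X = np.column_stack([np.ones(n), rng.normal(12.0, 5.0, n)])    # intercept + one PANSS score
y_star = X @ np.array([1.1, -0.02]) + rng.normal(0.0, 0.15, n)  # latent utility
y = np.minimum(y_star, 1.0)                                     # observed, ceiling at 1

# 1) OLS: minimise the sum of squared residuals (closed form).
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2) Tobit: maximise the censored-normal log-likelihood.
def tobit_negll(params):
    b, s = params[:-1], np.exp(params[-1])
    xb = X @ b
    cens = y >= 1.0
    ll_unc = norm.logpdf((y - xb) / s) - np.log(s)  # density for uncensored points
    ll_cen = norm.logsf((1.0 - xb) / s)             # P(y* > 1) for censored points
    return -(ll_unc[~cens].sum() + ll_cen[cens].sum())

beta_tobit = minimize(tobit_negll, np.r_[beta_ols, np.log(y.std())],
                      method="Nelder-Mead").x[:-1]

# 3) CLAD (Powell): minimise the sum of absolute deviations from min(Xb, 1).
clad_loss = lambda b: np.abs(y - np.minimum(X @ b, 1.0)).sum()
beta_clad = minimize(clad_loss, beta_ols, method="Nelder-Mead").x

print(beta_ols, beta_tobit, beta_clad)
```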
Six model specifications were tested within each regression method, taking into account the total score, the three original factor scores and the five-factor model of the PANSS proposed for Asian samples [ 12 ], as well as recent findings from the mapping study by Siani et al. [ 10 ]. The model specifications are outlined in detail in Table 1. Model 1 included only the PANSS total score as the predictor of the utility score; Model 2 included the PANSS positive, negative and general psychopathology symptom scores; Model 3 included the PANSS positive, negative, excitement, depression and cognitive scores; Model 4 included the PANSS total score, age and gender; Model 5 included the PANSS positive, negative and general psychopathology symptom scores, age and gender; and Model 6 included the PANSS positive, negative, excitement, depression and cognitive scores, age and gender. The same model specifications were also tested for the SF-6D utility score using the OLS, CLAD and Tobit regression models. A number of a posteriori specification tests covering the normality, multicollinearity and homoscedasticity assumptions were conducted to validate the final regression model [ 27 ].
Table 1 Model specifications (utility score \(Y_{i}\) regressed on PANSS scores and covariates):
Model 1: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSStotal}_{i}+\varepsilon_{i}\)
Model 2: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSSpositive}_{i}+\beta_{2}\,\text{PANSSnegative}_{i}+\beta_{3}\,\text{PANSSgeneralpsychopathology}_{i}+\varepsilon_{i}\)
Model 3: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSSpositive}_{i}+\beta_{2}\,\text{PANSSnegative}_{i}+\beta_{3}\,\text{PANSSexcitement}_{i}+\beta_{4}\,\text{PANSSdepression}_{i}+\beta_{5}\,\text{PANSScognitive}_{i}+\varepsilon_{i}\)
Model 4: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSStotal}_{i}+\beta_{2}\,\text{age}_{i}+\beta_{3}\,\text{gender}_{i}+\varepsilon_{i}\)
Model 5: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSSpositive}_{i}+\beta_{2}\,\text{PANSSnegative}_{i}+\beta_{3}\,\text{PANSSgeneralpsychopathology}_{i}+\beta_{4}\,\text{age}_{i}+\beta_{5}\,\text{gender}_{i}+\varepsilon_{i}\)
Model 6: \(Y_{i}=\beta_{0}+\beta_{1}\,\text{PANSSpositive}_{i}+\beta_{2}\,\text{PANSSnegative}_{i}+\beta_{3}\,\text{PANSSexcitement}_{i}+\beta_{4}\,\text{PANSSdepression}_{i}+\beta_{5}\,\text{PANSScognitive}_{i}+\beta_{6}\,\text{age}_{i}+\beta_{7}\,\text{gender}_{i}+\varepsilon_{i}\)
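A small sketch of how the six design matrices of Table 1 could be assembled; all variable names are illustrative assumptions:

```python
import numpy as np

def design_matrices(total, pos3, neg3, gp, pos5, neg5, exc, dep, cog, age, female):
    """Return the six Table 1 design matrices keyed by model number."""
    one = np.ones_like(total, dtype=float)
    return {
        1: np.column_stack([one, total]),
        2: np.column_stack([one, pos3, neg3, gp]),
        3: np.column_stack([one, pos5, neg5, exc, dep, cog]),
        4: np.column_stack([one, total, age, female]),
        5: np.column_stack([one, pos3, neg3, gp, age, female]),
        6: np.column_stack([one, pos5, neg5, exc, dep, cog, age, female]),
    }
```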
The best-fitting of the 18 regression models (three regression methods × six model specifications; Table 1) was assessed using four goodness-of-fit indices [ 29 ]: the mean absolute error (MAE), i.e. the mean of the absolute differences between the observed and predicted utility scores; the mean square error (MSE), i.e. the average of the squared differences between the observed and predicted utility scores; R2; and adjusted R2 [ 7 ]. For R2 and adjusted R2, higher values indicate a better model; for MAE and MSE, lower values indicate a better fit. The R2 and adjusted R2 derived from the OLS regression model are not directly comparable across regression methods, since the OLS R2 is based on the coefficient of determination between the observed and predicted scores, while the R2 from the CLAD and Tobit models is calculated from the likelihood ratio between the intercept-only model and the full model [ 23 , 28 ]. For a fair comparison, the R2 for all three regression methods (OLS, CLAD and Tobit) was therefore calculated by squaring the correlation coefficient between the observed and predicted utility scores. Adjusted R2 was computed using the following formula, which penalises model complexity [ 23 ]:
$${\text{Adjusted }}{R^2}=1 - \frac{{(n - 1)}}{{(n - p - 1)}}(1 - {R^2}),$$
where n is the sample size and p is the number of parameters in the model.
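The four selection criteria are straightforward to compute; a minimal sketch, assuming arrays of observed and predicted utilities:

```python
import numpy as np

def fit_indices(y_obs, y_pred, n_params):
    """MAE, MSE, R^2 (squared correlation, comparable across OLS/CLAD/Tobit) and adjusted R^2."""
    mae = float(np.mean(np.abs(y_obs - y_pred)))
    mse = float(np.mean((y_obs - y_pred) ** 2))
    r2 = float(np.corrcoef(y_obs, y_pred)[0, 1] ** 2)
    n = len(y_obs)
    adj_r2 = 1.0 - (n - 1) / (n - n_params - 1) * (1.0 - r2)
    return mae, mse, r2, adj_r2
```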
Lastly, the distributions of the observed and predicted utility values in terms of mean and standard deviation were also compared across models to guide selection of the best prediction model.
The descriptive statistics are presented in Table 2. The sample included 239 participants with schizophrenia spectrum disorder. The mean age of the overall sample was 39.7 years (SD = 10.3), 59.8% were Chinese, 19.3% were Malays, 18.4% were Indians and 2.5% belonged to other ethnicities. The EQ-5D-5L showed a mean (SD) index score of 0.81 (0.2) with minimum and maximum scores of − 0.367 and 1 while the mean (SD) SF-6D index was 0.68 (0.15) with minimum and maximum scores of 0.389 and 1, respectively. An inspection of the distribution of the EQ-5D-5L scores showed a substantial skew to the right, that is, towards better quality of life (Fig. 1). The mean (SD) PANSS total score and its three factors including positive, negative and general psychopathology symptoms were 47.8 (15.4), 12.1 (5.5), 10.8 (5.0) and 24.9 (7.9), respectively. The mean (SD) PANSS five-factor scores including positive, negative, excitement, depression and cognitive factors were 8.1 (5.0), 7.5 (3.6), 4.3 (2.0), 6.1 (3.3) and 2.9 (1.5), respectively.
Table 2 Characteristics of the sample (table flattened during extraction; recoverable fragments: age, mean (SD) 39.70 (10.28); education categories from primary and below to post-secondary/pre-university; observed EQ-5D-5L and SF-6D utility scores).
Table 3 shows regression coefficients and goodness-of-fit measures of the three regression methods (OLS, CLAD and Tobit) for mapping PANSS to the EQ-5D-5L and SF-6D utility scores. Among the three regression methods, OLS generally had the largest R2 and adjusted R2 and the smallest MSE, regardless of the model specification. For each regression method, six model specifications were fitted. We found that Model 5, consisting of the positive, negative and general psychopathology symptoms together with age and gender, had the largest adjusted R2 and the smallest MSE. The model explained 33.8% of the variation, with an MSE of 0.0328 and an MAE of 0.1348. A histogram used to examine the normality assumption of the final model showed that the distribution of the residuals was approximately normal (Supplementary Fig. 1). Possible multicollinearity between predictors was assessed using the variance inflation factor (VIF); a VIF above 10 was taken to indicate multicollinearity. No significant multicollinearity was observed between the EQ-5D predictors (VIF values ranging from 1.00 to 2.53) (Supplementary Table 1). The Breusch–Pagan (BP) test was used to detect heteroscedasticity; where the homoscedasticity assumption was rejected, heteroscedasticity-robust standard errors based on the Huber–White sandwich estimator were used for inference [ 27 ]. The BP test statistic showed that the null hypothesis of homoscedasticity was rejected (Chi-square (degrees of freedom): 46.5 (5), p value < 0.001), so heteroscedasticity-robust standard errors were used. In this final model, the EQ-5D-5L utility values can be generated using the following mapping algorithm for the schizophrenia sample in the absence of EQ-5D data:
$$\text{EQ-5D-5L utility} = 1.3103 - 0.0044 \times \text{positive} + 0.0025 \times \text{negative} - 0.0146 \times \text{general psychopathology} - 0.0029 \times \text{age} + 0.0149 \times \text{female}.$$
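Applied directly, the reported algorithm is a one-line function. The sketch below simply transcribes the coefficients above; the function name and the 0/1 coding of female are assumptions, not taken from the paper:

```python
def eq5d5l_from_panss(positive, negative, general, age, female):
    """Final OLS mapping reported above; female = 1 if female, 0 otherwise (assumed coding)."""
    return (1.3103 - 0.0044 * positive + 0.0025 * negative
            - 0.0146 * general - 0.0029 * age + 0.0149 * female)

# Example: a 40-year-old male at the sample-mean subscale scores.
print(round(eq5d5l_from_panss(12.1, 10.8, 24.9, 40, 0), 3))
```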
Table 3 Regression coefficients and goodness-of-fit measures of the three regression methods for mapping PANSS to the EQ-5D-5L and SF-6D utility scores (table flattened during extraction; scattered entries include a PANSS total coefficient of −0.0078*, a general psychopathology coefficient of 0.0079*, and adjusted R2 values; *p value < 0.05).
The model revealed that general psychopathology symptoms and age were significantly and inversely associated with EQ-5D-5L utility scores. The observed and predicted EQ-5D-5L and SF-6D utility scores under the six model specifications are compared in Table 4, which shows that the means of the predicted values based on OLS were similar to the observed EQ-5D-5L values, while the CLAD and Tobit models tended to produce larger predicted values than the observed ones.
Table 4 Descriptive statistics of the observed and predicted utility scores by OLS, CLAD and Tobit models (table body not recovered).
Among the three regression methods, OLS generally had a slightly larger R2 and adjusted R2, and smaller MSE and MAE, than the CLAD and Tobit regression methods. For each regression method, six model specifications were fitted. We found that Model 3, consisting of the positive, negative, excitement, depression and cognitive factors, had the largest adjusted R2 and the smallest MSE and MAE among the model specifications. The distribution of the residuals was approximately normal (Supplementary Fig. 1). No significant multicollinearity was observed between the SF-6D predictors (VIF values ranged from 1.17 to 1.53) (Supplementary Table 1). However, the BP test statistic showed that the null hypothesis of homoscedasticity was rejected (Chi-square (degrees of freedom): 17 (5), p value = 0.003), so heteroscedasticity-robust standard errors were used for inference. This model explained 27.2% of the variation, with an MSE of 0.0162 and an MAE of 0.1056. Hence, the SF-6D utility scores can be generated using the following mapping algorithm:
$$\text{SF-6D utility} = 0.8712 - 0.0057 \times \text{positive} - 0.0076 \times \text{negative} - 0.0050 \times \text{excitement} - 0.0149 \times \text{depression} + 0.0100 \times \text{cognitive}.$$
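As with the EQ-5D-5L algorithm, a direct transcription of the reported coefficients (function name assumed; inputs are the five-factor PANSS scores of Jiang et al. [ 12 ]):

```python
def sf6d_from_panss(positive, negative, excitement, depression, cognitive):
    """Final OLS mapping reported above, using the five-factor PANSS scores."""
    return (0.8712 - 0.0057 * positive - 0.0076 * negative
            - 0.0050 * excitement - 0.0149 * depression + 0.0100 * cognitive)

# Example at the sample-mean five-factor scores.
print(round(sf6d_from_panss(8.1, 7.5, 4.3, 6.1, 2.9), 3))
```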
In this final model, the positive, negative and depression factor scores were significantly and inversely associated with SF-6D utility scores. The means of the predicted values based on OLS were similar to the observed SF-6D values; the CLAD model tended to produce smaller, and the Tobit model larger, predicted values than the observed values (Table 4).
This is one of the few studies conducted to map the PANSS onto two common utility measures, the EQ-5D-5L and SF-6D, in people with schizophrenia spectrum disorder in a multi-ethnic Asian population. In the current study, three regression methods and six model specifications were explored to develop mapping functions for the PANSS. The findings provide evidence that different predictive models should be used for mapping the EQ-5D-5L and SF-6D in this Asian sample. Our regression analyses showed that the EQ-5D-5L utility scores of patients with schizophrenia spectrum disorder in our sample were best predicted by the OLS model consisting of three PANSS factors, i.e. positive, negative and general psychopathology symptoms, and covariates including age and gender (Model 5). The final model explained 33.8% of the variation, with an MSE of 0.0328 and an MAE of 0.1348. The SF-6D was best predicted by Model 3, consisting of five PANSS factors, i.e. positive, negative, excitement, depression and cognitive; this model explained 27.2% of the variation, with an MSE of 0.0162 and an MAE of 0.1056. In predicting EQ-5D-5L utility scores, we note, however, that only the PANSS general psychopathology symptoms and age were significantly and inversely associated with the utility scores. A previous study [ 10 ] showed that PANSS general psychopathology symptoms, age, gender and depressive symptoms as measured by the Calgary Depression Scale for Schizophrenia (CDSS) were significantly associated with EQ-5D and SF-6D utility scores. Our results are not directly comparable with those of the Siani et al. study [ 10 ] because we only included age and gender as covariates, because that study additionally included the CDSS in its regression model, and because its data were derived from European cohort studies. However, it is important to note that the main purpose of this study was to develop a mapping function that best predicts utility scores derived from the EQ-5D-5L and SF-6D; the statistical significance of the regression coefficients is therefore of secondary consideration [ 23 ]. In the current study, model selection was primarily determined by four goodness-of-fit indices: R2, adjusted R2, MAE and MSE. In addition, the predictive ability of the model in terms of predicted mean scores was taken into account in model selection. Generally, our MAE values for the SF-6D were lower than the MAE values (up to 0.15) typically reported in the literature [ 8 ]. The MAE values produced by OLS in our final model were slightly higher than those produced by the CLAD model; Cheung et al. [ 23 ] have suggested that the MAE tends to favour the CLAD over the OLS model. Hence, the selection of the best model should not focus exclusively on one fit index but should take into consideration the overall goodness-of-fit indices and the descriptive statistics of the predicted scores. In the current study, the mean predicted EQ-5D-5L and SF-6D values at the group level based on OLS regression were similar to their mean observed values. These findings support the internal validity of the model and suggest that the mapping algorithms may be most appropriately used at the group level.
Among the three regression methods, the Tobit models tended to produce larger predicted values than the observed values. Previous studies have likewise shown that OLS is superior to both the Tobit and CLAD models [ 23 , 28 ‐ 30 ].
There are some limitations to the current study. First, the utility values for the EQ-5D-5L were based on the crosswalk approach that maps EQ-5D-5L utility scores from the EQ-5D-3L, because a Singapore value set estimated from a valuation study has not yet been developed; results may differ once the new value set is used [ 31 ]. Second, the limited sample size did not allow us to test the model equally well in sub-samples of the overall sample. It should be noted, however, that a recent set of guidelines issued by the ISPOR Good Practices for Outcomes Research Task Force does not recommend splitting the sample to validate results on part of the sample [ 32 ]; further validation of the current mapping findings using an external dataset is therefore recommended. Nonetheless, this is the first study to compare three regression methods for mapping a clinical instrument onto widely used generic preference-based measures specifically for patients with schizophrenia spectrum disorder. The mapping process incorporated a schizophrenia-specific clinical instrument and key demographic characteristics (i.e. age and gender) into the model, making it feasible for use in the economic evaluation of clinical research projects. From a clinical perspective, the PANSS, age and gender are the most commonly collected data for measuring symptom severity and characterising patients with schizophrenia in trials and intervention programmes in Singapore. For example, in the long-acting injectable risperidone (LAR) trial [ 34 ] of Singapore's Early Psychosis Intervention Programme (EPIP) [ 33 ], information on symptom severity was routinely captured by case managers to monitor patients and to assess the efficacy of the antipsychotic medication, but the trial lacked a cost-effectiveness component. The availability of this algorithm will make cost–utility analysis possible in future trials and programme evaluations among patients with schizophrenia who are monitored only for symptom severity.
In conclusion, we have provided algorithms for converting PANSS scores into utility scores that are easily applicable in clinical settings when EQ-5D and SF-6D data are not available. The current study provides important evidence to clinicians and researchers on mapping algorithms that can be used for the economic evaluation of patients with schizophrenia spectrum disorder in a multi-ethnic Asian patient population.
The study was approved by the relevant institutional ethics review board (National Healthcare Group Domain Specific Review Board (DSRB); Reference No. 2016/00215).
Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. WHO. (1998). Schizophrenia and public health. Division of Mental Health and Prevention of Substance Abuse. Geneva: World Health Organization.
2. Chong, H. Y., Teoh, S. L., Wu, D. B., et al. (2016). Global economic burden of schizophrenia: A systematic review. Neuropsychiatric Disease and Treatment, 12, 357–373.
3. Andrew, A., Knapp, M., McCrone, P., et al. (2012). Effective interventions in schizophrenia: The economic case. Personal Social Services Research Unit. London: London School of Economics and Political Science.
4. Brazier, J. (2008). Measuring and valuing mental health for use in economic evaluation. The Journal of Health Services Research & Policy, 13(Suppl 3), 70–75.
5. Brazier, J., Connell, J., Papaioannou, D., et al. (2014). A systematic review, psychometric analysis and qualitative assessment of generic preference-based measures of health in mental health populations and the estimation of mapping functions from widely used specific measures. Health Technology Assessment, 18, 1–188.
6. NICE. (2013). Guide to the methods of technology appraisal 2013. National Institute for Health and Care Excellence, UK. https://www.nice.org.uk/process/pmg9/chapter/foreword.
7. Longworth, L., Yang, Y., Young, T., et al. (2014). Use of generic and condition-specific measures of health-related quality of life in NICE decision-making: A systematic review, statistical modelling and survey. Health Technology Assessment, 18, 1–224.
8. Brazier, J. E., Yang, Y., Tsuchiya, A., et al. (2010). A review of studies mapping (or cross walking) non-preference based measures of health to generic preference-based measures. The European Journal of Health Economics, 11, 215–225.
9. Dakin, H., Abel, L., Burns, R., et al. (2018). Review and critical appraisal of studies mapping from quality of life or clinical measures to EQ-5D: An online database and application of the MAPS statement. Health and Quality of Life Outcomes, 16, 31.
10. Siani, C., de Peretti, C., Millier, A., et al. (2016). Predictive models to estimate utility from clinical questionnaires in schizophrenia: Findings from EuroSC. Quality of Life Research, 25, 925–934.
11. Kay, S. R., Fiszbein, A., & Opler, L. A. (1987). The positive and negative syndrome scale (PANSS) for schizophrenia. Schizophrenia Bulletin, 13, 261–276.
12. Jiang, J., Sim, K., & Lee, J. (2013). Validated five-factor model of positive and negative syndrome scale for schizophrenia in Chinese population. Schizophrenia Research, 143, 38–43.
13. Singapore Department of Statistics. (2017). Yearbook of statistics Singapore, 2017. Singapore: Ministry of Trade & Industry.
14. van Hout, B., Janssen, M. F., Feng, Y. S., et al. (2012). Interim scoring for the EQ-5D-5L: Mapping the EQ-5D-5L to EQ-5D-3L value sets. Value in Health, 15, 708–715.
15. Brazier, J., Roberts, J., & Deverill, M. (2002). The estimation of a preference-based measure of health from the SF-36. Journal of Health Economics, 21, 271–292.
16. Wee, H. L., Cheung, Y. B., Fong, K. Y., et al. (2004). Are English- and Chinese-language versions of the SF-6D equivalent? A comparison from a population-based study. Clinical Therapeutics, 26, 1137–1148.
17. Xie, F., Pullenayegum, E. M., Li, S. C., et al. (2010). Use of a disease-specific instrument in economic evaluations: Mapping WOMAC onto the EQ-5D utility index. Value in Health, 13, 873–878.
18. Powell, J. L. (1984). Least absolute deviations estimation for the censored regression model. Journal of Econometrics, 25, 303–325.
19. Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica, 26, 24–36.
20. Payakachat, N., Summers, K. H., Pleil, A. M., et al. (2009). Predicting EQ-5D utility scores from the 25-item National Eye Institute Vision Function Questionnaire (NEI-VFQ 25) in patients with age-related macular degeneration. Quality of Life Research, 18, 801–813.
21. Subramaniam, M., Abdin, E., Poon, L. Y., et al. (2014). EQ-5D as a measure of programme outcome: Results from the Singapore early psychosis intervention programme. Psychiatry Research, 215, 46–51.
22. Subramaniam, M., Abdin, E., Vaingankar, J. A., et al. (2013). Impact of psychiatric disorders and chronic physical conditions on health-related quality of life: Singapore Mental Health Study. Journal of Affective Disorders, 147, 325–330.
23. Cheung, Y. B., Luo, N., Ng, R., et al. (2014). Mapping the functional assessment of cancer therapy-breast (FACT-B) to the 5-level EuroQoL Group's 5-dimension questionnaire (EQ-5D-5L) utility index in a multi-ethnic Asian population. Health and Quality of Life Outcomes, 12, 180.
24. Wijeysundera, H. C., Tomlinson, A. J., Norris, C. M., et al. (2011). Predicting EQ-5D utility scores from the Seattle Angina Questionnaire in coronary artery disease: A mapping algorithm using a Bayesian framework. Medical Decision Making, 31, 481.
25. Austin, P. C. (2002). A comparison of methods for analyzing health-related quality-of-life measures. Value in Health, 5, 329–337.
26. Pullenayegum, E. M., Tarride, J. E., Xie, F., et al. (2010). Analysis of health utility data when some subjects attain the upper bound of 1: Are Tobit and CLAD models appropriate? Value in Health, 13, 487–494.
27. Baum, C. F. (2006). An introduction to modern econometrics using Stata. College Station: Stata Press.
28. Sullivan, P. W., & Ghushchyan, V. (2006). Mapping the EQ-5D index from the SF-12: US general population preferences in a nationally representative sample. Medical Decision Making, 26, 401–409.
29. Chuang, L. H., & Kind, P. (2009). Converting the SF-12 into the EQ-5D: An empirical comparison of methodologies. Pharmacoeconomics, 27, 491–505.
30. Cheung, Y. B., Tan, L. C., Lau, P. N., et al. (2008). Mapping the eight-item Parkinson's Disease Questionnaire (PDQ-8) to the EQ-5D utility index. Quality of Life Research, 17, 1173–1181.
31. Wang, P., Luo, N., Tai, E. S., et al. (2016). The EQ-5D-5L is more discriminative than the EQ-5D-3L in patients with diabetes in Singapore. Value in Health Regional Issues, 9, 57–62.
32. Wailoo, A. J., Hernandez-Alava, M., Manca, A., et al. (2017). Mapping to estimate health-state utility from non-preference-based outcome measures: An ISPOR good practices for outcomes research task force report. Value in Health, 20, 18–27.
33. Verma, S., Poon, L. Y., Subramaniam, M., et al. (2012). The Singapore Early Psychosis Intervention Programme (EPIP): A programme evaluation. The Asian Journal of Psychiatry, 5, 63–67.
34. Verma, S., Subramaniam, M., Abdin, E., et al. (2010). Safety and efficacy of long-acting injectable risperidone in patients with schizophrenia spectrum disorders: A 6-month open-label trial in Asian patients. Human Psychopharmacology, 25, 230–235.
Authors: Edimansyah Abdin, Siow Ann Chong, Esmond Seow, Swapna Verma, Kelvin Bryan Tan, Mythily Subramaniam
Acta Mechanica Sinica
Elastoplastic homogenization of particulate composites complying with the Mohr–Coulomb criterion and undergoing isotropic loading
D. Yang
Q. C. He
First Online: 29 May 2015
This work aims at determining the overall response of a two-phase elastoplastic composite to isotropic loading. The composite under investigation consists of elastic particles embedded in an elastic perfectly plastic matrix governed by the Mohr–Coulomb yield criterion and a non-associated plastic flow rule. The composite sphere assemblage model is adopted, and closed-form estimates are derived for the effective elastoplastic properties of the composite either under tensile or compressive isotropic loading. In the case when elastic particles reduce to voids, the composite in question degenerates into a porous elastoplastic material. The results obtained in the present work are of interest, in particular, for soil mechanics.
Keywords: Composite · Porous medium · Elastoplasticity · Mohr–Coulomb yield criterion · Non-associated flow rule · Homogenization
Consider a composite sphere \(\varOmega \) made of phase 2 surrounded by a concentric coating consisting of phase 1. Let the outer surface \(\partial \varOmega \) of \(\varOmega \) be subjected to a monotonic isotropic loading \(p_{0}\) starting from zero. Before \(p_{0}\) reaches the initial yielding load \(p^{\prime }\), both the core and the coating remain linearly elastic, and the corresponding stress fields can be determined analytically and explicitly as in Ref. [3]. Introducing these results into the M–C yield function of Eq. (5), we obtain the yielding condition:
$$f^{(1)}(\boldsymbol{\sigma}) = \frac{p_{0}\left[\mu_{1}\left(\frac{a}{r}\right)^{3}(\kappa_{1}-\kappa_{2})(-3\gamma-\sin\phi)+\kappa_{1}(4\mu_{1}+3\kappa_{2})\sin\phi\right]}{4\mu_{1}c(\kappa_{1}-\kappa_{2})-\kappa_{1}(4\mu_{1}+3\kappa_{2})}-C\cos\phi \leqslant 0.$$
Let us examine whether the above yield condition can be fulfilled under the tensile and compressive loading modes for all values of the internal friction angle \(\phi \in (0,\frac{\uppi }{2})\). The following four cases need to be distinguished (a numerical check of case (1) is sketched after the list):
Tensile loading \(p_{0}\leqslant 0\) and \(\kappa _{1}>\kappa _{2},\) leading to \(\gamma =-1\). Then, we can verify that Eq. (92) is satisfied first at the interface \(r=a^{+}\) when \(p_{0}=p^{\prime }\).
Tensile loading \(p_{0}\leqslant 0\), but \(\kappa _{1}<\kappa _{2},\) giving rise to \(\gamma =+1\). The plastification of the matrix takes place first at the interface \(r=a^{+}\) when \(p_{0}=p^{\prime }\).
Compressive loading \(p_{0}\geqslant 0\) and \(\kappa _{1}>\kappa _{2}\). In this case, \(\gamma =+1\). The yield condition can hold only when \(\sin \phi <(2\delta _{1}+1)^{-1}.\) Otherwise, the two constituent phases will remain elastic for any pressure \(p_{0}\).
Compressive loading \(p_{0}\geqslant 0\) but \(\kappa _{1}<\kappa _{2}\). Correspondingly, \(\gamma =-1.\) The initial plastification can be produced only if \(\sin \phi <-(2\delta _{1}+1)^{-1}\).
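As a concrete check of case (1), the yield function of Eq. (92) can be evaluated numerically. All material values below are hypothetical placeholders chosen for illustration, and c is interpreted here as the core volume fraction, which is an assumption about notation defined earlier in the paper:

```python
import numpy as np

def f_yield(p0, r, a, mu1, k1, k2, c, C, phi, gamma):
    """Eq. (92): M-C yield function in the coating; yielding when the value reaches 0."""
    num = p0 * (mu1 * (a / r) ** 3 * (k1 - k2) * (-3.0 * gamma - np.sin(phi))
                + k1 * (4.0 * mu1 + 3.0 * k2) * np.sin(phi))
    den = 4.0 * mu1 * c * (k1 - k2) - k1 * (4.0 * mu1 + 3.0 * k2)
    return num / den - C * np.cos(phi)

# Case (1): tensile loading (p0 <= 0) with kappa_1 > kappa_2, hence gamma = -1.
pars = dict(a=1.0, mu1=10.0, k1=20.0, k2=5.0, c=0.3, C=0.1,
            phi=np.deg2rad(30.0), gamma=-1.0)
for r in (1.0, 1.5, 2.0):
    print(r, f_yield(-0.5, r, **pars))  # f is largest at the interface r = a+
```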
Once the matrix gets plastified for \(p_{0}=p^{\prime }\) and when \(p_{0}\) increases, an annular plastic zone expands from the inner coating surface \(\rho =a^{+}\) towards the outer coating surface \(\rho =b\). At the current pressure \(p_{0},\) the non-zero displacement, strain, and stress fields in the two elastic zones can be found in Ref. [3]. For the purpose of comparison, the same notation is used as in Ref. [1]:
\(0\leqslant r\leqslant a,\) the elastic core made of phase 2,
$$u_{r}^{(2)}=a_{2}r,\qquad \varepsilon_{r}^{(2)}=\varepsilon_{\theta}^{(2)}=\varepsilon_{\varphi}^{(2)}=a_{2},$$
$$\sigma_{r}^{(2)}=\sigma_{\theta}^{(2)}=\sigma_{\varphi}^{(2)}=3\kappa_{2}a_{2};$$
\(\rho \leqslant r\leqslant b,\) the elastic part of the matrix made of phase 1,
$$u_{r}^{(1)}=a_{1}r+\frac{b_{1}}{r^{2}},\qquad \varepsilon_{r}^{(1)}=a_{1}-\frac{2b_{1}}{r^{3}},\qquad \varepsilon_{\theta}^{(1)}=\varepsilon_{\varphi}^{(1)}=a_{1}+\frac{b_{1}}{r^{3}},$$
$$\sigma_{r}^{(1)}=3\kappa_{1}a_{1}-\frac{4\mu_{1}b_{1}}{r^{3}},\qquad \sigma_{\theta}^{(1)}=\sigma_{\varphi}^{(1)}=3\kappa_{1}a_{1}+\frac{2\mu_{1}b_{1}}{r^{3}},$$
where \(a_{1},b_{1}\) and \(a_{2}\) are constants to be determined by the boundary and interface continuity conditions.
The stress field of the plastic zone in the coating must satisfy the Mohr–Coulomb condition of Eq. (92) and the equilibrium equation
$$\frac{\mathrm{d}\sigma_{r}^{(1)}}{\mathrm{d}r}+\frac{2(\sigma_{r}^{(1)}-\sigma_{\theta}^{(1)})}{r}=0.$$
The solution to this equation is given by
$$\sigma_{r}^{(1)} = -D\zeta^{-2q}+C\cot\phi,$$
$$\sigma_{\theta}^{(1)} = \sigma_{\varphi}^{(1)} = D(q-1)\zeta^{-2q}+C\cot\phi,$$
where \(\zeta =r / \rho \) and D is an unknown constant to be determined below. The strain field of the matrix includes an elastic part specified as
$$\varepsilon_{\mathrm{e}r}^{(1)} = \frac{\sigma_{r}^{(1)}}{E_{1}}-\frac{2\nu_{1}\sigma_{\theta}^{(1)}}{E_{1}},\qquad \varepsilon_{\mathrm{e}\theta}^{(1)} = \varepsilon_{\mathrm{e}\varphi}^{(1)} = \frac{-\nu_{1}\sigma_{r}^{(1)}}{E_{1}}+\frac{(1-\nu_{1})\sigma_{\theta}^{(1)}}{E_{1}},$$
and a plastic part written in the form
$$\begin{aligned} \varepsilon _{\mathrm{p}r}^{(1)}=\omega \beta _{1}^{\prime },\quad \varepsilon _{\mathrm{p}\theta }^{(1)}=\varepsilon _{\mathrm{p}\varphi }^{(1)}=-\frac{\omega \beta _{2}^{\prime }}{2}, \end{aligned}$$
where \(\beta _{1}^{\prime }\) and \(\beta _{2}^{\prime }\) are given in Eq. (28) and \(\omega \) represents an unknown scalar function of r. Thus, the strain components of the matrix have the expressions
$$\varepsilon_{r}^{(1)} = \frac{\sigma_{r}^{(1)}}{E_{1}}-\frac{2\nu_{1}\sigma_{\theta}^{(1)}}{E_{1}}+\omega\beta_{1}^{\prime},$$
$$\varepsilon_{\theta}^{(1)} = \varepsilon_{\varphi}^{(1)} = \frac{-\nu_{1}\sigma_{r}^{(1)}}{E_{1}}+\frac{(1-\nu_{1})\sigma_{\theta}^{(1)}}{E_{1}}-\frac{\omega\beta_{2}^{\prime}}{2}.$$
Substituting the above strain components into the compatibility equation
$$\frac{\mathrm{d}\varepsilon_{\theta}^{(1)}}{\mathrm{d}r}+\frac{\varepsilon_{\theta}^{(1)}-\varepsilon_{r}^{(1)}}{r}=0,$$
we obtain
$$\begin{aligned} \omega (r)=\frac{LD}{E_{1}}(\zeta ^{-2q}-\zeta ^{-s}), \end{aligned}$$
with s and L defined in Eq. (49)-2 and Eq. (49)-3. Finally, the strain components of the coating read
$$\varepsilon_{r}^{(1)} = \frac{1}{E_{1}}\left[D\left(2\nu_{1}(1-q)+L\beta_{1}^{\prime}-1\right)\zeta^{-2q}-LD\beta_{1}^{\prime}\zeta^{-s}+(1-2\nu_{1})C\cot\phi\right],$$
$$\varepsilon_{\theta}^{(1)} = \varepsilon_{\varphi}^{(1)} = \frac{1}{E_{1}}\left[D\left(q-1+2\nu_{1}-\nu_{1}q-\tfrac{1}{2}L\beta_{2}^{\prime}\right)\zeta^{-2q}-\frac{LD\beta_{2}^{\prime}}{2}\zeta^{-s}+(1-2\nu_{1})C\cot\phi\right].$$
The radial displacement component can be determined as follows:
$$u_{r}^{(1)}=\varepsilon_{\theta}^{(1)}r = \frac{r}{E_{1}}\left[D\left(q-1+2\nu_{1}-\nu_{1}q-\tfrac{1}{2}L\beta_{2}^{\prime}\right)\zeta^{-2q}-\frac{LD\beta_{2}^{\prime}}{2}\zeta^{-s}+(1-2\nu_{1})C\cot\phi\right].$$
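For numerical evaluation, the plastic-zone fields of Eqs. (98)-(99) and (105)-(107) can be coded directly. In this sketch the constants q, s, L, D and ρ, together with β′1, β′2 (from Eqs. (28), (49) and (51)), are assumed given rather than derived:

```python
import numpy as np

def plastic_zone_fields(r, rho, D, q, s, L, E1, nu1, C, phi, beta1p, beta2p):
    """Stresses (Eqs. (98)-(99)), strains and displacement (Eqs. (105)-(107)) in the plastic annulus."""
    zeta = r / rho
    cot = C / np.tan(phi)
    sig_r = -D * zeta ** (-2.0 * q) + cot
    sig_t = D * (q - 1.0) * zeta ** (-2.0 * q) + cot
    eps_r = (D * (2.0 * nu1 * (1.0 - q) + L * beta1p - 1.0) * zeta ** (-2.0 * q)
             - L * D * beta1p * zeta ** (-s) + (1.0 - 2.0 * nu1) * cot) / E1
    eps_t = (D * (q - 1.0 + 2.0 * nu1 - nu1 * q - 0.5 * L * beta2p) * zeta ** (-2.0 * q)
             - 0.5 * L * D * beta2p * zeta ** (-s) + (1.0 - 2.0 * nu1) * cot) / E1
    return sig_r, sig_t, eps_r, eps_t, eps_t * r  # u_r = eps_theta * r
```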
The above unknowns \(a_{1},\) \(a_{2},\) \(b_{1}\) and D, calculated by using the interface continuity conditions, are specified by
$$a_{1} = \frac{D(2q-3)}{9\kappa_{1}}+\frac{C\cot\phi}{3\kappa_{1}},$$
$$a_{2} = \frac{-D\xi^{2q}+C\cot\phi}{3\kappa_{2}},$$
$$b_{1} = \frac{Dq\rho^{3}}{6\mu_{1}},$$
and Eq. (51).
From the boundary condition at \(r=b\), the relationship between the pressure \(p_{0}\) and the isotropic strain volume average \(\varepsilon_{0}\), in terms of \(\xi =\rho /a\), is derived as in Eqs. (46) and (58). From Eqs. (45), (46), (47), and (59), the expressions of \(\kappa_\mathrm{s}^{*}\) and \(\kappa_\mathrm{t}^{*}\) are obtained as in Eqs. (60) and (61).
Finally, when the annular plastic zone extends to the outer surface or, in other words, when the matrix gets totally plastified, the pressure applied on \( r=b \) is equal to \(p^{\prime \prime }\). The scalar function \(\omega (r)\) takes the following form
$$\begin{aligned} \omega (r)=\frac{LD}{E_{1}}\varsigma ^{-2q}+A\varsigma ^{-s}, \end{aligned}$$
where \(\varsigma =r/b\) and D, A are unknown constants to be specified below. Following a calculation procedure similar to that of Eqs. (105)-(108), together with the interface continuity conditions at \(r=a\) and the boundary condition of Eq. (42), the expressions of D, A and the elastic parameter \(a_{2}\) are obtained as follows:
$$D = p_{0}+C\cot\phi,$$
$$a_{2} = -\frac{1}{3\kappa_{2}}\left[c^{-2q/3}p_{0}+\left(c^{-2q/3}-1\right)C\cot\phi\right],$$
$$A = -\frac{1}{3\beta_{2}^{\prime}\kappa_{2}E_{1}}\Big\{c^{(s-2q)/3}\left(C\cot\phi+p_{0}\right)\left[(\kappa_{2}/\kappa_{1})\left(3\kappa_{1}-E_{1}\right)(q-2)+6\kappa_{2}(1-q)+3\beta_{2}^{\prime}\kappa_{2}L-2E_{1}\right]+c^{s/3}C\cot\phi\left[2(\kappa_{2}/\kappa_{1})\left(3\kappa_{1}-E_{1}\right)-6\kappa_{2}+2E_{1}\right]\Big\}.$$
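A sketch of the first two constants (Eqs. (110)-(111)); the lengthy expression for A (Eq. (112)) is omitted here, and the interpretation of c as the core volume fraction is an assumption:

```python
import numpy as np

def fully_plastic_constants(p0, C, phi, c, q, k2):
    """Eqs. (110)-(111): D and a_2 once the coating is fully plastified."""
    cot = C / np.tan(phi)
    D = p0 + cot
    a2 = -(c ** (-2.0 * q / 3.0) * p0 + (c ** (-2.0 * q / 3.0) - 1.0) * cot) / (3.0 * k2)
    return D, a2
```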
The isotropic strain and strain-rate averages over the coated sphere are given in Eqs. (62) and (63).
References
1. Le Quang, H., He, Q.C.: Effective pressure-sensitive elastoplastic behavior of particle-reinforced composites and porous media under isotropic loading. Int. J. Plast. 24, 343–370 (2008)
2. Dvorak, G.: Micromechanics of Composite Materials. Springer, Netherlands (2012)
3. Hashin, Z.: The elastic moduli of heterogeneous materials. ASME J. Appl. Mech. 29, 143–150 (1962)
4. Hashin, Z., Shtrikman, S.: A variational approach to the theory of the effective magnetic permeability of multiphase materials. J. Appl. Phys. 33, 3125–3131 (1962)
5. Chen, W.F.: Limit Analysis and Soil Mechanics. Elsevier, New York (1975)
6. Van Tiel, J.: Convex Analysis: An Introductory Text. Wiley, New York (1984)
7. Chen, W.F., Han, D.J.: Plasticity for Structural Engineers. Springer, New York (1988)
8. Bousshine, L., Chaaba, A., De Saxcé, G.: Softening in stress-strain curve for Drucker–Prager non-associated plasticity. Int. J. Plast. 17, 21–46 (2001)
9. Hjiaj, M., Fortin, J., de Saxcé, G.: A complete stress update algorithm for the non-associated Drucker–Prager model including treatment of the apex. Int. J. Eng. Sci. 41, 1109–1143 (2003)
10. Cheng, L., Jia, Y., Oueslati, A., et al.: Plastic limit state of the hollow sphere model with non-associated Drucker–Prager material under isotropic loading. Comput. Mater. Sci. 62, 210–215 (2012)
11. He, Q.C., Vallée, C., Lerintiu, C.: Explicit expressions for the plastic normality-flow rule associated to the Tresca yield criterion. Z. Angew. Math. Phys. 56, 357–366 (2005)
12. Vallée, C., He, Q.C., Lerintiu, C.: Convex analysis of the eigenvalues of a 3D second-order symmetric tensor. J. Elast. 83, 191–204 (2006)
13. Chu, T.Y., Hashin, Z.: Plastic behavior of composites and porous media under isotropic stress. Int. J. Eng. Sci. 9, 971–994 (1971)
14. Yin, Z.Y., Chang, C.S., Hicher, P.Y., et al.: Micromechanical analysis of the behavior of stiff clay. Acta Mech. Sin. 27, 1013–1022 (2011)
1. School of Mechanical Engineering, Southwest Jiaotong University, Chengdu, China
2. Université Paris-Est, Laboratoire Modélisation et Simulation Multi Echelle, UMR 8208 CNRS, Marne-la-Vallée, France
Yang, D. & He, Q.C. Acta Mech Sin (2015) 31: 392. https://doi.org/10.1007/s10409-015-0456-z
What are the reasons to expect that gravity should be quantized?
What I am interested to see are specific examples/reasons why gravity should be quantized; something more than "well, everything else is, so why not gravity too". For example, isn't it possible that a quantum field theory on curved space-time would be the way to treat QFT and gravity in questions where the effects of neither can be ignored?
gravity quantization
MBN
Hasn't this been covered in previous question(s)? – user346 Mar 15 '11 at 19:07
Possibly, I couldn't find it. I can delete this one if someone can show me the questions. – MBN Mar 15 '11 at 19:09
I didn't bother looking either, so I'll take your word for it ;) Right now, it's the retrodiction question that's hot and heavy, so back to it! And back to yours later I'm sure. – user346 Mar 15 '11 at 19:17
LOL. Then we are equally lazy. Cool. I'll try to give an answer. The question itself is a very good one and one whose answer far too many people take for granted. Consequently both the string theory and lqg people have these "quantization blinders" on, which prevent them from seeing ways out of their respective problems - for ST that of finding a more natural description of nature, i.e. one without extra dimensions and compactification; and for LQG the questions of how to include matter and interactions. As Jacobson has noted, quantizing GR might be as helpful as quantizing hydrodynamics. – user346 Mar 15 '11 at 19:36
I think it is precisely because "everything else is" ;) As soon as one accepts that our world is inherently quantum, there is just no other way. And I think this has been accepted for quite some time now (well, by scientists at least)... – Marek Mar 15 '11 at 19:37
Gravity has to be subject to quantum mechanics because everything else is quantum, too. The question seems to prohibit this answer but that can't change the fact that it is the only correct answer. This proposition is no vague speculation but a logically indisputable proof of the quantumness.
Consider a simple thought experiment. Install a detector of a decaying nucleus, connected to a Schrödinger cat. The cat is connected to a bomb that divides the Earth into two rocks when it explodes. The gravitational field of the two half-Earths differs from the gravitational field of the single planet we know and love.
The nucleus is evolving into a superposition of several states, inevitably doing the same thing with the cat and with the Earth, too. Consequently, the value of the gravitational field of the place previously occupied by the Earth will also be found in a superposition of several states corresponding to several values - because there is some probability amplitude for the Earth to have exploded and some probability amplitude for it to have survived.
If it were possible to "objectively" say whether the gravitational field is that of one Earth or two half-Earths, it would also be possible to "objectively" say whether the nucleus has decayed or not. More generally, one could make "objective" or classical statements about any quantum system, so the microscopic systems would have to follow the logic of classical physics, too. Clearly, they don't, so it must be impossible for the gravitational field to be "just classical".
This is just an explicit proof. However, one may present thousands of related inconsistencies that would follow from any attempt to combine quantum objects with the classical ones in a single theory. Such a combination is simply logically impossible - it is mathematically inconsistent.
In particular, it would be impossible for the "classical objects" in the hybrid theory to evolve according to expectation values of some quantum operators. If this were the case, the "collapse of the wave function" would become a physical process - because it changes the expectation values, and that would be reflected in the classical quantities describing the classical sector of the would-be world (e.g. if the gravitational field depended on expectation values of the energy density only).
Such a physicality of the collapse would lead to violations of locality, Lorentz invariance, and therefore causality as well. One could superluminally transmit the information about the collapse of a wave function, and so on. It is totally essential for the consistency of quantum mechanics - and its compatibility with relativity - to keep the "collapse" of a wave function as an unphysical process. That prohibits observable quantities to depend on expectation values of others. In particular, it prohibits classical dynamical observables mutually interacting with quantum observables.
Luboš Motl
Wow, that's a great answer. I never thought about it that way. – Keenan Pepper Mar 16 '11 at 0:15
"This is just an explicit proof" ... it is no such thing. In fact the line of reasoning that you use has been used previously by Penrose and is at the basis of his proposal on wavefunction collapse due to gravitational effects. It is one thing to create a superposition of a state of a single, or even multiple, qubits. It is quite another thing altogether to claim that you can create a superposition of a gravitationally massive body such as the earth. In fact I spoke to Penrose once (lucky me) and as he said this is precisely the situation where the argument fails ... – user346 Mar 16 '11 at 0:24
@MBN: apologies, but again, there can't be any inequivalent answer because the incompatibility of classical evolution with the quantum evolution is the only (but very important) possible reason why classical gravity can't be added to a quantum world. If you wanted to avoid infinite exchanges with Deepak, you should have therefore avoided asking this question altogether. – Luboš Motl Mar 16 '11 at 6:57
@Deepak: your bold proposition - that macroscopic objects could avoid quantum mechanics - is even much worse than the proposition in this very question, namely that gravitational fields could avoid quantum mechanics. Arbitrarily large pieces of a solid (e.g. crystal or metal), to pick an example, follow the laws of quantum mechanics. Ask any condensed matter physicists, who study these very questions all the time. You may try to defend your nonsensical propositions by the authority of a British mathematician, but because he has no clue how QM works, this ad hominem argument is very weak. – Luboš Motl Mar 16 '11 at 7:00
@Marek, no, not exactly. I know that 1+1 is 2 in the ring of integers; I am asking does it have to be the case in any ring. The analogy is not great. – MBN Mar 16 '11 at 13:45
Reasons for why gravity should be amenable to "quantization":
Because everything else is, or as @Marek puts it, because "the world is inherently quantum". This in itself is more an article of faith than an argument per se.
Because QFT on curved spacetime (in its traditional avatar) is only valid as long as backreaction is neglected. In other words if you have a field theory then this contributes to $T_{\mu\nu}$ and by Einstein's equations this must in turn affect the background via:
$$ G_{\mu\nu} = 8\pi G T_{\mu\nu} $$
Consequently the QFTonCS approach is valid only as long as we consider field strengths which do not appreciably affect the background. As such there is no technical handle on how to incorporate backreaction for arbitrary matter distributions. For instance Hawking's calculation for BH radiation breaks down for matter densities $\gt M_{planck}$ per unit volume and possibly much sooner. Keep in mind that $M_{planck}$ is not some astronomical number but is $\sim 21 \, \mu g$, i.e. about the mass of a colony of bacteria!
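A quick back-of-envelope check of that figure (my own sketch in Python, not part of the original answer; the constants are the usual CODATA values):

import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)  # Planck mass in kilograms
print(f"{m_planck:.2e} kg = {m_planck * 1e9:.1f} micrograms")  # ~2.18e-08 kg = ~21.8 micrograms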
The vast majority of astrophysical processes occur in strong gravitational fields with high enough densities of matter for us to distrust such semiclassical calculations in those regimes.
Well there isn't really a good third reason I can think of, other than "it gives you something to put on a grant proposal" ;)
So the justification for why boils down to a). because it is mandatory and/or would be mathematically elegant and satisfying, and b). because our other methods fail in the interesting regimes.
In the face of the "inherently quantum" nature of the world we need strong arguments for why not. Here are a couple:
The world is not only "inherently quantum" but it is also "inherently geometric" as embodied by the equivalence principle. We know of no proper formulation of QM which can naturally incorporate the background independence at the core of GR. Or at least this was the case before LQG was developed. But LQG's detractors claim that this success is premature, in the absence of satisfactory resolutions of some foundational questions (see a recent paper by Alexandrov and Roche, Critical overview of Loops and Foams). Also, despite recent successes, it remains unknown how to incorporate matter into this picture. It would appear that topological preons are the most natural candidates for matter given the geometric structure of LQG. But there does not appear to be any simple way of obtaining these braided states without stepping out of the normal LQG framework. A valiant attempt is made in this paper, but it remains to be seen if this line of thought will bear sweet, delicious fruit and not worm-ridden garbage!
Starting with Jacobson (AFAIK) (Thermodynamics of Spacetime: The Einstein Equation of State, PRL, 1995) there exists the demonstration that Einstein's equations arise naturally once one imposes the laws of thermodynamics ($dQ = TdS$) on the radiation emitted by the local Rindler horizons as experienced by any accelerated observer. This proof seems to suggest that the physics of horizons is more fundamental than Einstein's equations, which can be seen as an equation of state. This is analogous to saying that one can derive the ideal gas law from the assumption that an ideal gas should satisfy the first and second laws of thermodynamics in a suitable thermodynamical limit ($N, V \rightarrow \infty$, $N/V \rightarrow$ constant). And the final reason for why not ...
Because the other, direct approaches to "quantizing" gravity appear to have failed or at best reached a stalemate.
On balance, it would seem that one can find more compelling reasons for why not to quantize gravity than for why we should do so. Whereas there is no stand-alone justification for why (apart from the null results that I mention above), the reasons for why not have only begun to multiply. I mention Jacobson's work but that was only the beginning. Work by Jacobson's student (?) Christopher Eling (refs) along with Jacobson and a few others has extended Jacobson's original argument to the case where the horizon is in a non-equilibrium state. The basic result being that whereas the assumption of equilibrium leads to the Einstein equations (or equivalently the Einstein-Hilbert action), the assumption of deviations from equilibrium yields the Einstein-Hilbert action plus higher-order terms such as $R^2$, which would also arise as quantum corrections from any complete quantum gravity theory.
In addition there are the papers by Padmanabhan and Verlinde which set the physics world aflutter with cries of "entropic gravity". Then there is the holographic principle/covariant entropy bound/ads-cft which also suggest a thermodynamic interpretation of GR. As a simple illustration a black-hole in $AdS_5$ with horizon temperature $T$ encodes a boundary CFT state which describes a quark-gluon plasma at equilibrium at temperature ... $T$!
To top it all there is the very recent work Bredberg, Keeler, Lysov and Strominger - From Navier-Stokes To Einstein which shows an (apparently) exact correspondence between the solutions of the incompressible Navier-Stokes equation in $p+1$ dimensions with solutions of the vacuum Einstein equations in $p+2$ dimensions. According to the abstract:
The construction is a mathematically precise realization of suggestions of a holographic duality relating fluids and horizons which began with the membrane paradigm in the 70's and resurfaced recently in studies of the AdS/CFT correspondence.
To sum it all up let me quote from Jacobson's seminal 1995 paper:
Since the sound field is only a statistically defined observable on the fundamental phase space of the multiparticle system, it should not be canonically quantized as if it were a fundamental field, even though there is no question that the individual molecules are quantum mechanical. By analogy, the viewpoint developed here suggests that it may not be correct to canonically quantize the Einstein equations, even if they describe a phenomenon that is ultimately quantum mechanical. (emph. mine)
Standard Disclaimer: The author retains the rights to the above work among which are the right to include the above content in his research publications with the commitment to always cite back to the original SE question.
Thanks for the effort, and it is probably morally wrong to complain, but it doesn't really answer the question the way I asked it. It seems that you only elaborate on the part I asked not be given as an answer. As I said in the comment above, there are reasons why electrodynamics should be quantized; otherwise it leads to contradictions. And I am hoping to see something along those lines. About QFTonCS you are right, but is there a reason to suspect that there cannot be a satisfactory formulation? Don't take this as a negative reaction; I do like your not-exactly-answer, it's just as ... – MBN Mar 15 '11 at 21:21
@Marek as I explain in comments to @Lubos' answer, his thought experiment regarding the superposition of two massive objects leads to the conclusion that gravity should trigger wavefunction collapse. Therefore, instead of providing support for the notion of "quantizing" gravity, this thought experiment requires us to answer why gravity should not be a factor in wavefunction collapse. That is one simple (on-its-face) argument that leads to a contradiction but not the sort you were hoping for :/ @MBN - LOL. Complaining is never morally wrong! The simplest reason for why the standard QFTonCS … – user346 Mar 16 '11 at 0:34
@Marek, gedanken experiments would be fine. I am really interested only in the theory. What actually happens in reality is a concern for the physicists :)) – MBN Mar 16 '11 at 3:56
@Deepak: I am afraid I don't understand your argument. Collapse is unphysical and so can't really be a base for any contradiction. – Marek Mar 16 '11 at 7:30
"Collapse is unphysical" - right. And what should replace it then? I agree that one can have stable quantum states and superpositions of macroscopic objects. The reason I cite Penrose's argument is because I don't think that we will observe the sort of gravitationally-induced decoherence he predicts. But neither will we observe conventional quantum behavior in such systems. After all, there is no reason to believe that many-body quantum systems should obey the same rules that qubits do as regards superposition, etc. I think we will find something more subtle than these either/or options. – user346 Mar 16 '11 at 7:48
I am very much surprised to see that, apart from all the valid reasons mentioned by Lubos et al. (especially the argument that since everything else is quantum, gravity should also be the same, otherwise many inconsistencies will develop), nobody pointed out that one of the other main motivations to quantize gravity was that classical GR predicted singularities in extreme situations like the big bang or black holes. It was kind of like the instability of the Rutherford atomic model, where electrons should have spiraled inward into the nucleus as per classical electrodynamics. Quantum theory saved physics from this obvious failure of classical physics. Naturally it occurred to physicists that quantum theory should be the answer to the singularity problem of classical GR too. However, experiences in the last 40 years have been different. Far from removing singularities, it appears that our best quantum gravity theory is saying that some of the singularities are damn real. So obviously the motivation for quantization of gravity has changed to an extent, and it is unification which is now driving the QG program, in my humble opinion.
Some additional comments: @MBN, there are strong reasons to believe that the uncertainty principle is more fundamental than most other principles. It is such an inescapable property of the universe that all sane physicists, imho, will try their best to make every part of their world view, including gravity, consistent with the uncertainty principle. All of the fundamental physics has already been successfully combined with it except gravity. That's why we need to quantize gravity.
@sb1 that is a very good point. +1. – user346 Mar 16 '11 at 11:57
A good point, but why do you take that as "gravity needs to be quantized" and not as "QFT needs to be done on a curved spacetime"? – MBN Mar 16 '11 at 15:09
@MBN: The bottom line is that there is gravity, which should have a quantum description for consistency with all other phenomena in nature and which must produce finite (divergence-free) answers. – user1355 Mar 16 '11 at 15:24
That is exactly my question. What are the reasons to think that for consistency gravity has to be quantized? Saying it is the bottom line isn't enough for me. I would like to see the lines above the bottom. – MBN Mar 16 '11 at 15:58
@MBN: I don't understand your comment at all. Either I am not understanding you or you are just playing with words without any specific goal. – user1355 Mar 16 '11 at 16:09
For the sake of argument, I might offer up a plausible alternative. We might have some quantum underpinning to gravitation, but we might in fact not really have quantum gravity. It is possible that gravitation is an emergent phenomenon from a quantum field theoretic substratum, where the continuity of spacetime might be similar to the large scale observation of superconductivity or superfluidity. The AdS/CFT is a matter of classical geometry and its relationship to a quantum field theory. So the $AdS_4/QFT$ suggests a continuity of spacetime which has a correspondence with the quark-gluon plasma, which has a Bjorken hydrodynamic scaling. The fluid dynamics of QCD, currently apparent in some LHC and RHIC heavy ion physics, might hint at this sort of connection.
So we might not really have a quantum gravity as such. Or, if there are quantum spacetime effects, it might be more in the way of quantum corrections to fluctuations with some underlying quantum field. Currently there are models which give quantum gravity up to 7 loop corrections, or 8 orders of quantization. Of course the tree level of quantum gravity is formally the same as classical gravity.
This is suggested not as some theory I am offering up, but as a possible way to think about things.
Lawrence B. Crowell
This is interesting. – MBN Mar 16 '11 at 16:16
I have seen two converging paths as compelling reasons for quantizing gravity, both dependent on experimental observations.
One is the success of gauge theories in particle physics the past decades, theories that organized knowledge mathematically economically and elegantly. Gravitational equations are very tempting since they look like a gauge theory.
The other is the Big Bang theory of the beginning of the universe that perforce has to evolve the generation of particles and interactions from a unified model, as the microseconds grow. It is attractive and elegant that the whole is unified in a quantum theory that evolves into all the known interactions, including gravity.
The question didn't talk about unification of forces, just about quantization of gravity. Whereas your answer doesn't... – Marek Mar 15 '11 at 22:26
@Marek I would think it obvious that one cannot unify a quantum theory with a non-quantum one using the same mathematical descriptions. – anna v Mar 16 '11 at 4:53
@anna: so what? You are talking about unification again. The question doesn't... – Marek Mar 16 '11 at 7:28
I think @anna is trying to say that the expectation (or requirement) is that the four forces unify at some scale, along with the fact that (at least) three of these are QFTs. So the unified theory would also, presumably, be a QFT. And the logic of grand unification then implies that gravity, which is one sector of this big theory, should also have a quantum description. – user346 Mar 16 '11 at 11:54
@Deepak Vaid. Yes. My use of the English language must be at fault. @Marek the question up top asked for "specific examples/reasons why gravity should be quantized", and I gave two of them, imo. – anna v Mar 16 '11 at 14:36
I will take a very simplistic view here. This is a good question and was carefully phrased: «gravity ... be quantised ...». Unification is not quite an answer to this particular question. If GenRel produces singularities, as it does, then one can wonder if those singularities can really be the exact truth. Since singularities have been smoothed over by QM in some other contexts, this is a motivation for doing that to GenRel which was done to classical mechanics and E&M. But not necessarily for «quantising gravity».

According to GenRel, gravity is not a force. It is simply the effect of the curvature of space-time... In classical mechanics, the Coulomb force was a real force... So if we are going to be motivated to do to GenRel that which was done to classical mechanics, it would not be natural to quantise gravity, but rather to formulate QM in a curved space-time (with the appropriate back-reaction---and that, of course, is the killer, since probably some totally new and original idea is necessary here, so that the result will be essentially quantum enough to be a unification). MBN has explicitly contrasted these two different options: quantising gravity versus doing QM or QFT in curved space-time. Either approach addresses pretty much every issue raised here: either would provide unification. Both would offer hopes of smoothing out the singularities.
So, to sum up the answer
IMHO there is no compelling reason to prefer quantising gravity over developing QFT in curved space-time, but neither is easy and the Physics community is not yet convinced by any of the proposals.
joseph f. johnson
-1: QM in curved space doesn't work, because quantum stuff is not just responding to gravity, it is also creating gravity. So if you make a superposition of masses, you need a superposition of gravity fields. Further, semiclassical gravity suffers from the same consistency problems that plague semiclassical electromagnetic interactions --- this is the BKS theory, which fails to conserve energy. When you don't have gravitons, a gravitational wave cannot interact with matter in a way that conserves energy graviton by graviton, because a single graviton gravity wave can only excite one position. – Ron Maimon Dec 15 '11 at 10:02
"it is also creating gravity" - I think that that is what I was referring to by the appropriate back-reaction being needed. "When you don't have gravitons, a gravitational wave cannot interact with matter in a way that conserves energy" - @Ron I would appreciate a reference for this. – joseph f. johnson Dec 15 '11 at 14:43
So if you have a particle which is in a superposition with probability 1/2 to be here and 1/2 to be there, where does its gravitational field come from? From here? From there? From halfway in between? It's clear that the field is superposed. There is no way to treat matter as quantum and a field as classical. It is impossible, it is discredited, it's BKS. – Ron Maimon Dec 15 '11 at 18:58
I am sympathetic to the idea that quantum mechanics might not be exact; I often toss and turn at night over this question. But a semiclassical gravity field interacting with quantum matter is certainly not the answer. The arguments for energy nonconservation are in the BKS paper, where they analyze a semiclassical EM field interacting with a quantum atom (before full QM, but the arguments are the same). The later Bohr-Rosenfeld analysis is a famous argument that field quantization is required, and it applies mutatis mutandis to gravity. – Ron Maimon Dec 16 '11 at 1:38
I don't think you have noticed that the axioms of QM could remain exactly true even if one adjusted notions of particle and superposition. The axioms say use a Hilbert space; they do not impose which one. They say use a Hamiltonian; they do not say which one. They do not tell you how to interpret superposition of states and do not tell you how Hamiltonians of measurement apparati are correlated with a Quantum Observable. All of that is `adjustable'. Linearity, I suppose, is not. – joseph f. johnson Dec 16 '11 at 2:09
There are two questions here. The first is not so much whether we expect a unifying theory to be "quantum" as much as whether we expect a unifying theory to be probabilistic/statistical. I suppose that at or within 5 or 10 orders of magnitude of the Planck scale we can expect that we will still have to work with a statistical theory. Insofar as Hilbert space methods are the simplest effective mathematics for generating probability measures that can then be compared with the statistics of measurements, it's likely we will keep using this mathematics until some sort of no-go theorem proves that we have to use more sophisticated and harder to use mathematical tools (non-associative algebras of observables, etc., etc., etc., none of which most of us will choose to use unless we really have to).
The arguably more characteristic feature of quantum theory is a scale of action, Planck's constant, which determines, inter alia, the scale of quantum fluctuations and the minimal incompatibilities of idealized measurements. From this we have the Planck length scale, given the other fundamental constants, the speed of light and the gravitational constant. From this point of view, to say that we wish to "quantize" gravity is to assume that the Planck scale is not superseded in dynamical significance at very small scales by some other length scale.
The lack of detailed experimental data and an analysis that adequately indicates a natural form for an ansatz for which we would fit parameters to the experimental data is problematic for QG. There is also a larger problem, unification of the standard model with gravity, not just the quantization of gravity, which introduces other questions. In this wider context, we can construct any length scale we like by multiplying the Planck length by arbitrary powers of the fine structure constant, any of which might be natural given whatever we use to model the dynamics effectively. The natural length for electro-geometrodynamics might be $\ell_P\alpha^{-20.172}$ (or whatever, $\ell_P e^{\alpha^{-1}}$ isn't natural in current mathematics, but something as remarkable might be in the future), depending on the effective dynamics, and presumably we should also consider the length scales of QCD.
Notwithstanding all this, it is reasonable to extrapolate the current mathematics and effective dynamics to discover what signatures we should expect on that basis. We have reason to think that determining and studying in detail how experimental data is different from the expected signatures will ultimately suggest to someone an ansatz that fits the experimental data well with relatively few parameters. Presumably it will be conic sections instead of circles.
Peter Morgan
Well, I was not asking about unification and related matters. – MBN Mar 16 '11 at 15:11
@MBN Unification in some form or another is at least part of the pressure to quantize gravity, so that gravity could then be unified with the standard model of particle physics. I think this isn't a strong argument that quantization is necessary, but it isn't a bad reason for trying. I'd take this to underlie Luboš' answer, insofar as he effectively worries about contradictions in the wider context that includes gravity and quantum theory. – Peter Morgan Mar 16 '11 at 15:36
That's true. (two more characters) – MBN Mar 16 '11 at 15:56
Trying to build the simplest possible model for the electric potential, for the magnetic dipole and for photons, I came to the conclusion that we need only two quanta and clusters made from these two quanta (summary see here). I'm convinced that the quantization of electromagnetic interactions results from the existence of these two quanta and the continuous sequence of cluster sizes made from them. ... – HolgerFiedler Apr 5 '15 at 20:02
... It is very likely that gravitation is made from gravitons too, but only from one kind, and the density of these monopoles is responsible for the curvature of space. Even though gravitation is made from particles (gravitons) too, there isn't a structure with a continuous sequence. For this reason it is not possible to quantize the gravitational field. – HolgerFiedler Apr 5 '15 at 20:02
I will answer recasting the question as a thought experiment, based on the example proposed by Lubos;
1) a quantum object A in a superposition of two states separated by a distance $X$ somewhere in empty space
2) A has an associated gravity, with associated space-time curvature
3) now system B, will approach the region where A is found, and measure space-time curvature, but will not interact directly with A or its non-gravitational fields
4) now the system M (aka Measuring Apparatus) approaches the region where both A and B are found, and it will try to measure state correlation between A and B states
"gravity is quantum" potential outcome:
A and B are statistically correlated (entangled), supporting that B coupled with a linear superposition of gravitational fields
"gravity is classical" potential outcome:
A and B are uncorrelated quantum mechanically (a direct product of both densities), supporting that any substantial gravity field will collapse (this is basically what Penrose proposes as a mechanism for measurement collapse)
lurscher
+1 for mentioning Penrose and the fact that this is (originally) his argument! – user346 Mar 16 '11 at 0:40
So you (that is, Penrose) are proposing a way to test if gravity needs to be quantized or not? That's nice, but until it is performed we will not know. – MBN Mar 16 '11 at 4:02
Dear Deepak, this is an extremely, extremely lousy reason for giving an answer thumbs up. And by the way, this sequence of thoughts denies not only that gravity is quantum but that anything in the world is quantum. It's OK for a schoolkid from an elementary school but I don't think that it's appropriate for SE. – Luboš Motl Mar 16 '11 at 7:07
This also happens to be perfectly compatible with the existence of macroscopic quantum states. In fact, this approach would allow us greater control of the quantum properties of gravitationally non-negligible mass distributions. But if we don't understand what the true microscopic d.o.f. are - strings, loops, etc. - and keep trying to "quantize" the Einstein-Hilbert action, it would be analogous to trying to understand the microscopic d.o.f. of an ideal gas by quantizing the equation of state $PV=nRT$! – user346 Mar 16 '11 at 7:40
@Lubos, it sounds like you have evidence that the above proposed experiment will have a certain outcome rather than the other one. But the fact that "all the other things are quantum" does not per se prove that a certain outcome in the above experiment will be unavoidable. Both are logically possible, even if we all agree that it would be more aesthetically pleasing that gravity would be as quantum as "everything else". – lurscher Mar 16 '11 at 14:47
|
CommonCrawl
|
Examples of Mathematical Slang
Unless you have taught high-school algebra in Iran, you cannot make sense of the phrase "Elephant and Teacup Identity"! This is what teachers use to refer to the following identities:
$ (a+b)(a^2-ab+b^2)=a^3+b^3$ and $ (a-b)(a^2+ab+b^2)=a^3-b^3$
Such reference is so common that today a colleague of mine (in a discussion about students' algebraic difficulty) referred to it assuming that I know what she is referring to. Whether or not such references would be of any help to students is an important question, but not my question now. For this post, the question is:
Do you know any of these linguistic references for communicating mathematics? It could be something that you use in your own class, or you have heard that someone else uses. Thus, it doesn't matter whether its usage is limited to just one class, or is as popular as the one I gave.
Edit. The first attempt to clarify the question. The question is looking for "non-mathematical" terms or phrases that are used to refer to mathematical objects (of any kind) mainly for educational purposes.
Edit. The second attempt to clarify the question. Admittedly the question is a bit vague. Do examples like "continuity", "saddle point", "horseshoe map", or "hairy ball theorem" count? I guess not. They are now formal terms belonging to mathematics culture here, there and everywhere. What if we call what this question is looking for "mathematical slang"? Here is a dictionary definition of slang:
A type of language consisting of words and phrases that are regarded as very informal, are more common in speech than writing, and are typically restricted to a particular context or group of people.
Interestingly, after coming up with the term, I found the paper "The blight of mathematical slang", which gives the expression "cross-multiply" as an example.
Edit. Following a number of suggestions for using a more informative title (see comments below), I changed it in a way that also better reflects the final version of the question (previous edit).
examples terminology language-use
Joel Reyes Noche
Amir Asghari
For instance, the "Socks-Shoes Property" for $(AB)^{-1} = B^{-1}A^{-1}$? – J W Jan 7 '16 at 14:50
Are you looking for a collection of these terms, or just individual examples? E.g., does FOIL qualify? As to your Edit, I wonder whether that would be a more helpful title for the question? As you remark in the first sentence of your post, the title is not sensible to many. – Benjamin Dickman Jan 7 '16 at 19:52
Personally, I dislike and avoid these "cutesy" identities; proper mathematical names are more universal, descriptive, and extensible. For example, the given formulas are better described as the "sum of cubes" and "difference of cubes". Now we can build on these patterns: Is a "sum of 4th powers" factorable in real numbers? Is a "sum of 5th powers" so factorable? Etc. – Daniel R. Collins Jan 7 '16 at 21:00
Does "binomial" expansion count, since it tells you how to expand the power of a sum of two terms? But more seriously, I am curious to know the etymology of "elephant and teacup." Maybe I am missing a good joke, or maybe it is a cultural gap that I just won't get. – user52817 Jan 7 '16 at 21:10
@user52817 Elephant refers to the "bigger" parenthesis and Teacup to the "smaller" one. It is also called the "fat and slim identity"! Both expressions are considered to be funny. Thus, you are right in both of your "maybes": you are missing a good joke because of a cultural gap :) – Amir Asghari Jan 8 '16 at 9:57
One of the most colorful names I have heard is the Chicken McNugget theorem:
for any two relatively prime positive integers $m,n$, the greatest integer that cannot be written in the form $am + bn$ for nonnegative integers $a, b$ is $mn-m-n$.
link1, link2.
From the links:
The story goes that the Chicken McNugget Theorem got its name because in McDonalds, people bought Chicken McNuggets in 9 and 20 piece packages. Somebody wondered what the largest amount you could never buy was, assuming that you did not eat or take away any McNuggets. They found the answer to be 151 McNuggets, thus creating the Chicken McNugget Theorem.
The McNuggets version of the coin problem was introduced by Henri Picciotto, who included it in his algebra textbook co-authored with Anita Wah. Picciotto thought of the application in the 1980s while dining with his son at McDonald's, working the problem out on a napkin.
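The claim is easy to verify by brute force. The following sketch is my own illustration (the function name is made up): it confirms 151 for packs of 9 and 20, and 43 once the 6-piece pack mentioned in the comment below is included.

def largest_unbuyable(packs, limit=1000):
    # k nuggets are buyable exactly when k - p is buyable for some pack size p
    buyable = {0}
    for k in range(1, limit + 1):
        if any(k - p in buyable for p in packs):
            buyable.add(k)
    return max(k for k in range(limit + 1) if k not in buyable)

print(largest_unbuyable([9, 20]))     # 151 = 9*20 - 9 - 20
print(largest_unbuyable([6, 9, 20]))  # 43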
Federico Poloni
That's usually called Sylvester's theorem, which is the base case of the Frobenius coin problem. The McNugget number is actually 43 and can't be directly computed using the theorem. (There is a small size of 6 nuggets as well.) en.wikipedia.org/wiki/Coin_problem – Adam Aug 8 '16 at 14:13
In Central Mexico, the expression \begin{equation} x_{\pm} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} \end{equation} that solves quadratic equations of the form $ax^2 + bx + c = 0$ is called "fórmula del chicharronero" (formula of the chicharronero).
The chicharronero is the guy who sells salty snacks made of wheat (called chicharrones). Outside most schools there is always a chicharronero selling snacks to kids (here is a photo of a chicharronero; to the right of the picture there are chips; to the left the famous chicharrones).
It is said that the formula is so famous that even the chicharronero knows about it; thus the name.
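For completeness, here is the formula in code; a minimal sketch of my own (the function name is invented), using complex square roots so it also handles a negative discriminant:

import cmath

def solve_quadratic(a, b, c):
    # roots of a*x^2 + b*x + c = 0 by the quadratic formula
    root = cmath.sqrt(b * b - 4 * a * c)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -3, 2))  # ((2+0j), (1+0j)), i.e. x = 2 and x = 1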
Rodrigo Zepeda
Amazing example. By the way, do you generally write x (plus/minus), or did you just write it that way? Also, shouldn't -b be +b in the equation? – Amir Asghari Jan 9 '16 at 1:20
In Uruguay it's called Bhaskara, after the Indian mathematician who wrote about it. – ncr Jan 11 '16 at 6:12
When I was 16, my classmates put this formula on my birthday cake. They knew I was a math geek and this was the only formula they knew. – Amy B Jan 13 '16 at 13:42
In some parts of Germany, it's called the "Mitternachtsformel" ("midnight's formula"). One explanation for the name is that the formula is so important that you need to know it even if your teacher wakes you up in the middle of the night. – Ingo Blechschmidt Feb 7 '16 at 14:48
We have both: the pork skins (chicharrón de cerdo) and this wheat thing (chicharrón de harina), which according to Wikipedia are known as Duros in other places. – Rodrigo Zepeda Mar 10 '17 at 0:27
How about the shoelace formula for the area of an arbitrary simple polygon?
(Image from Wikipedia.)
The formula computes the area from the coordinates of the vertices, essentially by a cross product to compute (signed) areas of triangles.
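The formula translates almost line by line into code. A minimal sketch of my own (not part of the original answer):

def shoelace_area(vertices):
    # signed area of a simple polygon given as a list of (x, y) tuples
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

# a unit square traversed counter-clockwise has signed area +1
print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0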
Joseph O'Rourke
Neat. I have derived this via Green's Theorem as an example in Calculus III in multiple semesters, but I never had a name for it or this neat mnemonic to remember the pattern of the formula. Nice. – James S. Cook Jan 15 '16 at 7:19
How many kids nowadays have never seen shoelaces? – Gerald Edgar Oct 24 '16 at 12:56
I often refer to the identities $(AB)^{-1} = B^{-1}A^{-1}$ or $(AB)^T = B^TA^T$ as the socks-shoes identity. I'm not sure how wide-spread this is, I certainly did not invent it and I'm pretty sure I've read at least one of these in at least one text.
James S. Cook
I had to google this to get the meaning. Check this spiked math comic. It refers to "Contemporary Abstract Algebra" by Gallian. – Dirk Feb 1 '16 at 8:02
If you simplify a term by adding and subtracting something you call this a "nahrhafte Null" in German (probably translates to "nutritious null"?).
Dirk
It's called "adding a well-chosen zero" in English. – Daniel McLaury Jan 11 '16 at 7:48
Not to be confused with a narrhafte Null. – s.harp Nov 4 '16 at 22:02
My elementary students always wanted to know the name of the symbol shown here:
We called it the division house as did many of my colleagues, but my students wanted a mathematical name. We therefore wrote to Dr. Math at Drexel. We were told there is no name and were referred to this paper.
I subsequently held a contest (on election day) and the winner was a sixth-grade girl, who named it the "parenticulum" because it is a contraction of parentheses and vinculum, which is what the symbol is named for. After the contest, we all called it the "parenticulum".
For more about the origin of our name, see the following from Dr. Math:
"You might be able to call the horizontal line in the division symbol a vinculum, but I don't think there is a name for the whole thing. In fact, in the following page about the history of symbols, it is not named, but drawn, and the alternate text in the HTML calls it "a close parenthesis attached to a vinculum"See Jeff Miller's "Earliest Uses of Symbols of Operation""
Amy B
The left-hand part of the radical sign is different from this: √ compared to ). – Gerald Edgar Jan 7 '16 at 22:11
@SixWingedSeraph The division house is the name that most people I know in elementary ed use. No need to invent the divided-into symbol. My students often would want to know how to know which number belonged in the house for a given word problem. Asking which number belongs inside the divided-into symbol is more cumbersome. Furthermore there are many division symbols and this is the only one without a name. – Amy B Jan 8 '16 at 1:37
@SixWingedSeraph My students wanted a more mathematical name since they were taught great respect for math vocabulary. We created one which suited our purposes and was used in the classroom repeatedly. It doesn't matter that it was invented since it was really just for us. I shared it here to answer the question with a personal anecdote. – Amy B Jan 8 '16 at 1:40
I have heard this symbol called (slangily, but that seems appropriate for this thread) a "gozinta" -- as in "5 gozinta 15", a phonetic representation of what one actually says when reading such expressions (i.e. "5 goes into 15"). – mweiss Jan 8 '16 at 15:20
I would like to mention that the division symbol above is not universal. I learned about it several years after my Ph.D. – Martin Argerami Jan 9 '16 at 12:12
I've just remembered that the "Donkey Theorem" is used to refer to the triangle inequality in geometry textbooks in Iran. The name implies that even a donkey which is on one corner of a triangle chooses the straight path (rather than the broken one) to get to the other corner where there is some hay to eat.
I checked to see if it is used elsewhere and I learned from this MSE post that it is also used in Turkey.
In Russian, the Squeeze Theorem (a.k.a. The Pinching Theorem) is called "Теорема о двух милиционерах" — "Two Policemen Theorem". The idea is that if two policemen are holding a criminal between them, the bad guy is going to the same place, probably jail or precinct, where the policemen are going.
zipirovich
Thank you for sharing this fun interpretation of the Squeeze Theorem :) – Amir Asghari Oct 24 '16 at 8:28
In Soviet Russia, limit squeeze you. – James S. Cook Oct 25 '16 at 0:03
Same in French and Italian: théorème des gendarmes, teorema dei carabinieri. And Spanish uses the equally colorful "teorema del sándwich". – Federico Poloni Mar 11 '18 at 19:41
I first heard the term "stars and bars" a few years ago in Mathematics Stack Exchange. From Wikipedia:
In the context of combinatorial mathematics, stars and bars is a graphical aid for deriving certain combinatorial theorems. It was popularized by William Feller in his classic book on probability. It can be used to solve many simple counting problems, such as how many ways there are to put $n$ indistinguishable balls into $k$ distinguishable bins.
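The count given by stars and bars is $\binom{n+k-1}{k-1}$; here is a small enumeration check (my own sketch, not part of the original answer):

from itertools import product
from math import comb

def count_by_enumeration(n, k):
    # count k-tuples of non-negative integers summing to n
    return sum(1 for bins in product(range(n + 1), repeat=k) if sum(bins) == n)

n, k = 5, 3
print(count_by_enumeration(n, k), comb(n + k - 1, k - 1))  # 21 21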
Joel Reyes Noche
See the Wikipedia article I linked to for more information. – Joel Reyes Noche Jan 8 '16 at 23:08
What is called the "fórmula del chicharronero" in Central Mexico (see the answer by Rodrigo Zepeda) is called the "Mitternachtsformel" ("midnight formula") in middle school in some parts of Germany. This is because, if someone wakes you up at midnight and asks for the roots of a parabola, you have to know this in a second.
On second thought, I think that the Mitternachtsformel is $$ x_{1/2} = -\tfrac{p}2\pm\sqrt{\tfrac{p^2}{4}-q} $$ for the roots of $$ x^2+px+q=0. $$
Is it your personal way to write $x_{1/2}$ or a common way in those parts of Germany? – Amir Asghari Jan 11 '16 at 17:13
Writing $x_{1/2}$ is common in middle school in Germany (at least it was when I was in school). – Dirk Jan 11 '16 at 17:57
When I went to school in Germany (in the 70s), we wrote $x_{1,2}$ instead. – Frunobulax Feb 8 '16 at 13:19
I was about to write a comment to @Amy B's "division house" answer, saying that in Iran we use
to denote $a$ divided by $b$, when a colleague entered the room asking me what I was doing. I explained, and she told me that in their primary school (somewhere in Leicestershire in England) they called it (the symbol drawn by Amy B) a "bus stop", where the bigger number needs to be covered and the smaller number remains outside! I thought it was worth mentioning as a separate answer.
Reading a paper from an English student in England I came across this sentence:
To expand brackets in Algebra, they were taught the "crab claw" method, a method that I was used to.
Since I had personally never heard of the term "crab claw" in Algebra, I thought it would be beneficial to add it here.
In Dutch, students call this the "parrot beak" or "bird beak" method: wisfaq.nl/bestanden/q8350img3.gif – rchard2scout Aug 21 '19 at 10:43
The identities $(a + b) (a - b) = a^2 - b^2$ and $(a + b)^2 = a^2 + 2 a b + b^2$ (and sometimes $(x - a) (x - b) = x^2 - (a + b) x + a b$) are called "productos notables" (notable products) in Spanish. This is sometimes extended to the products mentioned in the question, and even higher order ones.
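These identities are easy to verify symbolically; a minimal check of my own using sympy (assuming sympy is installed):

import sympy as sp

a, b, x = sp.symbols('a b x')
print(sp.expand((a + b) * (a - b)))   # a**2 - b**2
print(sp.expand((a + b)**2))          # a**2 + 2*a*b + b**2
print(sp.expand((x - a) * (x - b)))   # a*b - a*x - b*x + x**2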
vonbrand
In Dutch, they are known as "merkwaardige producten" (also notable/noteworthy products). – J W Jan 8 '16 at 14:57
This would be the category of "special products" for most English texts (which includes the given "difference of squares" and "binomial square", as well as the OP's "difference of cubes" and "sum of cubes"). I don't think that's really "slang", as it's properly-descriptive, and used widely in the literature. – Daniel R. Collins Jan 8 '16 at 17:42
@vonbrand: you might want to add a geographical restriction to your statement. I did all my schooling (from kindergarten to PhD) in Spanish and I never heard that expression. – Martin Argerami Jan 9 '16 at 12:15
In Russian they are called "Формулы сокращенного умножения" -- "formulas for reduced/condensed multiplication" (not sure which is a better translation). – zipirovich Mar 10 '17 at 3:10
My school talked about sausages and cocktail-sticks, meaning questions where you have to find the formula for the $n$th term in the sequence. I never worked out what the point was.
Jessica B
I guess it somehow refers to using natural numbers (cocktail sticks) one to one to pick the numbers (sausages) of the second set. Funny :) – Amir Asghari Jan 8 '16 at 15:26
The sausages are the two lists of numbers in ovals, the cocktail stick is the line between, that somehow represents the rule for getting from one to the other. But I have no idea why drawing it that way helps. ... Actually, looking at it now, I wonder whether it's a corruption of a standard function picture, with an arrow from the domain to the range. – Jessica B Jan 8 '16 at 16:20
Aha! Silly me. That is why you included the picture in your answer. – Amir Asghari Jan 8 '16 at 17:13
The phrase chain rule in Calculus is a mathematical slang.
The rule goes like this: If $f(x) = g(h(x))$, then $f'(x) = g'(h(x))h'(x)$.
When I learned the rule at school, my teachers just called it "the function of a function rule", which describes precisely the situation in which you use it. When I got to university in a different state, they kept mentioning this "chain rule" and I had not the slightest clue what they were talking about.
DavidButlerUofA
I do not agree that this is slang. It may have German roots: composition of two functions is called "Verkettung" in German, hence "Kettenregel" for the derivative of a "Verkettung", and this translates to "chain rule". – Dirk Jan 12 '16 at 8:22
But to an English-speaking student with no knowledge of German, it is completely unrelated to the rule it describes, so it might as well be slang. – DavidButlerUofA Jan 12 '16 at 9:06
Since the English Wikipedia does not mention any other name for the chain rule, I guess that this is the common name (I see the same from my limited knowledge of English textbooks). On a different matter: would "Nullstellensatz" count as slang? – Dirk Jan 12 '16 at 9:42
Hmm. My interpretation of the question was that it was seeking terminology that doesn't seem to relate to the thing it names. So I thought my answer fit. Rereading the definition of slang given in the question, maybe my answer doesn't fit after all. – DavidButlerUofA Jan 12 '16 at 13:25
Yeah, it seems quite hard to pin down what "slang" is… Anyway, interesting to see that "chain rule" does appear strange without some background in German! – Dirk Jan 12 '16 at 14:09
In Uruguay (at least) famous formulas are often referred to using the name of a person who has been associated to their initial discovery. For instance (see my comment above), the quadratic formula is called Bhaskara and the corollary to the first part of the Fundamental Theorem of Calculus $\int_a^b f(t) dt = F(b)-F(a)$ is called Barrow. Mathematicians there say "by Barrow" or "by Bhaskara" which I find much more interesting than "by the Corollary to the first part of the Fundamental Theorem of Calculus" or "by the quadratic formula". It feels more human to me.
ncr
While an interesting fact, this does not really sound like slang but like normal mathematical language. – Dirk Jan 11 '16 at 8:28
As an Iranian, I certainly wish you used "by al-Khwārizmī" instead of "by Bhaskara" :) – Amir Asghari Jan 11 '16 at 17:18
|
CommonCrawl
|
Exercise 19.8
Write only the word/term for each of the following descriptions:
the mass of one mole of a substance
the number of particles in one mole of a substance
\(\text{5}\) \(\text{g}\) of magnesium chloride is formed as the product of a chemical reaction. Select the true statement from the answers below:
\(\text{0,08}\) moles of magnesium chloride are formed in the reaction
the number of atoms of \(\text{Cl}\) in the product is \(\text{0,6022} \times \text{10}^{\text{23}}\)
the number of atoms of \(\text{Mg}\) is \(\text{0,05}\)
the atomic ratio of \(\text{Mg}\) atoms to \(\text{Cl}\) atoms in the product is \(1:1\)
2 moles of oxygen gas react with hydrogen. What is the mass of oxygen in the reactants?
\(\text{32}\) \(\text{g}\)
\(\text{0,125}\) \(\text{g}\)
In the compound potassium sulphate (\(\text{K}_{2}\text{SO}_{4}\)), oxygen makes up \(x\%\) of the mass of the compound. \(x = ?\)
\(\text{36,8}\)
\(\text{9,2}\)
The concentration of a \(\text{150}\) \(\text{cm$^{3}$}\) solution, containing \(\text{5}\) \(\text{g}\) of \(\text{NaCl}\) is:
\(\text{0,09}\) \(\text{mol·dm$^{-3}$}\)
\(\text{5,7} \times \text{10}^{-\text{4}}\) \(\text{mol·dm$^{-3}$}\)
Calculate the number of moles in each of the following (see the sketch after this list):
\(\text{5}\) \(\text{g}\) of methane (\(\text{CH}_{4}\))
\(\text{3,4}\) \(\text{g}\) of hydrochloric acid
\(\text{6,2}\) \(\text{g}\) of potassium permanganate (\(\text{KMnO}_{4}\))
\(\text{4}\) \(\text{g}\) of neon
\(\text{9,6}\) \(\text{kg}\) of titanium tetrachloride (\(\text{TiCl}_{4}\))
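A short computational sketch for the first three items, using \(n = \frac{m}{M}\) with approximate molar masses in g·mol\(^{-1}\) (my own illustration, not part of the exercise set; exact answers depend on the atomic masses used):

molar_mass = {
    "CH4": 12.0 + 4 * 1.0,            # ~16 g/mol
    "HCl": 1.0 + 35.45,               # ~36.45 g/mol
    "KMnO4": 39.1 + 54.9 + 4 * 16.0,  # ~158 g/mol
}

for formula, mass_g in [("CH4", 5.0), ("HCl", 3.4), ("KMnO4", 6.2)]:
    print(formula, round(mass_g / molar_mass[formula], 3), "mol")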
Calculate the mass of:
\(\text{0,2}\) \(\text{mol}\) of potassium hydroxide (\(\text{KOH}\))
\(\text{0,47}\) \(\text{mol}\) of nitrogen dioxide
\(\text{5,2}\) \(\text{mol}\) of helium
\(\text{0,05}\) \(\text{mol}\) of copper (II) chloride (\(\text{CuCl}_{2}\))
\(\text{31,31} \times \text{10}^{\text{23}}\) molecules of carbon monoxide (CO)
Calculate the percentage that each element contributes to the overall mass of:
Chloro-benzene (\(\text{C}_{6}\text{H}_{5}\text{Cl}\))
Lithium hydroxide (\(\text{LiOH}\))
CFC's (chlorofluorocarbons) are one of the gases that contribute to the depletion of the ozone layer. A chemist analysed a CFC and found that it contained \(\text{58,64}\%\) chlorine, \(\text{31,43}\%\) fluorine and \(\text{9,93}\%\) carbon. What is the empirical formula?
\(\text{14}\) \(\text{g}\) of nitrogen combines with oxygen to form \(\text{46}\) \(\text{g}\) of a nitrogen oxide. Use this information to work out the formula of the oxide.
Iodine can exist as one of three oxides (\(\text{I}_{2}\text{O}_{4}\); \(\text{I}_{2}\text{O}_{5}\); \(\text{I}_{4}\text{O}_{9}\)). A chemist has produced one of these oxides and wishes to know which one they have. If he started with \(\text{508}\) \(\text{g}\) of iodine and formed \(\text{652}\) \(\text{g}\) of the oxide, which oxide has he produced?
A fluorinated hydrocarbon (a hydrocarbon is a chemical compound containing hydrogen and carbon) was analysed and found to contain \(\text{8,57}\%\) \(\text{H}\), \(\text{51,05}\%\) \(\text{C}\) and \(\text{40,38}\%\) \(\text{F}\).
What is its empirical formula?
What is the molecular formula if the molar mass is \(\text{94,1}\) \(\text{g·mol$^{-1}$}\)?
Copper sulphate crystals often include water. A chemist is trying to determine the number of moles of water in the copper sulphate crystals. She weighs out \(\text{3}\) \(\text{g}\) of copper sulphate and heats this. After heating, she finds that the mass is \(\text{1,9}\) \(\text{g}\). What is the number of moles of water in the crystals? (Copper sulphate is represented by \(\text{CuSO}_{4}.\text{xH}_{2}\text{O}\)).
\(\text{300}\) \(\text{cm$^{3}$}\) of a \(\text{0,1}\) \(\text{mol·dm$^{-3}$}\) solution of sulphuric acid is added to \(\text{200}\) \(\text{cm$^{3}$}\) of a \(\text{0,5}\) \(\text{mol·dm$^{-3}$}\) solution of sodium hydroxide.
Write down a balanced equation for the reaction which takes place when these two solutions are mixed.
Calculate the number of moles of sulphuric acid which were added to the sodium hydroxide solution.
Is the number of moles of sulphuric acid enough to fully neutralise the sodium hydroxide solution? Support your answer by showing all relevant calculations.
A learner is asked to make \(\text{200}\) \(\text{cm$^{3}$}\) of sodium hydroxide (\(\text{NaOH}\)) solution of concentration \(\text{0,5}\) \(\text{mol·dm$^{-3}$}\).
Determine the mass of sodium hydroxide pellets he needs to use to do this.
Using an accurate balance the learner accurately measures the correct mass of the NaOH pellets. To the pellets he now adds exactly \(\text{200}\) \(\text{cm$^{3}$}\) of pure water. Will his solution have the correct concentration? Explain your answer.
The learner then takes \(\text{300}\) \(\text{cm$^{3}$}\) of a \(\text{0,1}\) \(\text{mol·dm$^{-3}$}\) solution of sulphuric acid (\(\text{H}_{2}\text{SO}_{4}\)) and adds it to \(\text{200}\) \(\text{cm$^{3}$}\) of a \(\text{0,5}\) \(\text{mol·dm$^{-3}$}\) solution of \(\text{NaOH}\) at \(\text{25}\) \(\text{℃}\).
Calculate the number of moles of \(\text{H}_{2}\text{SO}_{4}\) which were added to the \(\text{NaOH}\) solution.
\(\text{96,2}\) \(\text{g}\) sulphur reacts with an unknown quantity of zinc according to the following equation: \(\text{Zn} + \text{S} \rightarrow \text{ZnS}\)
What mass of zinc will you need for the reaction, if all the sulphur is to be used up?
Calculate the theoretical yield for this reaction.
It is found that \(\text{275}\) \(\text{g}\) of zinc sulphide was produced. Calculate the % yield.
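All three parts follow from the 1:1:1 mole ratios in \(\text{Zn} + \text{S} \rightarrow \text{ZnS}\). A sketch with rounded molar masses (assumed values):

```python
# Sketch for the Zn + S -> ZnS questions (1:1:1 mole ratios).
M_ZN, M_S, M_ZNS = 65.4, 32.1, 97.5   # g/mol, rounded assumed values

n_s = 96.2 / M_S                      # ~3.0 mol of sulphur
print(n_s * M_ZN)                     # mass of zinc needed: ~196 g
theoretical = n_s * M_ZNS             # theoretical yield of ZnS: ~292 g
print(100 * 275 / theoretical)        # percentage yield: ~94 %
```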
Calcium chloride reacts with carbonic acid to produce calcium carbonate and hydrochloric acid according to the following equation:
\[\text{CaCl}_{2} + \text{H}_{2}\text{CO}_{3} \rightarrow \text{CaCO}_{3} + 2\text{HCl}\]
If you want to produce \(\text{10}\) \(\text{g}\) of calcium carbonate through this chemical reaction, what quantity (in g) of calcium chloride will you need at the start of the reaction?
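Because the mole ratio of \(\text{CaCl}_{2}\) to \(\text{CaCO}_{3}\) is 1:1, the calculation is mass to moles to mass. A sketch with rounded molar masses:

```python
# Sketch: reactant mass needed for a target product mass (1:1 mole ratio).
M_CACO3, M_CACL2 = 100.1, 111.0   # g/mol, rounded assumed values

n_product = 10.0 / M_CACO3        # moles of CaCO3 required
print(n_product * M_CACL2)        # ~11.1 g of CaCl2 needed at the start
```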
Taylor Series Formula
The Taylor series formula is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. The concept of a Taylor series was formulated by the Scottish mathematician James Gregory and formally introduced by the English mathematician Brook Taylor in 1715.
A function can be approximated by using a finite number of terms of its Taylor series. Taylor's theorem gives quantitative estimates on the error introduced by the use of such an approximation. The polynomial formed by taking some initial terms of the Taylor series is called a Taylor polynomial.
The Taylor series of a function is the limit of that function's Taylor polynomials as the degree increases, provided that the limit exists. A function may not be equal to its Taylor series, even if its Taylor series converges at every point. A function that is equal to its Taylor series in an open interval (or a disc in the complex plane) is known as an analytic function in that interval.
Formula for Taylor Series
The Taylor series of a real or complex-valued function f(x) that is infinitely differentiable at a real or complex number "a" is the power series:
\[f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^{2} + \frac{f'''(a)}{3!}(x-a)^{3} + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^{n} + \cdots\]
Equivalently, the formula can be written more compactly using sigma notation:
\[f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(a)}{n!}\left(x-a\right)^{n}\]
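As a numerical illustration (a sketch, not part of the original page): truncating the sum after n terms gives the n-th Taylor polynomial. For f(x) = e^x about a = 0 every derivative equals 1, so the partial sums at x = 1 converge rapidly to e.

```python
import math

# Evaluate the Taylor polynomial of f about a, given the derivative values
# f(a), f'(a), ..., f^(n)(a) at the centre.
def taylor_polynomial(derivs_at_a, a, x):
    return sum(d / math.factorial(n) * (x - a) ** n
               for n, d in enumerate(derivs_at_a))

# For f(x) = e^x about a = 0, every derivative equals 1.
for n_terms in (2, 4, 8, 12):
    approx = taylor_polynomial([1.0] * n_terms, 0.0, 1.0)
    print(n_terms, approx, math.e - approx)  # the error shrinks rapidly
```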
Solved Example Using the Taylor Series Formula
Example: Find the Taylor series with center $x_{0}=0$ for the hyperbolic cosine function f(x) = cosh x, using the fact that cosh x is the derivative of the hyperbolic sine function sinh x, whose Taylor series expansion is
$\sinh x=\sum_{n=0}^{\infty}\frac{x^{2n+1}}{\left(2n+1\right)!}.$
(Note: if you recall the Taylor expansions of sin x and cos x, you can see why their hyperbolic counterparts deserve the names "sine" and "cosine".)
Since
$\sinh x=\sum_{n=0}^{\infty}\frac{x^{2n+1}}{\left(2n+1\right)!}=x+\frac{x^{3}}{3!}+\frac{x^{5}}{5!}+\cdots,$
its derivative cosh x has the Taylor expansion
$\cosh x=\sum_{n=0}^{\infty}\frac{\left(2n+1\right)x^{2n}}{\left(2n+1\right)!}=\sum_{n=0}^{\infty}\frac{x^{2n}}{\left(2n\right)!}.$
The last step follows from the fact that
$\frac{2n+1}{\left(2n+1\right)!}=\frac{2n+1}{1\cdot2\cdots\left(2n\right)\cdot\left(2n+1\right)}=\frac{1}{\left(2n\right)!}.$
Note: the hyperbolic functions differ from their trigonometric counterparts in that their series do not have alternating signs.
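A quick numerical sanity check of the derived series (a sketch; the ten-term truncation is an arbitrary choice):

```python
import math

# Partial sums of the series derived above should match math.cosh closely.
def cosh_series(x, terms=10):
    return sum(x ** (2 * n) / math.factorial(2 * n) for n in range(terms))

for x in (0.5, 1.0, 2.0):
    print(x, cosh_series(x), math.cosh(x))
```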
Sample records for Born-Green-Yvon equation
Strain Induced Adatom Correlations
Kappus, Wolfgang
A Born-Green-Yvon type model for adatom density correlations is combined with a model for adatom interactions mediated by the strain in elastic anisotropic substrates. The resulting nonlinear integral equation is solved numerically for coverages from zero to a limit given by stability constraints. W, Nb, Ta and Au surfaces are taken as examples to show the effects of different elastic anisotropy regions. Results of the calculation are shown by appropriate plots and discussed. A mapping to superstructures is tried. Corresponding adatom configurations from Monte Carlo simulations are shown.
Integral equations
Moiseiwitsch, B L
Two distinct but related approaches hold the solutions to many mathematical problems--the forms of expression known as differential and integral equations. The method employed by the integral equation approach specifically includes the boundary conditions, which confers a valuable advantage. In addition, the integral equation approach leads naturally to the solution of the problem--under suitable conditions--in the form of an infinite series.Geared toward upper-level undergraduate students, this text focuses chiefly upon linear integral equations. It begins with a straightforward account, acco
Tricomi, FG
Based on his extensive experience as an educator, F. G. Tricomi wrote this practical and concise teaching text to offer a clear idea of the problems and methods of the theory of differential equations. The treatment is geared toward advanced undergraduates and graduate students and addresses only questions that can be resolved with rigor and simplicity.Starting with a consideration of the existence and uniqueness theorem, the text advances to the behavior of the characteristics of a first-order equation, boundary problems for second-order linear equations, asymptotic methods, and diff
Barbu, Viorel
This textbook is a comprehensive treatment of ordinary differential equations, concisely presenting basic and essential results in a rigorous manner. Including various examples from physics, mechanics, natural sciences, engineering and automatic theory, Differential Equations is a bridge between the abstract theory of differential equations and applied systems theory. Particular attention is given to the existence and uniqueness of the Cauchy problem, linear differential systems, stability theory and applications to first-order partial differential equations. Upper undergraduate students and researchers in applied mathematics and systems theory with a background in advanced calculus will find this book particularly useful. Supplementary topics are covered in an appendix enabling the book to be completely self-contained.
Bernoulli's Equation
regarding nature of forces hold equally for liquids, even though the ... particle. Figure A. A fluid particle is a very small imaginary blob of fluid, here shown schematically in ... picture gives important information about the flow field. ... Bernoulli's equation is derived assuming ideal flow, ... weight acting in the flow direction S is.
Relativistic equations
Gross, F.
Relativistic equations for two and three body scattering are discussed. Particular attention is paid to relativistic three body kinetics because of the form factor measurements of the Helium 3 - Hydrogen 3 system recently completed at Saclay and Bates and the accompanying speculation that relativistic effects are important for understanding the three nucleon system. 16 refs., 4 figs
Differential Equations Compatible with KZ Equations
Felder, G.; Markov, Y.; Tarasov, V.; Varchenko, A.
We define a system of 'dynamical' differential equations compatible with the KZ differential equations. The KZ differential equations are associated to a complex simple Lie algebra g. These are equations on a function of n complex variables z_i taking values in the tensor product of n finite dimensional g-modules. The KZ equations depend on the 'dual' variable in the Cartan subalgebra of g. The dynamical differential equations are differential equations with respect to the dual variable. We prove that the standard hypergeometric solutions of the KZ equations also satisfy the dynamical equations. As an application we give a new determinant formula for the coordinates of a basis of hypergeometric solutions
Extended rate equations
Shore, B.W.
The equations of motion are discussed which describe time dependent population flows in an N-level system, reviewing the relationship between incoherent (rate) equations, coherent (Schrödinger) equations, and more general partially coherent (Bloch) equations. Approximations are discussed which replace the elaborate Bloch equations by simpler rate equations whose coefficients incorporate long-time consequences of coherence
Partial Differential Equations
The volume contains a selection of papers presented at the 7th Symposium on differential geometry and differential equations (DD7) held at the Nankai Institute of Mathematics, Tianjin, China, in 1986. Most of the contributions are original research papers on topics including elliptic equations, hyperbolic equations, evolution equations, non-linear equations from differential geometry and mechanics, micro-local analysis.
Equating error in observed-score equating
van der Linden, Willem J.
Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of
Chemical Equation Balancing.
Blakley, G. R.
Reviews mathematical techniques for solving systems of homogeneous linear equations and demonstrates that the algebraic method of balancing chemical equations is a matter of solving a system of homogeneous linear equations. FORTRAN programs applying this matrix method to chemical equation balancing are available from the author. (JN)
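The record's central observation invites a short illustration. The sketch below is not the author's FORTRAN program; it balances C3H8 + O2 → CO2 + H2O by computing the nullspace of an element-by-species matrix with sympy, negating the product columns so that conservation of each element becomes a homogeneous linear equation.

```python
from sympy import Matrix, lcm

# Rows: elements (C, H, O).  Columns: C3H8, O2, CO2, H2O (products negated).
A = Matrix([
    [3, 0, -1,  0],   # carbon balance
    [8, 0,  0, -2],   # hydrogen balance
    [0, 2, -2, -1],   # oxygen balance
])

v = A.nullspace()[0]                   # the solution space is one-dimensional
scale = lcm([entry.q for entry in v])  # clear the rational denominators
print([entry * scale for entry in v])  # [1, 5, 3, 4]:
# C3H8 + 5 O2 -> 3 CO2 + 4 H2O
```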
Handbook of integral equations
Polyanin, Andrei D
This handbook contains over 2,500 integral equations with solutions as well as analytical and numerical methods for solving linear and nonlinear equations. It explores Volterra, Fredholm, Wiener-Hopf, Hammerstein, Uryson, and other equations that arise in mathematics, physics, engineering, the sciences, and economics. This second edition includes new chapters on mixed multidimensional equations and methods of integral equations for ODEs and PDEs, along with over 400 new equations with exact solutions. With many examples added for illustrative purposes, it presents new material on Volterra, Fredholm, singular, hypersingular, dual, and nonlinear integral equations, integral transforms, and special functions.
Introduction to differential equations
Taylor, Michael E
The mathematical formulations of problems in physics, economics, biology, and other sciences are usually embodied in differential equations. The analysis of the resulting equations then provides new insight into the original problems. This book describes the tools for performing that analysis. The first chapter treats single differential equations, emphasizing linear and nonlinear first order equations, linear second order equations, and a class of nonlinear second order equations arising from Newton's laws. The first order linear theory starts with a self-contained presentation of the exponen
Nonlinear evolution equations
Uraltseva, N N
This collection focuses on nonlinear problems in partial differential equations. Most of the papers are based on lectures presented at the seminar on partial differential equations and mathematical physics at St. Petersburg University. Among the topics explored are the existence and properties of solutions of various classes of nonlinear evolution equations, nonlinear imbedding theorems, bifurcations of solutions, and equations of mathematical physics (Navier-Stokes type equations and the nonlinear Schrödinger equation). The book will be useful to researchers and graduate students working in p
Benney's long wave equations
Lebedev, D.R.
Benney's equations of motion of incompressible nonviscous fluid with free surface in the approximation of long waves are analyzed. The connection between the Lie algebra of Hamilton plane vector fields and the Benney's momentum equations is shown
Fractional Schroedinger equation
Laskin, Nick
Some properties of the fractional Schroedinger equation are studied. We prove the Hermiticity of the fractional Hamilton operator and establish the parity conservation law for fractional quantum mechanics. As physical applications of the fractional Schroedinger equation we find the energy spectra of a hydrogenlike atom (fractional 'Bohr atom') and of a fractional oscillator in the semiclassical approximation. An equation for the fractional probability current density is developed and discussed. We also discuss the relationships between the fractional and standard Schroedinger equations
Ordinary differential equations
Greenberg, Michael D
Features a balance between theory, proofs, and examples and provides applications across diverse fields of study Ordinary Differential Equations presents a thorough discussion of first-order differential equations and progresses to equations of higher order. The book transitions smoothly from first-order to higher-order equations, allowing readers to develop a complete understanding of the related theory. Featuring diverse and interesting applications from engineering, bioengineering, ecology, and biology, the book anticipates potential difficulties in understanding the various solution steps
Beginning partial differential equations
O'Neil, Peter V
A broad introduction to PDEs with an emphasis on specialized topics and applications occurring in a variety of fields. Featuring a thoroughly revised presentation of topics, Beginning Partial Differential Equations, Third Edition provides a challenging, yet accessible, combination of techniques, applications, and introductory theory on the subject of partial differential equations. The new edition offers nonstandard coverage on material including Burgers' equation, the telegraph equation, damped wave motion, and the use of characteristics to solve nonhomogeneous problems. The Third Edition is or
Averaged RMHD equations
Ichiguchi, Katsuji
A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)
Singular stochastic differential equations
Cherny, Alexander S
The authors introduce, in this research monograph on stochastic differential equations, a class of points termed isolated singular points. Stochastic differential equations possessing such points (called singular stochastic differential equations here) arise often in theory and in applications. However, known conditions for the existence and uniqueness of a solution typically fail for such equations. The book concentrates on the study of the existence, the uniqueness, and, what is most important, on the qualitative behaviour of solutions of singular stochastic differential equations. This is done by providing a qualitative classification of isolated singular points, into 48 possible types.
On separable Pauli equations
Zhalij, Alexander
We classify (1+3)-dimensional Pauli equations for a spin-(1/2) particle interacting with the electro-magnetic field, that are solvable by the method of separation of variables. As a result, we obtain the 11 classes of vector-potentials of the electro-magnetic field A(t,x(vector sign))=(A 0 (t,x(vector sign)), A(vector sign)(t,x(vector sign))) providing separability of the corresponding Pauli equations. It is established, in particular, that the necessary condition for the Pauli equation to be separable into second-order matrix ordinary differential equations is its equivalence to the system of two uncoupled Schroedinger equations. In addition, the magnetic field has to be independent of spatial variables. We prove that coordinate systems and the vector-potentials of the electro-magnetic field providing the separability of the corresponding Pauli equations coincide with those for the Schroedinger equations. Furthermore, an efficient algorithm for constructing all coordinate systems providing the separability of Pauli equation with a fixed vector-potential of the electro-magnetic field is developed. Finally, we describe all vector-potentials A(t,x(vector sign)) that (a) provide the separability of Pauli equation, (b) satisfy vacuum Maxwell equations without currents, and (c) describe non-zero magnetic field
Functional equations with causal operators
Corduneanu, C
Functional equations encompass most of the equations used in applied science and engineering: ordinary differential equations, integral equations of the Volterra type, equations with delayed argument, and integro-differential equations of the Volterra type. The basic theory of functional equations includes functional differential equations with causal operators. Functional Equations with Causal Operators explains the connection between equations with causal operators and the classical types of functional equations encountered by mathematicians and engineers. It details the fundamentals of linear equations and stability theory and provides several applications and examples.
Evans, Lawrence C
This text gives a comprehensive survey of modern techniques in the theoretical study of partial differential equations (PDEs) with particular emphasis on nonlinear equations. The exposition is divided into three parts: representation formulas for solutions; theory for linear partial differential equations; and theory for nonlinear partial differential equations. Included are complete treatments of the method of characteristics; energy methods within Sobolev spaces; regularity for second-order elliptic, parabolic, and hyperbolic equations; maximum principles; the multidimensional calculus of variations; viscosity solutions of Hamilton-Jacobi equations; shock waves and entropy criteria for conservation laws; and, much more.The author summarizes the relevant mathematics required to understand current research in PDEs, especially nonlinear PDEs. While he has reworked and simplified much of the classical theory (particularly the method of characteristics), he primarily emphasizes the modern interplay between funct...
Nonlinear Dirac Equations
Wei Khim Ng
We construct nonlinear extensions of Dirac's relativistic electron equation that preserve its other desirable properties such as locality, separability, conservation of probability and Poincaré invariance. We determine the constraints that the nonlinear term must obey and classify the resultant non-polynomial nonlinearities in a double expansion in the degree of nonlinearity and number of derivatives. We give explicit examples of such nonlinear equations, studying their discrete symmetries and other properties. Motivated by some previously suggested applications we then consider nonlinear terms that simultaneously violate Lorentz covariance and again study various explicit examples. We contrast our equations and construction procedure with others in the literature and also show that our equations are not gauge equivalent to the linear Dirac equation. Finally we outline various physical applications for these equations.
Differential equations for dummies
Holzner, Steven
The fun and easy way to understand and solve complex equations Many of the fundamental laws of physics, chemistry, biology, and economics can be formulated as differential equations. This plain-English guide explores the many applications of this mathematical tool and shows how differential equations can help us understand the world around us. Differential Equations For Dummies is the perfect companion for a college differential equations course and is an ideal supplemental resource for other calculus classes as well as science and engineering courses. It offers step-by-step techniques, practical tips, numerous exercises, and clear, concise examples to help readers improve their differential equation-solving skills and boost their test scores.
Degenerate nonlinear diffusion equations
Favini, Angelo
The aim of these notes is to include in a uniform presentation style several topics related to the theory of degenerate nonlinear diffusion equations, treated in the mathematical framework of evolution equations with multivalued m-accretive operators in Hilbert spaces. The problems concern nonlinear parabolic equations involving two cases of degeneracy. More precisely, one case is due to the vanishing of the time derivative coefficient and the other is provided by the vanishing of the diffusion coefficient on subsets of positive measure of the domain. From the mathematical point of view the results presented in these notes can be considered as general results in the theory of degenerate nonlinear diffusion equations. However, this work does not seek to present an exhaustive study of degenerate diffusion equations, but rather to emphasize some rigorous and efficient techniques for approaching various problems involving degenerate nonlinear diffusion equations, such as well-posedness, periodic solutions, asympt...
Drift-Diffusion Equation
K. Banoo
equation in the discrete momentum space. This is shown to be similar to the conventional drift-diffusion equation except that it is a more rigorous solution to the Boltzmann equation because the current and carrier densities are resolved into M×1 vectors, where M is the number of modes in the discrete momentum space. The mobility and diffusion coefficient become M×M matrices which connect the M momentum space modes. This approach is demonstrated by simulating electron transport in bulk silicon.
Solving Ordinary Differential Equations
Krogh, F. T.
The initial-value ordinary differential equation solution via variable order Adams method (SIVA/DIVA) package is a collection of subroutines for the solution of nonstiff ordinary differential equations. There are versions for single-precision and double-precision arithmetic. Requires fewer evaluations of derivatives than other variable-order Adams predictor/corrector methods. Option for direct integration of second-order equations makes integration of trajectory problems significantly more efficient. Written in FORTRAN 77.
Reactimeter dispersion equation
A.G. Yuferov
The aim of this work is to derive and analyze a reactimeter metrological model in the form of the dispersion equation which connects reactimeter input/output signal dispersions with superimposed random noise at the inlet. It is proposed to standardize the reactimeter equation form, presenting the main reactimeter computing unit by a convolution equation. Hence, the reactimeter metrological characteristics are completely determined by this unit hardware function which represents a transient re...
Differential equations I essentials
REA, Editors of
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Differential Equations I covers first- and second-order equations, series solutions, higher-order linear equations, and the Laplace transform.
A new evolution equation
Laenen, E.
We propose a new evolution equation for the gluon density relevant for the region of small x_B. It generalizes the GLR equation and allows deeper penetration in dense parton systems than the GLR equation does. This generalization consists of taking shadowing effects more comprehensively into account by including multigluon correlations, and allowing for an arbitrary initial gluon distribution in a hadron. We solve the new equation for fixed α_s. We find that the effects of multigluon correlations on the deep-inelastic structure function are small. (orig.)
Equational type logic
Manca, V.; Salibra, A.; Scollo, Giuseppe
Equational type logic is an extension of (conditional) equational logic, that enables one to deal in a single, unified framework with diverse phenomena such as partiality, type polymorphism and dependent types. In this logic, terms may denote types as well as elements, and atomic formulae are either
Alternative equations of gravitation
Pinto Neto, N.
It is shown, through a new formalism, that the quantum fluctuation effects of the gravitational field in Einstein's equations are analogous to the effects of a continuum medium in Maxwell's electrodynamics. Next, a concrete example of the application of these equations is studied. Quantum fluctuation effects as perturbation sources in Minkowski and Friedmann universes are examined. (L.C.) [pt]
Reduced Braginskii equations
Yagi, M. [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment; Horton, W. [Texas Univ., Austin, TX (United States). Inst. for Fusion Studies
A set of reduced Braginskii equations is derived without assuming flute ordering and the Boussinesq approximation. These model equations conserve the physical energy. It is crucial at finite β that we solve the perpendicular component of Ohm's law to conserve the physical energy while ensuring the relation ∇·j = 0.
Model Compaction Equation
The currently proposed model compaction equation was derived from data sourced from the Niger Delta and it relates porosity to depth for sandstones under hydrostatic pressure conditions. The equation is useful in predicting porosity and compaction trends in hydrostatic sands of the Niger Delta. GEOLOGICAL SETTING OF ...
The Wouthuysen equation
M. Hazewinkel (Michiel)
Dedication: I dedicate this paper to Prof. P.C. Baayen, at the occasion of his retirement on 20 December 1994. The beautiful equation which forms the subject matter of this paper was invented by Wouthuysen after he retired. The four complex variable Wouthuysen equation arises from an
The generalized Fermat equation
Beukers, F.
This article will be devoted to generalisations of Fermat's equation $x^n + y^n = z^n$. Very soon after the Wiles and Taylor proof of Fermat's Last Theorem, it was wondered what would happen if the exponents in the three term equation would be chosen differently. Or if coefficients other than 1 would
Applied partial differential equations
Logan, J David
This primer on elementary partial differential equations presents the standard material usually covered in a one-semester, undergraduate course on boundary value problems and PDEs. What makes this book unique is that it is a brief treatment, yet it covers all the major ideas: the wave equation, the diffusion equation, the Laplace equation, and the advection equation on bounded and unbounded domains. Methods include eigenfunction expansions, integral transforms, and characteristics. Mathematical ideas are motivated from physical problems, and the exposition is presented in a concise style accessible to science and engineering students; emphasis is on motivation, concepts, methods, and interpretation, rather than formal theory. This second edition contains new and additional exercises, and it includes a new chapter on the applications of PDEs to biology: age structured models, pattern formation; epidemic wave fronts, and advection-diffusion processes. The student who reads through this book and solves many of t...
Hyperbolic partial differential equations
Witten, Matthew
Hyperbolic Partial Differential Equations III is a refereed journal issue that explores the applications, theory, and/or applied methods related to hyperbolic partial differential equations, or problems arising out of hyperbolic partial differential equations, in any area of research. This journal issue is interested in all types of articles in terms of review, mini-monograph, standard study, or short communication. Some studies presented in this journal include discretization of ideal fluid dynamics in the Eulerian representation; a Riemann problem in gas dynamics with bifurcation; periodic M
Nonlinear diffusion equations
Wu Zhuo Qun; Li Hui Lai; Zhao Jun Ning
Nonlinear diffusion equations, an important class of parabolic equations, come from a variety of diffusion phenomena which appear widely in nature. They are suggested as mathematical models of physical problems in many fields, such as filtration, phase transition, biochemistry and dynamics of biological groups. In many cases, the equations possess degeneracy or singularity. The appearance of degeneracy or singularity makes the study more involved and challenging. Many new ideas and methods have been developed to overcome the special difficulties caused by the degeneracy and singularity, which
Differential equations problem solver
Arterburn, David R
REA's Problem Solvers is a series of useful, practical, and informative study guides. Each title in the series is complete step-by-step solution guide. The Differential Equations Problem Solver enables students to solve difficult problems by showing them step-by-step solutions to Differential Equations problems. The Problem Solvers cover material ranging from the elementary to the advanced and make excellent review books and textbook companions. They're perfect for undergraduate and graduate studies.The Differential Equations Problem Solver is the perfect resource for any class, any exam, and
Supersymmetric quasipotential equations
Zaikov, R.P.
A supersymmetric extension of the Logunov-Tavkhelidze quasipotential approach is suggested. The supersymmetric Bethe-Salpeter equation is taken as the starting equation. The transition from the four-time to the two-time Green function is made in the super-center-of-mass system. The two-time Green function has no inverse function in the whole spinor space. The resolvent operator is found using the Majorana character of the spinor wave function. The supersymmetric quasipotential equation is written. The consideration is carried out in the framework of the theory of chiral scalar superfields [ru]
Local instant conservation equations
Delaje, Dzh.
Local instant conservation equations for two-phase flow are derived. Derivation of the equations starts from the recording of integral laws of conservation for a fixed reference volume containing both phases. Transformation of the laws, using the Leibniz rule and the Gauss theorem, permits one to obtain the sum of two integrals over the volume and an integral over the surface. Integrals over the volume result in local instant differential equations, in particular for each phase, and integrals over the surface reflect local instant jump conditions on the interface surface
A rigorous, yet accessible, introduction to partial differential equations-updated in a valuable new edition Beginning Partial Differential Equations, Second Edition provides a comprehensive introduction to partial differential equations (PDEs) with a special focus on the significance of characteristics, solutions by Fourier series, integrals and transforms, properties and physical interpretations of solutions, and a transition to the modern function space approach to PDEs. With its breadth of coverage, this new edition continues to present a broad introduction to the field, while also addres
Miller, Richard K
Ordinary Differential Equations is an outgrowth of courses taught for a number of years at Iowa State University in the mathematics and the electrical engineering departments. It is intended as a text for a first graduate course in differential equations for students in mathematics, engineering, and the sciences. Although differential equations is an old, traditional, and well-established subject, the diverse backgrounds and interests of the students in a typical modern-day course cause problems in the selection and method of presentation of material. In order to compensate for this diversity,
Uncertain differential equations
Yao, Kai
This book introduces readers to the basic concepts of and latest findings in the area of differential equations with uncertain factors. It covers the analytic method and numerical method for solving uncertain differential equations, as well as their applications in the field of finance. Furthermore, the book provides a number of new potential research directions for uncertain differential equations. It will be of interest to researchers, engineers and students in the fields of mathematics, information science, operations research, industrial engineering, computer science, artificial intelligence, automation, economics, and management science.
This text presents the standard material usually covered in a one-semester, undergraduate course on boundary value problems and PDEs. Emphasis is placed on motivation, concepts, methods, and interpretation, rather than on formal theory. The concise treatment of the subject is maintained in this third edition covering all the major ideas: the wave equation, the diffusion equation, the Laplace equation, and the advection equation on bounded and unbounded domains. Methods include eigenfunction expansions, integral transforms, and characteristics. In this third edition, text remains intimately tied to applications in heat transfer, wave motion, biological systems, and a variety of other topics in pure and applied science. The text offers flexibility to instructors who, for example, may wish to insert topics from biology or numerical methods at any time in the course. The exposition is presented in a friendly, easy-to-read style, with mathematical ideas motivated from physical problems. Many exercises and worked e...
Nonlinear differential equations
Dresner, L.
This report is the text of a graduate course on nonlinear differential equations given by the author at the University of Wisconsin-Madison during the summer of 1987. The topics covered are: direction fields of first-order differential equations; the Lie (group) theory of ordinary differential equations; similarity solutions of second-order partial differential equations; maximum principles and differential inequalities; monotone operators and iteration; complementary variational principles; and stability of numerical methods. The report should be of interest to graduate students, faculty, and practicing scientists and engineers. No prior knowledge is required beyond a good working knowledge of the calculus. The emphasis is on practical results. Most of the illustrative examples are taken from the fields of nonlinear diffusion, heat and mass transfer, applied superconductivity, and helium cryogenics.
On Dust Charging Equation
Tsintsadze, Nodar L.; Tsintsadze, Levan N.
A general derivation of the charging equation of a dust grain is presented, and it is indicated where and when it can be used. A problem of linear fluctuations of charges on the surface of the dust grain is discussed.
Equations For Rotary Transformers
Salomon, Phil M.; Wiktor, Peter J.; Marchetto, Carl A.
Equations derived for input impedance, input power, and ratio of secondary current to primary current of rotary transformer. Used for quick analysis of transformer designs. Circuit model commonly used in textbooks on theory of ac circuits.
Problems in differential equations
Brenner, J L
More than 900 problems and answers explore applications of differential equations to vibrations, electrical engineering, mechanics, and physics. Problem types include both routine and nonroutine, and stars indicate advanced problems. 1963 edition.
DuChateau, Paul
Book focuses mainly on boundary-value and initial-boundary-value problems on spatially bounded and on unbounded domains; integral transforms; uniqueness and continuous dependence on data, first-order equations, and more. Numerous exercises included.
Modern nonlinear equations
Saaty, Thomas L
Covers major types of classical equations: operator, functional, difference, integro-differential, and more. Suitable for graduate students as well as scientists, technologists, and mathematicians. "A welcome contribution." - Math Reviews. 1964 edition.
SIMULTANEOUS DIFFERENTIAL EQUATION COMPUTER
Collier, D.M.; Meeks, L.A.; Palmer, J.P.
A description is given for an electronic simulator for a system of simultaneous differential equations, including nonlinear equations. As a specific example, a homogeneous nuclear reactor system including a reactor fluid, heat exchanger, and a steam boiler may be simulated, with the nonlinearity resulting from a consideration of temperature effects taken into account. The simulator includes three operational amplifiers, a multiplier, appropriate potential sources, and interconnecting R-C networks.
Structural Equations and Causation
Hall, Ned
Structural equations have become increasingly popular in recent years as tools for understanding causation. But standard structural equations approaches to causation face deep problems. The most philosophically interesting of these consists in their failure to incorporate a distinction between default states of an object or system, and deviations therefrom. Exploring this problem, and how to fix it, helps to illuminate the central role this distinction plays in our causal thinking.
Equations of radiation hydrodynamics
Mihalas, D.
The purpose of this paper is to give an overview of the role of radiation in the transport of energy and momentum in a combined matter-radiation fluid. The transport equation for a moving radiating fluid is presented in both a fully Eulerian and a fully Lagrangian formulation, along with conservation equations describing the dynamics of the fluid. Special attention is paid to the problem of deriving equations that are mutually consistent in each frame, and between frames, to O(v/c). A detailed analysis is made to show that in situations of broad interest, terms that are formally of O(v/c) actually dominate the solution, demonstrating that it is essential (1) to pay scrupulous attention to the question of the frame dependence in formulating the equations; and (2) to solve the equations to O(v/c) in quite general circumstances. These points are illustrated in the context of the nonequilibrium radiation diffusion limit, and a sketch of how the Lagrangian equations are to be solved will be presented
Quantum linear Boltzmann equation
Vacchini, Bassano; Hornberger, Klaus
We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.
Covariant field equations in supergravity
Vanhecke, Bram [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium); Ghent University, Faculty of Physics, Gent (Belgium); Proeyen, Antoine van [KU Leuven, Institute for Theoretical Physics, Leuven (Belgium)
Covariance is a useful property for handling supergravity theories. In this paper, we prove a covariance property of supergravity field equations: under reasonable conditions, field equations of supergravity are covariant modulo other field equations. We prove that for any supergravity there exist such covariant equations of motion, other than the regular equations of motion, that are equivalent to the latter. The relations that we find between field equations and their covariant form can be used to obtain multiplets of field equations. In practice, the covariant field equations are easily found by simply covariantizing the ordinary field equations.
Differential Equation over Banach Algebra
Kleyn, Aleks
In the book, I considered differential equations of order $1$ over Banach $D$-algebra: differential equation solved with respect to the derivative; exact differential equation; linear homogeneous equation. In noncommutative Banach algebra, initial value problem for linear homogeneous equation has infinitely many solutions.
Transport equation solving methods
Granjean, P.M.
This work is mainly devoted to C_N and F_N methods. C_N method: starting from a lemma stated by Placzek, an equivalence is established between two problems: the first one is defined in a finite medium bounded by a surface S, the second one is defined in the whole space. In the first problem the angular flux on the surface S is shown to be the solution of an integral equation. This equation is solved by Galerkin's method. The C_N method is applied here to one-velocity problems: in plane geometry, slab albedo and transmission with Rayleigh scattering, calculation of the extrapolation length; in cylindrical geometry, albedo and extrapolation length calculation with linear scattering. F_N method: the basic integral transport equation of the C_N method is integrated on Case's elementary distributions; another integral transport equation is obtained: this equation is solved by a collocation method. The plane problems solved by the C_N method are also solved by the F_N method. The F_N method is extended to any polynomial scattering law. Some simple spherical problems are also studied. Chandrasekhar's method, collision probability method, and Case's method are presented for comparison with the C_N and F_N methods. This comparison shows the respective advantages of the two methods: a) fast convergence and possible extension to various geometries for the C_N method; b) easy calculations and easy extension to polynomial scattering for the F_N method [fr]
Introduction to partial differential equations
Greenspan, Donald
Designed for use in a one-semester course by seniors and beginning graduate students, this rigorous presentation explores practical methods of solving differential equations, plus the unifying theory underlying the mathematical superstructure. Topics include basic concepts, Fourier series, second-order partial differential equations, wave equation, potential equation, heat equation, approximate solution of partial differential equations, and more. Exercises appear at the ends of most chapters. 1961 edition.
Quadratic Diophantine equations
This monograph treats the classical theory of quadratic Diophantine equations and guides the reader through the last two decades of computational techniques and progress in the area. These new techniques combined with the latest increases in computational power shed new light on important open problems. The authors motivate the study of quadratic Diophantine equations with excellent examples, open problems, and applications. Moreover, the exposition aptly demonstrates many applications of results and techniques from the study of Pell-type equations to other problems in number theory. The book is intended for advanced undergraduate and graduate students as well as researchers. It challenges the reader to apply not only specific techniques and strategies, but also to employ methods and tools from other areas of mathematics, such as algebra and analysis.
Stochastic porous media equations
Barbu, Viorel; Röckner, Michael
Focusing on stochastic porous media equations, this book places an emphasis on existence theorems, asymptotic behavior and ergodic properties of the associated transition semigroup. Stochastic perturbations of the porous media equation have previously been considered by physicists, but rigorous mathematical existence results have only recently been found. The porous media equation models a number of different physical phenomena, including the flow of an ideal gas and the diffusion of a compressible fluid through porous media, and also thermal propagation in plasma and plasma radiation. Another important application is to a model of the standard self-organized criticality process, called the "sand-pile model" or the "Bak-Tang-Wiesenfeld model". The book will be of interest to PhD students and researchers in mathematics, physics and biology.
Boussinesq evolution equations
Bredmose, Henrik; Schaffer, H.; Madsen, Per A.
This paper deals with the possibility of using methods and ideas from time domain Boussinesq formulations in the corresponding frequency domain formulations. We term such frequency domain models "evolution equations". First, we demonstrate that the numerical efficiency of the deterministic Boussinesq evolution equations of Madsen and Sorensen [Madsen, P.A., Sorensen, O.R., 1993. Bound waves and triad interactions in shallow water. Ocean Eng. 20, 359-388] can be improved by using Fast Fourier Transforms to evaluate the nonlinear terms. For a practical example of irregular waves propagating over a submerged bar, it is demonstrated that evolution equations utilising FFT can be solved around 100 times faster than the corresponding time domain model. Use of FFT provides an efficient bridge between the frequency domain and the time domain. We utilise this by adapting the surface roller model for wave
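The FFT device mentioned in this abstract is the standard pseudospectral trick: a quadratic nonlinearity is a convolution of the Fourier amplitudes, which costs O(N^2) evaluated directly but O(N log N) if the product is formed in physical space. A generic sketch of the idea (not the Madsen and Sorensen model; the random amplitudes are placeholders):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
u_hat = rng.normal(size=N) + 1j * rng.normal(size=N)  # Fourier amplitudes

# O(N log N): transform, multiply pointwise, transform back.
u = np.fft.ifft(u_hat)
nonlinear_hat = np.fft.fft(u * u)

# O(N^2) reference: direct circular convolution of the amplitudes.
k = np.arange(N)
direct = np.array([np.sum(u_hat * u_hat[(m - k) % N]) for m in range(N)]) / N
print(np.allclose(nonlinear_hat, direct))  # True
```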
Equations of mathematical physics
Tikhonov, A N
Mathematical physics plays an important role in the study of many physical processes - hydrodynamics, elasticity, and electrodynamics, to name just a few. Because of the enormous range and variety of problems dealt with by mathematical physics, this thorough advanced-undergraduate or graduate-level text considers only those problems leading to partial differential equations. The authors - two well-known Russian mathematicians - have focused on typical physical processes and the principal types of equations dealing with them. Special attention is paid throughout to mathematical formulation, ri
Iteration of adjoint equations
Lewins, J.D.
Adjoint functions are the basis of variational methods and now widely used for perturbation theory and its extension to higher order theory as used, for example, in modelling fuel burnup and optimization. In such models, the adjoint equation is to be solved in a critical system with an adjoint source distribution that is not zero but has special properties related to ratios of interest in critical systems. Consequently the methods of solving equations by iteration and accumulation are reviewed to show how conventional methods may be utilized in these circumstances with adequate accuracy. (author). 3 refs., 6 figs., 3 tabs
Systematic Equation Formulation
Lindberg, Erik
A tutorial giving a very simple introduction to the set-up of the equations used as a model for an electrical/electronic circuit. The aim is to find a method which is as simple and general as possible with respect to implementation in a computer program. The "Modified Nodal Approach" (MNA) and the "Controlled Source Approach" (CSA) for systematic equation formulation are investigated. It is suggested that the kernel of the PSpice program based on MNA be reprogrammed.
Agranovich, M S
Mark Vishik's Partial Differential Equations seminar held at Moscow State University was one of the world's leading seminars in PDEs for over 40 years. This book celebrates Vishik's eightieth birthday. It comprises new results and survey papers written by many renowned specialists who actively participated over the years in Vishik's seminars. Contributions include original developments and methods in PDEs and related fields, such as mathematical physics, tomography, and symplectic geometry. Papers discuss linear and nonlinear equations, particularly linear elliptic problems in angles and gener
Generalized estimating equations
Hardin, James W
Although powerful and flexible, the method of generalized linear models (GLM) is limited in its ability to accurately deal with longitudinal and clustered data. Developed specifically to accommodate these data types, the method of Generalized Estimating Equations (GEE) extends the GLM algorithm to accommodate the correlated data encountered in health research, social science, biology, and other related fields.Generalized Estimating Equations provides the first complete treatment of GEE methodology in all of its variations. After introducing the subject and reviewing GLM, the authors examine th
Nonlinear wave equations
Li, Tatsien
This book focuses on nonlinear wave equations, which are of considerable significance from both physical and theoretical perspectives. It also presents complete results on the lower bound estimates of lifespan (including the global existence), which are established for classical solutions to the Cauchy problem of nonlinear wave equations with small initial data in all possible space dimensions and with all possible integer powers of nonlinear terms. Further, the book proposes the global iteration method, which offers a unified and straightforward approach for treating these kinds of problems. Purely based on the properties of solutions to the corresponding linear problems, the method simply applies the contraction mapping principle.
Analysis of wave equation in electromagnetic field by Proca equation
Pamungkas, Oky Rio; Soeparmi; Cari
This research aims to analyze the wave equations for the electric and magnetic fields, the vector and scalar potentials, and the continuity equation using the Proca equation. It then also compares the solutions of the Maxwell and Proca equations for the scalar potential and electric field, both as a function of distance and of constant wave number. (paper)
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
Test equating methods and practices
Kolen, Michael J
In recent years, many researchers in the psychology and statistical communities have paid increasing attention to test equating as issues of using multiple test forms have arisen and in response to criticisms of traditional testing techniques This book provides a practically oriented introduction to test equating which both discusses the most frequently used equating methodologies and covers many of the practical issues involved The main themes are - the purpose of equating - distinguishing between equating and related methodologies - the importance of test equating to test development and quality control - the differences between equating properties, equating designs, and equating methods - equating error, and the underlying statistical assumptions for equating The authors are acknowledged experts in the field, and the book is based on numerous courses and seminars they have presented As a result, educators, psychometricians, professionals in measurement, statisticians, and students coming to the subject for...
On the Raychaudhuri equation
The Raychaudhuri equation is central to the understanding of gravitational attraction in ... of K Gödel on the ideas of shear and vorticity in cosmology (he defines the shear, eq. (8) in [1]) ... which follows from the definition of the scale factor l.
Generalized reduced magnetohydrodynamic equations
Kruger, S.E.
A new derivation of reduced magnetohydrodynamic (MHD) equations is presented. A multiple-time-scale expansion is employed. It has the advantage of clearly separating the three time scales of the problem associated with (1) MHD equilibrium, (2) fluctuations whose wave vector is aligned perpendicular to the magnetic field, and (3) those aligned parallel to the magnetic field. The derivation is carried out without relying on a large aspect ratio assumption; therefore this model can be applied to any general configuration. By accounting for the MHD equilibrium and constraints to eliminate the fast perpendicular waves, equations are derived to evolve scalar potential quantities on a time scale associated with the parallel wave vector (shear-Alfven wave time scale), which is the time scale of interest for MHD instability studies. Careful attention is given in the derivation to satisfy energy conservation and to have manifestly divergence-free magnetic fields to all orders in the expansion parameter. Additionally, neoclassical closures and equilibrium shear flow effects are easily accounted for in this model. Equations for the inner resistive layer are derived which reproduce the linear ideal and resistive stability criterion of Glasser, Greene, and Johnson. The equations have been programmed into a spectral initial value code and run with shear flow that is consistent with the equilibrium input into the code. Linear results of tearing modes with shear flow are presented which differentiate the effects of shear flow gradients in the layer with the effects of the shear flow decoupling multiple harmonics
Calculus & ordinary differential equations
Pearson, David
Professor Pearson's book starts with an introduction to the area and an explanation of the most commonly used functions. It then moves on through differentiation, special functions, derivatives, integrals and onto full differential equations. As with other books in the series the emphasis is on using worked examples and tutorial-based problem solving to gain the confidence of students.
The Freudenstein Equation
research, teaching and practice related to the analysis and design ... its variants, are present in a large number of machines used in daily ... with advanced electronics, sensors, control systems and computing ... ted perfectly well with the rapidly developing comput... ...velopment of the Freudenstein equation using Figure 3.
Differential Equation of Equilibrium
ABSTRACT. Analysis of an underground circular cylindrical shell is carried out in this work. The fourth-order differential equation of equilibrium, comparable to that of a beam on elastic foundation, was derived from static principles on the assumptions of P. L. Pasternak. Laplace transformation was used to solve the governing ...
Equational binary decision diagrams
J.F. Groote (Jan Friso); J.C. van de Pol (Jaco)
We incorporate equations in binary decision diagrams (BDD). The resulting objects are called EQ-BDDs. A straightforward notion of ordered EQ-BDDs (EQ-OBDD) is defined, and it is proved that each EQ-BDD is logically equivalent to an EQ-OBDD. Moreover, on EQ-OBDDs satisfiability and
Dunkl Hyperbolic Equations
Hatem Mejjaoli
We introduce and study the Dunkl symmetric systems. We prove well-posedness results for the Cauchy problem for these systems. Eventually we describe the finite speed of propagation for them. Next, the semi-linear Dunkl-wave equations are also studied.
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…
ANTHROPOMETRIC PREDICTIVE EQUATIONS FOR ...
Keywords: Anthropometry, Predictive Equations, Percentage Body Fat, Nigerian Women, Bioelectric Impedance ... such as Asians and Indians (Pranav et al., 2009), ... size (n) of at least 30 is adjudged as sufficient for the ..... of people, gender and age (Vogel et al., 1984). .... Fish Sold at Ile-Ife Main Market, South West Nigeria.
dimensional Fokas equation
However, one can associate the term with any solution of nonlinear partial differential equations (PDEs) which (i) represents a wave of permanent form, (ii) is localized ... In the past several decades, many methods have been proposed for solving nonlinear PDEs, such as ... space–time fractional derivative form of eq. (1) and ...
A Quadratic Spring Equation
Fay, Temple H.
Through numerical investigations, we study examples of the forced quadratic spring equation [image omitted]. By performing trial-and-error numerical experiments, we demonstrate the existence of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions, investigate the resonance boundary in the [omega]…
Guiding center drift equations
Boozer, A.H.
The equations for particle guiding center drift orbits are given in a new magnetic coordinate system. This form of the equations not only separates the fast motion along the lines from the slow motion across, but also requires less information about the magnetic field than many other formulations of the problem.
dimensional nonlinear evolution equations
in real-life situations, it is important to find their exact solutions. Further, in ... But little work has been done on the high-dimensional equations. .... Similarly, to determine the values of d and q, we balance the linear term of the lowest order in eq.
Stochastic nonlinear beam equations
Brzezniak, Z.; Maslowski, Bohdan; Seidler, Jan
Roč. 132, č. 1 (2005), s. 119-149 ISSN 0178-8051 R&D Projects: GA ČR(CZ) GA201/01/1197 Institutional research plan: CEZ:AV0Z10190503 Keywords: stochastic beam equation * stability Subject RIV: BA - General Mathematics Impact factor: 0.896, year: 2005
Balancing Chemical Equations.
Savoy, L. G.
Describes a study of students' ability to balance equations. Answers to a test on this topic were analyzed to determine the level of understanding and processes used by the students. Presented is a method to teach this skill to high school chemistry students. (CW)
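Where a worked illustration helps: balancing a chemical equation can be cast as finding an integer null vector of the element-by-species matrix, one systematic version of the skill the study examines. The sketch below (Python with sympy; the reaction H2 + O2 -> H2O is an invented example, not taken from the study) shows the idea.

    # Balance a chemical equation via the nullspace of the element-by-species
    # matrix. Columns: H2, O2, H2O; rows: H, O. Signs: reactants +, products -.
    from sympy import Matrix

    A = Matrix([[2, 0, -2],   # hydrogen atoms: 2*H2 - 2*H2O = 0
                [0, 2, -1]])  # oxygen atoms:   2*O2 - 1*H2O = 0
    v = A.nullspace()[0]                            # rational null vector
    coeffs = v / min(abs(x) for x in v if x != 0)   # scale to small integers
    print(coeffs.T)  # Matrix([[2, 1, 2]]), i.e. 2 H2 + O2 -> 2 H2O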
Lectures on partial differential equations
Petrovsky, I G
Graduate-level exposition by noted Russian mathematician offers rigorous, transparent, highly readable coverage of classification of equations, hyperbolic equations, elliptic equations and parabolic equations. Wealth of commentary and insight invaluable for deepening understanding of problems considered in text. Translated from the Russian by A. Shenitzer.
Quantum equations from Brownian motions
Rajput, B.S.
Classical Schrodinger and Dirac equations have been derived from Brownian motions of a particle. It has been shown that the classical Schrodinger equation can be transformed to the usual Schrodinger quantum equation on applying the Heisenberg uncertainty principle between position and momentum, while the Dirac quantum equation follows from its classical counterpart on applying the Heisenberg uncertainty principle between energy and time, without applying any analytical continuation. (author)
Elements of partial differential equations
Sneddon, Ian Naismith
Geared toward students of applied rather than pure mathematics, this volume introduces elements of partial differential equations. Its focus is primarily upon finding solutions to particular equations rather than general theory.Topics include ordinary differential equations in more than two variables, partial differential equations of the first and second orders, Laplace's equation, the wave equation, and the diffusion equation. A helpful Appendix offers information on systems of surfaces, and solutions to the odd-numbered problems appear at the end of the book. Readers pursuing independent st
On generalized fractional vibration equation
Dai, Hongzhe; Zheng, Zhibao; Wang, Wei
Highlights: • The paper presents a generalized fractional vibration equation for arbitrary viscoelastically damped system. • Some classical vibration equations can be derived from the developed equation. • The analytic solution of developed equation is derived under some special cases. • The generalized equation is particularly useful for developing new fractional equivalent linearization method. - Abstract: In this paper, a generalized fractional vibration equation with multi-terms of fractional dissipation is developed to describe the dynamical response of an arbitrary viscoelastically damped system. It is shown that many classical equations of motion, e.g., the Bagley–Torvik equation, can be derived from the developed equation. The Laplace transform is utilized to solve the generalized equation and the analytic solution under some special cases is derived. An example demonstrates the generalized transfer function of an arbitrary viscoelastic system.
Methods for Equating Mental Tests.
... 1983) compared conventional and IRT methods for equating the Test of English as a Foreign Language (TOEFL) after chaining. Three conventional and three IRT equating methods were examined in this study; two sections of TOEFL were each (separately) equated. The IRT methods included the following: (a...group. A separate base form was established for each of the six equating methods. Instead of equating the base-form TOEFL to itself, the last (eighth
equateIRT: An R Package for IRT Test Equating
Michela Battauz
The R package equateIRT implements item response theory (IRT) methods for equating different forms composed of dichotomous items. In particular, the IRT models included are the three-parameter logistic model, the two-parameter logistic model, the one-parameter logistic model and the Rasch model. Forms can be equated when they present common items (direct equating) or when they can be linked through a chain of forms that present common items in pairs (indirect or chain equating). When two forms can be equated through different paths, a single conversion can be obtained by averaging the equating coefficients. The package calculates direct and chain equating coefficients. The averaging of direct and chain coefficients that link the same two forms is performed through the bisector method. Furthermore, the package provides analytic standard errors of direct, chain and average equating coefficients.
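Not the package's actual interface, but as a minimal sketch of the simplest underlying idea: for Rasch-scaled forms linked by common items, mean-mean linking reduces the equating coefficient to a mean difficulty difference. The Python below uses invented numbers.

    import numpy as np

    # Hypothetical Rasch difficulties of the same common items on two forms.
    b_form_x = np.array([-1.2, -0.3, 0.5, 1.1])
    b_form_y = np.array([-0.9, -0.1, 0.8, 1.4])

    # Mean-mean linking: with the slope fixed to 1 (Rasch), the equating
    # constant is the mean difficulty difference between the two forms.
    B = b_form_y.mean() - b_form_x.mean()
    print(f"theta on form X scale -> theta + {B:.3f} on form Y scale")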
Lattice Wigner equation
Solórzano, S.; Mendoza, M.; Succi, S.; Herrmann, H. J.
We present a numerical scheme to solve the Wigner equation, based on a lattice discretization of momentum space. The moments of the Wigner function are recovered exactly, up to the desired order given by the number of discrete momenta retained in the discretization, which also determines the accuracy of the method. The Wigner equation is equipped with an additional collision operator, designed in such a way as to ensure numerical stability without affecting the evolution of the relevant moments of the Wigner function. The lattice Wigner scheme is validated for the case of quantum harmonic and anharmonic potentials, showing good agreement with theoretical results. It is further applied to the study of the transport properties of one- and two-dimensional open quantum systems with potential barriers. Finally, the computational viability of the scheme for the case of three-dimensional open systems is also illustrated.
Energy master equation
Dyre, Jeppe
... energies chosen randomly according to a Gaussian. The random-walk model is here derived from Newton's laws by making a number of simplifying assumptions. In the second part of the paper an approximate low-temperature description of energy fluctuations in the random-walk model—the energy master equation (EME)—is arrived at. The EME is one dimensional and involves only energy; it is derived by arguing that percolation dominates the relaxational properties of the random-walk model at low temperatures. The approximate EME description of the random-walk model is expected to be valid at low temperatures ... of the random-walk model. The EME allows a calculation of the energy probability distribution at realistic laboratory time scales for an arbitrarily varying temperature as function of time. The EME is probably the only realistic equation available today with this property that is also explicitly consistent...
Classical Diophantine equations
The author had initiated a revision and translation of "Classical Diophantine Equations" prior to his death. Given the rapid advances in transcendence theory and diophantine approximation over recent years, one might fear that the present work, originally published in Russian in 1982, is mostly superseded. That is not so. A certain amount of updating had been prepared by the author himself before his untimely death. Some further revision was prepared by close colleagues. The first seven chapters provide a detailed, virtually exhaustive, discussion of the theory of lower bounds for linear forms in the logarithms of algebraic numbers and its applications to obtaining upper bounds for solutions to the eponymous classical diophantine equations. The detail may seem stark--- the author fears that the reader may react much as does the tourist on first seeing the Centre Pompidou; notwithstanding that, Sprindžuk maintains a pleasant and chatty approach, full of wise and interesting remarks. His emphases well warrant, ...
Flavored quantum Boltzmann equations
Cirigliano, Vincenzo; Lee, Christopher; Ramsey-Musolf, Michael J.; Tulin, Sean
We derive from first principles, using nonequilibrium field theory, the quantum Boltzmann equations that describe the dynamics of flavor oscillations, collisions, and a time-dependent mass matrix in the early universe. Working to leading nontrivial order in ratios of relevant time scales, we study in detail a toy model for weak-scale baryogenesis: two scalar species that mix through a slowly varying time-dependent and CP-violating mass matrix, and interact with a thermal bath. This model clearly illustrates how the CP asymmetry arises through coherent flavor oscillations in a nontrivial background. We solve the Boltzmann equations numerically for the density matrices, investigating the impact of collisions in various regimes.
Causal electromagnetic interaction equations
Zinoviev, Yury M.
For the electromagnetic interaction of two particles the relativistic causal quantum mechanics equations are proposed. These equations are solved for the case when the second particle moves freely. The initial wave functions are supposed to be smooth and rapidly decreasing at infinity. This condition is important for the convergence of the integrals similar to the integrals of quantum electrodynamics. We also consider the singular initial wave functions in the particular case when the second particle mass is equal to zero. The discrete energy spectrum of the first particle wave function is defined by the initial wave function of the free-moving second particle. Choosing the initial wave functions of the free-moving second particle it is possible to obtain a practically arbitrary discrete energy spectrum.
Numerical Solution of Heun Equation Via Linear Stochastic Differential Equation
Hamidreza Rezazadeh
In this paper, we intend to solve a special kind of ordinary differential equation, called the Heun equation, by converting it to a corresponding stochastic differential equation (SDE). So, we construct a stochastic linear equation system from this equation whose solution is based on computing the fundamental matrix of this system, and then this SDE is solved by numerical methods. Moreover, its asymptotic stability and statistical concepts like expectation and variance of solutions are discussed. Finally, the attained solutions of these SDEs are compared with the exact solutions of the corresponding differential equations.
Equations of multiparticle dynamics
Chao, A.W.
The description of the motion of charged-particle beams in an accelerator proceeds in steps of increasing complexity. The first step is to consider a single-particle picture in which the beam is represented as a collection of non-interacting test particles moving in a prescribed external electromagnetic field. Knowing the external field, it is then possible to calculate the beam motion to a high accuracy. The real beam consists of a large number of particles, typically 10^11 per beam bunch. It is sometimes inconvenient, or even impossible, to treat the real beam behavior using the single particle approach. One way to approach this problem is to supplement the single-particle picture with another, qualitatively different picture. The commonly used tools in accelerator physics for this purpose are the Vlasov and the Fokker-Planck equations. These equations assume smooth beam distributions and are therefore strictly valid in the limit of infinite number of micro-particles, each carrying an infinitesimal charge. The hope is that by studying the two extremes -- the single particle picture and the picture of smooth beam distributions -- we will be able to describe the behavior of our 10^11-particle system. As mentioned, the most notable use of the smooth distribution picture is the study of collective beam instabilities. However, the purpose of this lecture is not to address this more advanced subject. Rather, it has the limited goal to familiarize the reader with the analytical tools, namely the Vlasov and the Fokker-Planck equations, as a preparation for dealing with the more advanced problems at later times. We will first derive these equations and then illustrate their applications by several examples which allow exact solutions.
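For reference, the Vlasov equation invoked in this lecture has the standard collisionless form (a textbook statement, not a quotation from the lecture) for the phase-space density f(x, v, t) of particles with charge q and mass m:

    \frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)\cdot\nabla_{\mathbf{v}} f = 0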
Electroweak evolution equations
Ciafaloni, Paolo; Comelli, Denis
Enlarging a previous analysis, where only fermions and transverse gauge bosons were taken into account, we write down infrared-collinear evolution equations for the Standard Model of electroweak interactions computing the full set of splitting functions. Due to the presence of double logs which are characteristic of electroweak interactions (Bloch-Nordsieck violation), new infrared singular splitting functions have to be introduced. We also include corrections related to the third generation Yukawa couplings
Differential equations with Mathematica
Abell, Martha L
The Third Edition of the Differential Equations with Mathematica integrates new applications from a variety of fields, especially biology, physics, and engineering. The new handbook is also completely compatible with recent versions of Mathematica and is a perfect introduction for Mathematica beginners.* Focuses on the most often used features of Mathematica for the beginning Mathematica user* New applications from a variety of fields, including engineering, biology, and physics* All applications were completed using recent versions of Mathematica
Damped nonlinear Schrodinger equation
Nicholson, D.R.; Goldman, M.V.
High frequency electrostatic plasma oscillations described by the nonlinear Schrodinger equation in the presence of damping, collisional or Landau, are considered. At early times, Landau damping of an initial soliton profile results in a broader, but smaller amplitude soliton, while collisional damping reduces the soliton size everywhere; soliton speeds at early times are unchanged by either kind of damping. For collisional damping, soliton speeds are unchanged for all time
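For orientation, a representative damped nonlinear Schrodinger equation of the kind discussed reads as follows (a generic form; the paper's normalization and damping operator may differ), with γ ≥ 0 the damping rate:

    i\,\psi_t + \psi_{xx} + 2\,|\psi|^2\psi = -\,i\,\gamma\,\psi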
Fun with Differential Equations
IAS Admin
...tion of α with π/2. One can use the uniqueness of solutions of differential equations to prove the addition formulae for sin(t1 + t2), etc. But instead of continuing with this thought process, let us do something more interesting. Now we shall consider another system. Fix 0 < < 1. I am looking for three real-valued functions x(t), ...
Mathematics and Maxwell's equations
Boozer, Allen H
The universality of mathematics and Maxwell's equations is not shared by specific plasma models. Computations become more reliable, efficient and transparent if specific plasma models are used to obtain only the information that would otherwise be missing. Constraints of high universality, such as those from mathematics and Maxwell's equations, can be obscured or lost by integrated computations. Recognition of subtle constraints of high universality is important for (1) focusing the design of control systems for magnetic field errors in tokamaks from perturbations that have little effect on the plasma to those that do, (2) clarifying the limits of applicability to astrophysics of computations of magnetic reconnection in fields that have a double periodicity or have B = 0 on a surface, as in a Harris sheet. Both require a degree of symmetry not expected in natural systems. Mathematics and Maxwell's equations imply that neighboring magnetic field lines characteristically separate exponentially with distance along a line. This remarkably universal phenomenon has been largely ignored, though it defines a trigger for reconnection through a critical magnitude of exponentiation. These and other examples of the importance of making distinctions and understanding constraints of high universality are explained.
Information Equation of State
M. Paul Gough
Landauer's principle is applied to information in the universe. Once stars began forming there was a constant information energy density as the increasing proportion of matter at high stellar temperatures exactly compensated for the expanding universe. The information equation of state was close to the dark energy value, w = -1, for a wide range of redshifts, 10 > z > 0.8, over one half of cosmic time. A reasonable universe information bit content of only 10^87 bits is sufficient for information energy to account for all dark energy. A time varying equation of state with a direct link between dark energy and matter, and linked to star formation in particular, is clearly relevant to the cosmic coincidence problem. In answering the 'Why now?' question we wonder 'What next?' as we expect the information equation of state to tend towards w = 0 in the future.
Generalized reduced MHD equations
Kruger, S.E.; Hegna, C.C.; Callen, J.D.
A new derivation of reduced magnetohydrodynamic (MHD) equations is presented. A multiple-time-scale expansion is employed. It has the advantage of clearly separating the three time scales of the problem associated with (1) MHD equilibrium, (2) fluctuations whose wave vector is aligned perpendicular to the magnetic field, and (3) those aligned parallel to the magnetic field. The derivation is carried out without relying on a large aspect ratio assumption; therefore this model can be applied to any general toroidal configuration. By accounting for the MHD equilibrium and constraints to eliminate the fast perpendicular waves, equations are derived to evolve scalar potential quantities on a time scale associated with the parallel wave vector (shear-Alfvén wave time scale), which is the time scale of interest for MHD instability studies. Careful attention is given in the derivation to satisfy energy conservation and to have manifestly divergence-free magnetic fields to all orders in the expansion parameter. Additionally, neoclassical closures and equilibrium shear flow effects are easily accounted for in this model. Equations for the inner resistive layer are derived which reproduce the linear ideal and resistive stability criterion of Glasser, Greene, and Johnson.
Computing generalized Langevin equations and generalized Fokker-Planck equations.
Darve, Eric; Solomon, Jose; Kia, Amirali
The Mori-Zwanzig formalism is an effective tool to derive differential equations describing the evolution of a small number of resolved variables. In this paper we present its application to the derivation of generalized Langevin equations and generalized non-Markovian Fokker-Planck equations. We show how long-time-scale rates and metastable basins can be extracted from these equations. Numerical algorithms are proposed to discretize these equations. An important aspect is the numerical solution of the orthogonal dynamics equation, which is a partial differential equation in a high dimensional space. We propose efficient numerical methods to solve this orthogonal dynamics equation. In addition, we present a projection formalism of the Mori-Zwanzig type that is applicable to discrete maps. Numerical applications are presented from the field of Hamiltonian systems.
FMTLxLyLz DIMENSIONAL EQUATION ...
eobe
plant made of 12mm thick steel plate was used in de steel plate ... water treatment plant. ... ameters affecting filtration processes were used to derive an equation usin ..... system. However, in deriving the equation onl terms are incorporated.
Reduction operators of Burgers equation.
Pocheketa, Oleksandr A; Popovych, Roman O
The solution of the problem on reduction operators and nonclassical reductions of the Burgers equation is systematically treated and completed. A new proof of the theorem on the special "no-go" case of regular reduction operators is presented, and the representation of the coefficients of operators in terms of solutions of the initial equation is constructed for this case. All possible nonclassical reductions of the Burgers equation to single ordinary differential equations are exhaustively described. Any Lie reduction of the Burgers equation proves to be equivalent via the Hopf-Cole transformation to a parameterized family of Lie reductions of the linear heat equation.
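For reference, the Hopf-Cole transformation mentioned here is the standard substitution that linearizes the Burgers equation into the heat equation (textbook formulas):

    u_t + u\,u_x = \nu\,u_{xx}, \qquad u = -2\nu\,\frac{\varphi_x}{\varphi} \;\Longrightarrow\; \varphi_t = \nu\,\varphi_{xx}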
Auxiliary equation method for solving nonlinear partial differential equations
Sirendaoreji,; Jiong, Sun
By using the solutions of an auxiliary ordinary differential equation, a direct algebraic method is described to construct several kinds of exact travelling wave solutions for some nonlinear partial differential equations. By this method some physically important nonlinear equations are investigated and new exact travelling wave solutions are explicitly obtained with the aid of symbolic computation
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
Differential Equations as Actions
Ronkko, Mauno; Ravn, Anders P.
We extend a conventional action system with a primitive action consisting of a differential equation and an evolution invariant. The semantics is given by a predicate transformer. The weakest liberal precondition is chosen, because it is not always desirable that steps corresponding to differential actions shall terminate. It is shown that the proposed differential action has a semantics which corresponds to a discrete approximation when the discrete step size goes to zero. The extension gives action systems the power to model real-time clocks and continuous evolutions within hybrid systems.
Levine, Harold
The subject matter, partial differential equations (PDEs), has a long history (dating from the 18th century) and an active contemporary phase. An early phase (with a separate focus on taut string vibrations and heat flow through solid bodies) stimulated developments of great importance for mathematical analysis, such as a wider concept of functions and integration and the existence of trigonometric or Fourier series representations. The direct relevance of PDEs to all manner of mathematical, physical and technical problems continues. This book presents a reasonably broad introductory account of the subject, with due regard for analytical detail, applications and historical matters.
Cox, William
Building on introductory calculus courses, this text provides a sound foundation in the underlying principles of ordinary differential equations. Important concepts, including uniqueness and existence theorems, are worked through in detail and the student is encouraged to develop much of the routine material themselves, thus helping to ensure a solid understanding of the fundamentals required. The wide use of exercises, problems and self-assessment questions helps to promote a deeper understanding of the material and it is developed in such a way that it lays the groundwork for further
Sloan, D; Süli, E
Over the second half of the 20th century the subject area loosely referred to as numerical analysis of partial differential equations (PDEs) has undergone unprecedented development. At its practical end, the vigorous growth and steady diversification of the field were stimulated by the demand for accurate and reliable tools for computational modelling in physical sciences and engineering, and by the rapid development of computer hardware and architecture. At the more theoretical end, the analytical insight in
Elliptic partial differential equations
Han, Qing
Elliptic Partial Differential Equations by Qing Han and FangHua Lin is one of the best textbooks I know. It is the perfect introduction to PDE. In 150 pages or so it covers an amazing amount of wonderful and extraordinary useful material. I have used it as a textbook at both graduate and undergraduate levels which is possible since it only requires very little background material yet it covers an enormous amount of material. In my opinion it is a must read for all interested in analysis and geometry, and for all of my own PhD students it is indeed just that. I cannot say enough good things abo
dimensional Jaulent–Miodek equations
(2+1)-dimensional Jaulent–Miodek equation; the first integral method; kinks; ... and effective method for solving nonlinear partial differential equations which can ... of the method employed and exact kink and soliton solutions are constructed ...
Equationally Noetherian property of Ershov algebras
Dvorzhetskiy, Yuriy
This article is about the equationally Noetherian and weak equationally Noetherian properties of Ershov algebras. Here we show two canonical forms of systems of equations over Ershov algebras and two criteria for the equationally Noetherian and weak equationally Noetherian properties.
The Dirac equation
Thaller, B.
This monograph treats most of the usual material to be found in texts on the Dirac equation such as the basic formalism of quantum mechanics, representations of Dirac matrices, covariant realization of the Dirac equation, interpretation of negative energies, Foldy-Wouthuysen transformation, Klein's paradox, spherically symmetric interactions and a treatment of the relativistic hydrogen atom, etc., and also provides excellent additional treatments of a variety of other relevant topics. The monograph contains an extensive treatment of the Lorentz and Poincare groups and their representations. The author discusses in depth Lie algebraic and projective representations, covering groups, and Mackey's theory and Wigner's realization of induced representations. A careful classification of external fields with respect to their behavior under Poincare transformations is supplemented by a basic account of self-adjointness and spectral properties of Dirac operators. A state-of-the-art treatment of relativistic scattering theory based on a time-dependent approach originally due to Enss is presented. An excellent introduction to quantum electrodynamics in external fields is provided. Various appendices containing further details, notes on each chapter commenting on the history involved and referring to original research papers and further developments in the literature, and a bibliography covering all relevant monographs and over 500 articles on the subject, complete this text. This book should satisfy the needs of a wide audience, ranging from graduate students in theoretical physics and mathematics to researchers interested in mathematical physics
Cryostatic stability equation
Sydoriak, S.G.
Although criteria for cryostatic stability of superconducting magnets cooled by pool boiling of liquid helium have been widely discussed, the same cannot be said for magnets cooled by natural convection or forced flow boiling in channels. Boiling in narrow channels is shown to be qualitatively superior to pool boiling because the recovery heat flux equals the breakaway flux for narrow channels, whereas the two are markedly different in pool boiling. A second advantage of channel boiling is that it is well understood and calculable; pool peak nucleate boiling heat flux has been adequately measured only for boiling from the top of an immersed heated body. Peak boiling from the bottom is much less and (probably) depends strongly on the extent of the bottom surface. Equations are presented by which one can calculate the critical boiling heat flux for parallel wall vertical channels subject to either natural convection or forced flow boiling, with one or both walls heated. The one-heated-wall forced flow equation is discussed with regard to design of a spiral wound solenoid (pancake magnet) having a slippery insulating tape between the windings
Solving Nonlinear Coupled Differential Equations
Mitchell, L.; David, J.
Harmonic balance method developed to obtain approximate steady-state solutions for nonlinear coupled ordinary differential equations. Method usable with transfer matrices commonly used to analyze shaft systems. Solution to nonlinear equation, with periodic forcing function represented as sum of series similar to Fourier series but with form of terms suggested by equation itself.
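As a minimal sketch of the technique (illustrative parameters, not from the report): substituting a single-harmonic ansatz x = a cos(ωt) into an undamped Duffing-type oscillator x'' + αx + βx³ = F cos(ωt) and balancing the cos(ωt) terms gives one algebraic equation for the amplitude, solvable numerically.

    import numpy as np
    from scipy.optimize import fsolve

    alpha, beta, F, omega = 1.0, 0.5, 0.3, 1.2  # illustrative values only

    def first_harmonic_residual(a):
        # cos^3(w t) contributes (3/4) a^3 cos(w t) to the first harmonic.
        return -a * omega**2 + alpha * a + 0.75 * beta * a**3 - F

    amplitude = fsolve(first_harmonic_residual, x0=1.0)[0]
    print(f"steady-state amplitude ~ {amplitude:.4f}")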
Completely integrable operator evolutionary equations
Chudnovsky, D.V.
The authors present natural generalizations of classical completely integrable equations where the functions are replaced by arbitrary operators. Among these equations are the non-linear Schroedinger, the Korteweg-de Vries, and the modified KdV equations. The Lax representation and the Baecklund transformations are presented. (Auth.)
On the F-equation
Kalinowski, M.W.; Szymanowski, L.
A generalization of the Truesdell F-equations is proposed and some solutions to them - generalized Fox F-functions - are found. It is also shown that a non-linear difference-differential equation, which does not belong to the Truesdell class, nevertheless may be transformed into the standard F-equation. (author)
On the Saha Ionization Equation
Abstract. We revisit the Saha Ionization Equation in order to highlight the rich interdisciplinary content of the equation that straddles distinct areas of spectroscopy, thermodynamics and chemical reactions. In a self-contained discussion, relegated to an appendix, we delve further into the hidden message of the equation in terms ...
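For reference, the equation under discussion has the standard form (textbook statement), relating adjacent ionization stages i and i+1 at temperature T, with g the statistical weights and χ_i the ionization energy:

    \frac{n_{i+1}\,n_e}{n_i} = \frac{2\,g_{i+1}}{g_i}\left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2} e^{-\chi_i / k_B T}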
Differential equations extended to superspace
Torres, J. [Instituto de Fisica, Universidad de Guanajuato, A.P. E-143, Leon, Guanajuato (Mexico); Rosu, H.C. [Instituto Potosino de Investigacion Cientifica y Tecnologica, A.P. 3-74, Tangamanga, San Luis Potosi (Mexico)
We present a simple SUSY Ns = 2 superspace extension of the differential equations in which the sought solutions are considered to be real superfields, while maintaining the common derivative operators and the coefficients of the differential equations unaltered. In this way, we get self-consistent systems of coupled differential equations for the components of the superfield. This procedure is applied to the Riccati equation, for which we obtain in addition the system of coupled equations corresponding to the components of the general superfield solution. (Author)
Reduction of infinite dimensional equations
Zhongding Li
In this paper, we use the general Legendre transformation to show that the infinite dimensional integrable equations can be reduced to a finite dimensional integrable Hamiltonian system on an invariant set under the flow of the integrable equations. Then we obtain the periodic or quasi-periodic solution of the equation. This generalizes the results of Lax and Novikov regarding the periodic or quasi-periodic solution of the KdV equation to the general case of isospectral Hamiltonian integrable equations. And finally, we discuss the AKNS hierarchy as a special example.
On the helix equation
Taouil Hajer
This paper is devoted to helix processes, i.e. the solutions H : ℝ × Ω → ℝ^d, (t, ω) ↦ H(t, ω) of the helix equation H(0, ω) = 0; H(s + t, ω) = H(s, Φ(t, ω)) + H(t, ω), where Φ : ℝ × Ω → Ω, (t, ω) ↦ Φ(t, ω) is a dynamical system on a measurable space (Ω, ℱ). More precisely, we investigate dominated solutions and non-differentiable solutions of the helix equation. For the latter case, the Wiener helix plays a fundamental role. Moreover, some relations with the cocycle equation defined by Φ are investigated.
p-Euler equations and p-Navier-Stokes equations
Li, Lei; Liu, Jian-Guo
We propose in this work new systems of equations which we call p-Euler equations and p-Navier-Stokes equations. p-Euler equations are derived as the Euler-Lagrange equations for the action represented by the Benamou-Brenier characterization of Wasserstein-p distances, with incompressibility constraint. p-Euler equations have similar structures with the usual Euler equations but the 'momentum' is the signed (p - 1)-th power of the velocity. In the 2D case, the p-Euler equations have streamfunction-vorticity formulation, where the vorticity is given by the p-Laplacian of the streamfunction. By adding diffusion presented by γ-Laplacian of the velocity, we obtain what we call p-Navier-Stokes equations. If γ = p, the a priori energy estimates for the velocity and momentum have dual symmetries. Using these energy estimates and a time-shift estimate, we show the global existence of weak solutions for the p-Navier-Stokes equations in Rd for γ = p and p ≥ d ≥ 2 through a compactness criterion.
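For orientation, the p-Laplacian that gives the vorticity of the streamfunction in the 2D formulation above has the standard definition (the operator is textbook; the identification ω = Δ_p ψ is as stated in the abstract):

    \Delta_p \psi = \nabla\cdot\left(|\nabla\psi|^{p-2}\,\nabla\psi\right), \qquad \omega = \Delta_p \psi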
Generalized quantal equation of motion
Morsy, M.W.; Embaby, M.
In the present paper, an attempt is made for establishing a generalized equation of motion for quantal objects, in which intrinsic self adjointness is naturally built in, independently of any prescribed representation. This is accomplished by adopting Hamilton's principle of least action, after incorporating, properly, the quantal features and employing the generalized calculus of variations, without being restricted to fixed end points representation. It turns out that our proposed equation of motion is an intrinsically self-adjoint Euler-Lagrange's differential equation that ensures extremization of the quantal action as required by Hamilton's principle. Time dependence is introduced and the corresponding equation of motion is derived, in which intrinsic self adjointness is also achieved. Reducibility of the proposed equation of motion to the conventional Schroedinger equation is examined. The corresponding continuity equation is established, and both of the probability density and the probability current density are identified. (author)
Alternatives to the Dirac equation
Girvin, S.M.; Brownstein, K.R.
Recent work by Biedenharn, Han, and van Dam (BHvD) has questioned the uniqueness of the Dirac equation. BHvD have obtained a two-component equation as an alternate to the Dirac equation. Although they later show their alternative to be unitarily equivalent to the Dirac equation, certain physical differences were claimed. BHvD attribute the existence of this alternate equation to the fact that their factorizing matrices were position-dependent. To investigate this, we factor the Klein-Gordon equation in spherical coordinates allowing the factorizing matrices to depend arbitrarily upon theta and phi. It is shown that despite this additional freedom, and without involving any relativistic covariance, the conventional four-component Dirac equation is the only possibility
Wave Partial Differential Equation
Szöllös, Alexandr
This work deals with differential equations, their use in the analysis of a transmission line, experiments with the line, and the possibility of accelerating the computations on a GPU using nVidia CUDA.
Λ scattering equations
Gomez, Humberto
The CHY representation of scattering amplitudes is based on integrals over the moduli space of a punctured sphere. We replace the punctured sphere by a double-cover version. The resulting scattering equations depend on a parameter Λ controlling the opening of a branch cut. The new representation of scattering amplitudes possesses an enhanced redundancy which can be used to fix, modulo branches, the location of four punctures while promoting Λ to a variable. Via residue theorems we show how CHY formulas break up into sums of products of smaller (off-shell) ones times a propagator. This leads to a powerful way of evaluating CHY integrals of generic rational functions, which we call the Λ algorithm.
Scaling of differential equations
Langtangen, Hans Petter
The book serves both as a reference for various scaled models with corresponding dimensionless numbers, and as a resource for learning the art of scaling. A special feature of the book is the emphasis on how to create software for scaled models, based on existing software for unscaled models. Scaling (or non-dimensionalization) is a mathematical technique that greatly simplifies the setting of input parameters in numerical simulations. Moreover, scaling enhances the understanding of how different physical processes interact in a differential equation model. Compared to the existing literature, where the topic of scaling is frequently encountered, but very often in only a brief and shallow setting, the present book gives much more thorough explanations of how to reason about finding the right scales. This process is highly problem dependent, and therefore the book features a lot of worked examples, from very simple ODEs to systems of PDEs, especially from fluid mechanics. The text is easily accessible and exam...
Parabolized stability equations
Herbert, Thorwald
The parabolized stability equations (PSE) are a new approach to analyze the streamwise evolution of single or interacting Fourier modes in weakly nonparallel flows such as boundary layers. The concept rests on the decomposition of every mode into a slowly varying amplitude function and a wave function with slowly varying wave number. The neglect of the small second derivatives of the slowly varying functions with respect to the streamwise variable leads to an initial boundary-value problem that can be solved by numerical marching procedures. The PSE approach is valid in convectively unstable flows. The equations for a single mode are closely related to those of the traditional eigenvalue problems for linear stability analysis. However, the PSE approach does not exploit the homogeneity of the problem and, therefore, can be utilized to analyze forced modes and the nonlinear growth and interaction of an initial disturbance field. In contrast to the traditional patching of local solutions, the PSE provide the spatial evolution of modes with proper account for their history. The PSE approach allows studies of secondary instabilities without the constraints of the Floquet analysis and reproduces the established experimental, theoretical, and computational benchmark results on transition up to the breakdown stage. The method matches or exceeds the demonstrated capabilities of current spatial Navier-Stokes solvers at a small fraction of their computational cost. Recent applications include studies on localized or distributed receptivity and prediction of transition in model environments for realistic engineering problems. This report describes the basis, intricacies, and some applications of the PSE methodology.
The Langevin equation
Pomeau, Yves; Piasecki, Jarosław
The existence of atoms has been long predicted by philosophers and scientists. The development of thermodynamics and of the statistical interpretation of its concepts at the end of the nineteenth century and in the early years of the twentieth century made it possible to bridge the gap of scales between the macroscopic world and the world of atoms. Einstein and Smoluchowski showed in 1905 and 1906 that the Brownian motion of particles of measurable size is a manifestation of the motion of atoms in fluids. Their derivation was completely different from each other. Langevin showed in 1908 how to put in a coherent framework the subtle effect of the randomness of the atomic world, responsible for the fluctuating force driving the motion of the Brownian particle and the viscosity of the "macroscopic" flow taking place around the same Brownian particle. Whereas viscous forces were already well understood at this time, the "Langevin" force appears there for the first time: it represents the fluctuating part of the interaction between the Brownian particle and the surrounding fluid. We discuss the derivation by Einstein and Smoluchowski as well as a previous paper by Sutherland on the diffusion coefficient of large spheres. Next we present Langevin's short note and explain the fundamental splitting into a random force and a macroscopic viscous force. This brings us to discuss various points, like the kind of constraints on Langevin-like equations. We insist in particular on the one arising from the time-reversal symmetry of the equilibrium fluctuations. Moreover, we discuss another constraint, raised first by Lorentz, which implies that, if the Brownian particle is not very heavy, the viscous force cannot be taken as the standard Stokes drag on an object moving at uniform speed. Lastly, we examine the so-called Langevin-Heisenberg and/or Langevin-Schrödinger equation used in quantum mechanics.
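For reference, the equation Langevin wrote down has the standard form (textbook statement): a viscous drag plus a zero-mean fluctuating force whose strength is tied to temperature, the constraint from equilibrium fluctuations discussed above:

    m\,\frac{dv}{dt} = -\gamma\,v + F(t), \qquad \langle F(t)\rangle = 0, \quad \langle F(t)F(t')\rangle = 2\,\gamma\,k_B T\,\delta(t-t')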
Borthwick, David
This modern take on partial differential equations does not require knowledge beyond vector calculus and linear algebra. The author focuses on the most important classical partial differential equations, including conservation equations and their characteristics, the wave equation, the heat equation, function spaces, and Fourier series, drawing on tools from analysis only as they arise.Within each section the author creates a narrative that answers the five questions: (1) What is the scientific problem we are trying to understand? (2) How do we model that with PDE? (3) What techniques can we use to analyze the PDE? (4) How do those techniques apply to this equation? (5) What information or insight did we obtain by developing and analyzing the PDE? The text stresses the interplay between modeling and mathematical analysis, providing a thorough source of problems and an inspiration for the development of methods.
Analytic solutions of hydrodynamics equations
Coggeshall, S.V.
Many similarity solutions have been found for the equations of one-dimensional (1-D) hydrodynamics. These special combinations of variables allow the partial differential equations to be reduced to ordinary differential equations, which must then be solved to determine the physical solutions. Usually, these reduced ordinary differential equations are solved numerically. In some cases it is possible to solve these reduced equations analytically to obtain explicit solutions. In this work a collection of analytic solutions of the 1-D hydrodynamics equations is presented. These can be used for a variety of purposes, including (i) numerical benchmark problems, (ii) as a basis for analytic models, and (iii) to provide insight into more complicated solutions
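For reference, the 1-D hydrodynamics equations whose similarity solutions are collected here are, in Eulerian conservation form (textbook statement; the work itself may use other variables), with ρ the density, u the velocity, p the pressure and E the total energy density:

    \rho_t + (\rho u)_x = 0, \qquad (\rho u)_t + (\rho u^2 + p)_x = 0, \qquad E_t + \big(u(E + p)\big)_x = 0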
On matrix fractional differential equations
Adem Kılıçman; Wasan Ajeel Ahmood
The aim of this article is to study the matrix fractional differential equations and to find the exact solution for system of matrix fractional differential equations in terms of Riemann–Liouville using Laplace transform method and convolution product to the Riemann–Liouville fractional of matrices. Also, we show the theorem of non-homogeneous matrix fractional partial differential equation with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objective of this article is to discuss the Laplace transform method based on operational matrices of fractional derivatives for solving several kinds of linear fractional differential equations. Moreover, we present the operational matrices of fractional derivatives with Laplace transform in many applications of various engineering systems as control system. We present the analytical technique for solving fractional-order, multi-term fractional differential equation. In other words, we propose an efficient algorithm for solving fractional matrix equation.
Differential equations methods and applications
Said-Houari, Belkacem
This book presents a variety of techniques for solving ordinary differential equations analytically and features a wealth of examples. Focusing on the modeling of real-world phenomena, it begins with a basic introduction to differential equations, followed by linear and nonlinear first order equations and a detailed treatment of the second order linear equations. After presenting solution methods for the Laplace transform and power series, it lastly presents systems of equations and offers an introduction to the stability theory. To help readers practice the theory covered, two types of exercises are provided: those that illustrate the general theory, and others designed to expand on the text material. Detailed solutions to all the exercises are included. The book is excellently suited for use as a textbook for an undergraduate class (of all disciplines) in ordinary differential equations. .
Integral equations and their applications
Rahman, M
For many years, the subject of functional equations has held a prominent place in the attention of mathematicians. In more recent years this attention has been directed to a particular kind of functional equation, an integral equation, wherein the unknown function occurs under the integral sign. The study of this kind of equation is sometimes referred to as the inversion of a definite integral. While scientists and engineers can already choose from a number of books on integral equations, this new book encompasses recent developments including some preliminary backgrounds of formulations of integral equations governing the physical situation of the problems. It also contains elegant analytical and numerical methods, and an important topic of the variational principles. Primarily intended for senior undergraduate students and first year postgraduate students of engineering and science courses, students of mathematical and physical sciences will also find many sections of direct relevance. The book contains eig...
Stochastic partial differential equations
Lototsky, Sergey V
Taking readers with a basic knowledge of probability and real analysis to the frontiers of a very active research discipline, this textbook provides all the necessary background from functional analysis and the theory of PDEs. It covers the main types of equations (elliptic, hyperbolic and parabolic) and discusses different types of random forcing. The objective is to give the reader the necessary tools to understand the proofs of existing theorems about SPDEs (from other sources) and perhaps even to formulate and prove a few new ones. Most of the material could be covered in about 40 hours of lectures, as long as not too much time is spent on the general discussion of stochastic analysis in infinite dimensions. As the subject of SPDEs is currently making the transition from the research level to that of a graduate or even undergraduate course, the book attempts to present enough exercise material to fill potential exams and homework assignments. Exercises appear throughout and are usually directly connected ...
JWL Equation of State
Menikoff, Ralph [Los Alamos National Laboratory
The JWL equation of state (EOS) is frequently used for the products (and sometimes reactants) of a high explosive (HE). Here we review and systematically derive important properties. The JWL EOS is of the Mie-Grueneisen form with a constant Grueneisen coefficient and a constant specific heat. It is thermodynamically consistent to specify the temperature at a reference state. However, increasing the reference state temperature restricts the EOS domain in the (V, e)-plane of phase space. The restrictions are due to the conditions that P ≥ 0, T ≥ 0, and the isothermal bulk modulus is positive. Typically, this limits the low temperature regime in expansion. The domain restrictions can result in the P-T equilibrium EOS of a partly burned HE failing to have a solution in some cases. For application to HE, the heat of detonation is discussed. Example JWL parameters for an HE, both products and reactants, are used to illustrate the restrictions on the domain of the EOS.
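The standard JWL pressure form can be written down for reference; the sketch below (Python) uses parameter values that are merely illustrative, not fitted HE data from this report.

    import math

    def jwl_pressure(v, e, A, B, R1, R2, omega):
        # Standard JWL form: v = V/V0 is the relative volume, e the internal
        # energy per unit initial volume, omega the Grueneisen coefficient.
        return (A * (1 - omega / (R1 * v)) * math.exp(-R1 * v)
                + B * (1 - omega / (R2 * v)) * math.exp(-R2 * v)
                + omega * e / v)

    # Illustrative numbers only (GPa-compatible units), not sourced values.
    print(jwl_pressure(v=2.2, e=7.0, A=371.2, B=3.231, R1=4.15, R2=0.95, omega=0.3))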
Gauge-invariant flow equation
Wetterich, C.
We propose a closed gauge-invariant functional flow equation for Yang-Mills theories and quantum gravity that only involves one macroscopic gauge field or metric. It is based on a projection on physical and gauge fluctuations. Deriving this equation from a functional integral we employ the freedom in the precise choice of the macroscopic field and the effective average action in order to realize a closed and simple form of the flow equation.
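For context, the conventional (not manifestly gauge-invariant) flow equation for the effective average action that this proposal modifies is the standard Wetterich equation (a textbook formula, not quoted from the abstract), with R_k the regulator and t = ln k:

    \partial_t \Gamma_k = \frac{1}{2}\,\mathrm{Tr}\!\left[\left(\Gamma_k^{(2)} + R_k\right)^{-1} \partial_t R_k\right]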
The generalized Airy diffusion equation
Frank M. Cholewinski
Solutions of a generalized Airy diffusion equation and an associated nonlinear partial differential equation are obtained. Trigonometric type functions are derived for a third order generalized radial Euler type operator. An associated complex variable theory and generalized Cauchy-Euler equations are obtained. Further, it is shown that the Airy expansions can be mapped onto the Bessel Calculus of Bochner, Cholewinski and Haimo.
Supersymmetric two-particle equations
Sissakyan, A.N.; Skachkov, N.B.; Shevchenko, O.Yu.
In the framework of the scalar superfield model, a particular case of which is the well-known Wess-Zumino model, the supersymmetric Schwinger equations are found. On their basis with the use of the second Legendre transformation the two-particle supersymmetric Edwards and Bethe-Salpeter equations are derived. A connection of the kernels and inhomogeneous terms of these equations with generating functional of the second Legendre transformation is found
Introduction to ordinary differential equations
Rabenstein, Albert L
Introduction to Ordinary Differential Equations is a 12-chapter text that describes useful elementary methods of finding solutions using ordinary differential equations. This book starts with an introduction to the properties and complex variable of linear differential equations. Considerable chapters covered topics that are of particular interest in applications, including Laplace transforms, eigenvalue problems, special functions, Fourier series, and boundary-value problems of mathematical physics. Other chapters are devoted to some topics that are not directly concerned with finding solutio
Electronic representation of wave equation
Veigend, Petr; Kunovský, Jiří, E-mail: [email protected]; Kocina, Filip; Nečasová, Gabriela; Valenta, Václav [University of Technology, Faculty of Information Technology, Božetěchova 2, 612 66 Brno (Czech Republic); Šátek, Václav [IT4Innovations, VŠB Technical University of Ostrava, 17. listopadu 15/2172, 708 33 Ostrava-Poruba (Czech Republic); University of Technology, Faculty of Information Technology, Božetěchova 2, 612 66 Brno (Czech Republic)
The Taylor series method for solving differential equations represents a non-traditional way of a numerical solution. Even though this method is not much preferred in the literature, experimental calculations done at the Department of Intelligent Systems of the Faculty of Information Technology of TU Brno have verified that the accuracy and stability of the Taylor series method exceeds the currently used algorithms for numerically solving differential equations. This paper deals with the solution of the Telegraph equation using modelling of a series of small pieces of the wire. The corresponding differential equations are solved by the Modern Taylor Series Method.
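As a minimal sketch of the Taylor series idea on a toy problem (y' = -y, not the paper's telegraph-line model): each higher derivative yields the next Taylor term by a simple recurrence, and the order is chosen adaptively by stopping when terms fall below a tolerance.

    def taylor_step(y, h, tol=1e-12, max_order=30):
        # For y' = -y the k-th Taylor term obeys term_k = term_{k-1} * (-h) / k,
        # so one step sums terms until they drop below the tolerance.
        term, total = y, y
        for k in range(1, max_order + 1):
            term *= -h / k
            total += term
            if abs(term) < tol:
                break
        return total

    y, h = 1.0, 0.1
    for _ in range(10):
        y = taylor_step(y, h)
    print(y)  # ~ exp(-1) = 0.3678794...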
Generalized Lorentz-Force equations
Yamaleev, R.M.
Guided by Nambu (n+1)-dimensional phase space formalism we build a new system of dynamic equations. These equations describe a dynamic state of the corporeal system composed of n subsystems. The dynamic equations are formulated in terms of dynamic variables of the subsystems as well as in terms of dynamic variables of the corporeal system. These two sets of variables are related respectively as roots and coefficients of the n-degree polynomial equation. In the special n=2 case, this formalism reproduces relativistic dynamics for the charged spinning particles
The forced nonlinear Schroedinger equation
Kaup, D.J.; Hansen, P.J.
The nonlinear Schroedinger equation describes the behaviour of a radio frequency wave in the ionosphere near the reflexion point where nonlinear processes are important. A simple model of this phenomenon leads to the forced nonlinear Schroedinger equation in terms of a nonlinear boundary value problem. A WKB analysis of the time evolution equations for the nonlinear Schroedinger equation in the inverse scattering transform formalism gives a crude order of magnitude estimation of the qualitative behaviour of the solutions. This estimation is compared with the numerical solutions. (D.Gy.)
Correct Linearization of Einstein's Equations
Rabounski D.
Regularly Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even if gravitation, the space rotation and Christoffel's symbols are present. The correct linearization proves: the Einstein equations are completely compatible with weak waves of the metric.
The Dirac equation for accountants
Ord, G.N.
In the context of relativistic quantum mechanics, derivations of the Dirac equation usually take the form of plausibility arguments based on experience with the Schroedinger equation. The primary reason for this is that we do not know what wavefunctions physically represent, so derivations have to rely on formal arguments. There is however a context in which the Dirac equation in one dimension is directly related to a classical generating function. In that context, the derivation of the Dirac equation is an exercise in counting. We provide this derivation here and discuss its relationship to quantum mechanics
Difference equations theory, applications and advanced topics
Mickens, Ronald E
THE DIFFERENCE CALCULUS: Genesis of Difference Equations; Definitions; Derivation of Difference Equations; Existence and Uniqueness Theorem; Operators Δ and E; Elementary Difference Operators; Factorial Polynomials; Operator Δ⁻¹ and the Sum Calculus. FIRST-ORDER DIFFERENCE EQUATIONS: Introduction; General Linear Equation; Continued Fractions; A General First-Order Equation: Geometrical Methods; A General First-Order Equation: Expansion Techniques. LINEAR DIFFERENCE EQUATIONS: Introduction; Linearly Independent Functions; Fundamental Theorems for Homogeneous Equations; Inhomogeneous Equations; Second-Order Equations; Sturm-Liouville Difference Equations. LINEAR DIFFERENCE EQUATIONS (CONTINUED): Introduction; Homogeneous Equations; Construction of a Difference Equation Having Specified Solutions; Relationship between Linear Difference and Differential Equations; Inhomogeneous Equations: Method of Undetermined Coefficients; Inhomogeneous Equations: Operator Methods; z-Transform Method. SYSTEMS OF DIFFERENCE EQUATIONS. LINEAR PARTIAL DIFFERENCE EQUATI...
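For orientation, the two operators named in the table of contents, and the factorial polynomial they act on, are standard (editorial summary, not quoted from the book):

$$ \Delta f(n) = f(n+1) - f(n), \qquad E f(n) = f(n+1), \qquad n^{(k)} = n(n-1)\cdots(n-k+1), $$

with $\Delta n^{(k)} = k\,n^{(k-1)}$, the discrete analogue of $\frac{d}{dx}x^k = kx^{k-1}$, which is why factorial polynomials replace powers throughout the sum calculus.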
Differential equations a dynamical systems approach ordinary differential equations
Hubbard, John H
This is a corrected third printing of the first part of the text Differential Equations: A Dynamical Systems Approach written by John Hubbard and Beverly West. The authors' main emphasis in this book is on ordinary differential equations. The book is most appropriate for upper-level undergraduate and graduate students in the fields of mathematics, engineering, and applied mathematics, as well as the life sciences, physics and economics. Traditional courses on differential equations focus on techniques leading to solutions. Yet most differential equations do not admit solutions which can be written in elementary terms. The authors have taken the view that a differential equation defines functions; the object of the theory is to understand the behavior of these functions. The tools the authors use include qualitative and numerical methods besides the traditional analytic methods. The companion software, MacMath, is designed to bring these notions to life.
Solutions to Arithmetic Convolution Equations
Glöckner, H.; Lucht, L.G.; Porubský, Štefan
Roč. 135, č. 6 (2007), s. 1619-1629 ISSN 0002-9939 R&D Projects: GA ČR GA201/04/0381 Institutional research plan: CEZ:AV0Z10300504 Keywords: arithmetic functions * Dirichlet convolution * polynomial equations * analytic equations * topological algebras * holomorphic functional calculus Subject RIV: BA - General Mathematics Impact factor: 0.520, year: 2007
On Degenerate Partial Differential Equations
Chen, Gui-Qiang G.
Some recent developments, including recent results, ideas, techniques, and approaches, in the study of degenerate partial differential equations are surveyed and analyzed. Several examples of nonlinear degenerate, even mixed, partial differential equations are presented, which arise naturally in longstanding, fundamental problems in fluid mechanics and differential geometry. The solution of these fundamental problems requires a deep understanding of nonlinear degenerate parti...
Differential equations a concise course
Bear, H S
Concise introduction for undergraduates includes, among other topics, a survey of first order equations, discussions of complex-valued solutions, linear differential operators, inverse operators and variation of parameters method, the Laplace transform, Picard's existence theorem, and an exploration of various interpretations of systems of equations. Numerous clearly stated theorems and proofs, examples, and problems followed by solutions.
Differential equations and finite groups
Put, Marius van der; Ulmer, Felix
The classical solution of the Riemann-Hilbert problem attaches to a given representation of the fundamental group a regular singular linear differential equation. We present a method to compute this differential equation in the case of a representation with finite image. The approach uses Galois
Saturation and linear transport equation
Kutak, K.
We show that the GBW saturation model provides an exact solution to the one dimensional linear transport equation. We also show that it is motivated by the BK equation considered in the saturated regime when the diffusion and the splitting term in the diffusive approximation are balanced by the nonlinear term. (orig.)
Lie symmetries in differential equations
Pleitez, V.
A study of ordinary and partial differential equations using the symmetries of Lie groups is made. Following this study, applications to the Helmholtz, sine-Gordon, Korteweg-de Vries, Burgers, Benjamin-Bona-Mahony, and wave equations are carried out.
Introduction to nonlinear dispersive equations
Linares, Felipe
This textbook introduces the well-posedness theory for initial-value problems of nonlinear, dispersive partial differential equations, with special focus on two key models, the Korteweg–de Vries equation and the nonlinear Schrödinger equation. A concise and self-contained treatment of background material (the Fourier transform, interpolation theory, Sobolev spaces, and the linear Schrödinger equation) prepares the reader to understand the main topics covered: the initial-value problem for the nonlinear Schrödinger equation and the generalized Korteweg–de Vries equation, properties of their solutions, and a survey of general classes of nonlinear dispersive equations of physical and mathematical significance. Each chapter ends with an expert account of recent developments and open problems, as well as exercises. The final chapter gives a detailed exposition of local well-posedness for the nonlinear Schrödinger equation, taking the reader to the forefront of recent research. The second edition of Introdu...
Students' Understanding of Quadratic Equations
López, Jonathan; Robles, Izraim; Martínez-Planell, Rafael
Action-Process-Object-Schema theory (APOS) was applied to study student understanding of quadratic equations in one variable. This required proposing a detailed conjecture (called a genetic decomposition) of mental constructions students may do to understand quadratic equations. The genetic decomposition which was proposed can contribute to help…
Solving equations by topological methods
Lech Górniewicz
In this paper we survey the most important results from topological fixed point theory which can be directly applied to differential equations. Some new formulations are presented. We believe that our article will be useful for analysts applying topological fixed point theory in nonlinear analysis and in differential equations.
Generalized Fermat equations: A miscellany
Bennett, M.A.; Chen, I.; Dahmen, S.R.; Yazdani, S.
This paper is devoted to the generalized Fermat equation $x^p + y^q = z^r$, where p, q and r are integers, and x, y and z are nonzero coprime integers. We begin by surveying the exponent triples (p, q, r), including a number of infinite families, for which the equation has been solved to date, detailing
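For context, the hyperbolic case, in which only finitely many solutions are conjectured to exist, is characterized by (editorial restatement):

$$ x^p + y^q = z^r, \qquad \frac{1}{p} + \frac{1}{q} + \frac{1}{r} < 1, $$

with known sporadic solutions such as $1^p + 2^3 = 3^2$ and $2^5 + 7^2 = 3^4$.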
Equation with the many fathers
Kragh, Helge
In this essay I discuss the origin and early development of the first relativistic wave equation, known as the Klein-Gordon equation. In 1926 several physicists, among them Klein, Fock, Schrödinger, and de Broglie, announced this equation as a candidate for a relativistic generalization of the usual Schrödinger equation. In most of the early versions the Klein-Gordon equation was connected with the general theory of relativity. Klein and some other physicists attempted to express quantum mechanics within a five-dimensional unified theory, embracing general relativity as well as electrodynamics. Although this ambitious attempt attracted some interest in 1926, its impact on the mainstream of development in quantum mechanics was virtually nil.
The relativistic electron wave equation
Dirac, P.A.M.
The paper was presented at the European Conference on Particle Physics held in Budapest between the 4th and 9th of July 1977. A short review is given of the birth of the relativistic electron wave equation. After Schroedinger had shown the equivalence of his wave mechanics and the matrix mechanics of Heisenberg, a general transformation theory was developed by the author. This theory required a relativistic wave equation linear in $\partial/\partial t$. As the Klein-Gordon equation available at this time did not satisfy this condition, the development of a new equation became necessary. The equation which was found gave the value of the electron spin and magnetic moment automatically. (D.P.)
Higher order field equations. II
Tolhoek, H.A.
In a previous paper wave propagation was studied according to a sixth-order partial differential equation involving a complex mass M. The corresponding Yang-Feldman integral equations (indicated as SM-YF equations) were formulated using modified Green's functions $G_R^M(x)$ and $G_A^M(x)$, which incorporate the partial differential equation together with certain boundary conditions. In this paper certain limit properties of these modified Green's functions are derived: (a) it is shown that for $|M| \to \infty$ the Green's functions $G_R^M(x)$ and $G_A^M(x)$ approach the Green's functions $\Delta_R(x)$ and $\Delta_A(x)$ of the corresponding KG equation (Klein-Gordon equation); (b) it is further shown that the asymptotic behaviour of $G_R^M(x)$ and $G_A^M(x)$ is the same as that of $\Delta_R(x)$ and $\Delta_A(x)$, and also the same as that of $D_R(x)$ and $D_A(x)$ for $t \to \pm\infty$, where $D_R$ and $D_A$ are the Green's functions for the KG equation with mass zero. It is essential to take limits in the sense of distribution theory in both cases (a) and (b). Property (b) indicates that the wave-propagation properties of the SM-YF equations, the KG equation with finite mass and the KG equation with mass zero are closely related in an asymptotic sense. (Auth.)
Equating TIMSS Mathematics Subtests with Nonlinear Equating Methods Using NEAT Design: Circle-Arc Equating Approaches
Ozdemir, Burhanettin
The purpose of this study is to equate Trends in International Mathematics and Science Study (TIMSS) mathematics subtest scores obtained from TIMSS 2011 to scores obtained from TIMSS 2007 form with different nonlinear observed score equating methods under Non-Equivalent Anchor Test (NEAT) design where common items are used to link two or more test…
Neoclassical MHD equations for tokamaks
Callen, J.D.; Shaing, K.C.
The moment-equation approach to neoclassical-type processes is used to derive the flows, currents and resistive MHD-like equations for studying equilibria and instabilities in axisymmetric tokamak plasmas operating in the banana-plateau collisionality regime ($\nu_* \approx 1$). The resultant ''neoclassical MHD'' equations differ from the usual reduced equations of resistive MHD primarily by the addition of the important viscous relaxation effects within a magnetic flux surface. The primary effects of the parallel (poloidal) viscous relaxation are: (1) rapid (approx. $\nu_i$) damping of the poloidal ion flow, so the residual flow is only toroidal; (2) addition of the bootstrap-current contribution to Ohm's law; and (3) an enhanced (by $B^2/B_\theta^2$) polarization-drift-type term and consequent enhancement of the perpendicular dielectric constant due to parallel flow inertia, which causes the equations to depend only on the poloidal magnetic field $B_\theta$. Gyroviscosity (or diamagnetic viscosity) effects are included to properly treat the diamagnetic flow effects. The nonlinear form of the neoclassical MHD equations is derived and shown to satisfy an energy conservation equation with dissipation arising from Joule and poloidal viscous heating, and transport due to classical and neoclassical diffusion.
Approximate solutions to Mathieu's equation
Wilkinson, Samuel A.; Vogt, Nicolas; Golubev, Dmitry S.; Cole, Jared H.
Mathieu's equation has many applications throughout theoretical physics. It is especially important to the theory of Josephson junctions, where it is equivalent to Schrödinger's equation. Mathieu's equation can be easily solved numerically, however there exists no closed-form analytic solution. Here we collect various approximations which appear throughout the physics and mathematics literature and examine their accuracy and regimes of applicability. Particular attention is paid to quantities relevant to the physics of Josephson junctions, but the arguments and notation are kept general so as to be of use to the broader physics community.
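The standard form referred to here is (editorial restatement):

$$ \frac{d^2 y}{dx^2} + \big(a - 2q\cos 2x\big)\,y = 0, $$

where, in the Josephson-junction reading, $a$ roughly encodes the energy eigenvalue and $q$ the strength of the periodic (Josephson) potential.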
Soliton equations and Hamiltonian systems
Dickey, L A
The theory of soliton equations and integrable systems has developed rapidly during the last 30 years, with numerous applications in mechanics and physics. For a long time books in this field were not written, while the flood of papers was overwhelming: many hundreds, maybe thousands of them. All this output followed one single work by Gardner, Greene, Kruskal, and Miura on the Korteweg-de Vries equation (KdV), which had seemed to be merely an unassuming equation of mathematical physics describing waves in shallow water. Besides its obvious practical use, this theory is attractive also becau
Galois theory of difference equations
Put, Marius
This book lays the algebraic foundations of a Galois theory of linear difference equations and shows its relationship to the analytic problem of finding meromorphic functions asymptotic to formal solutions of difference equations. Classically, this latter question was attacked by Birkhoff and Trjitzinsky, and the present work corrects and greatly generalizes their contributions. In addition, results are presented concerning the inverse problem in Galois theory, effective computation of Galois groups, algebraic properties of sequences, phenomena in positive characteristic, and q-difference equations. The book is aimed at advanced graduate students and researchers.
Integral equation methods for electromagnetics
Volakis, John
This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo
Bridging the Knowledge Gaps between Richards' Equation and Budyko Equation
Wang, D.
The empirical Budyko equation represents the partitioning of mean annual precipitation into evaporation and runoff. Richards' equation, based on Darcy's law, represents the movement of water in unsaturated soils. The linkage between Richards' equation and Budyko equation is presented by invoking the empirical Soil Conservation Service curve number (SCS-CN) model for computing surface runoff at the event-scale. The basis of the SCS-CN method is the proportionality relationship, i.e., the ratio of continuing abstraction to its potential is equal to the ratio of surface runoff to its potential value. The proportionality relationship can be derived from the Richards' equation for computing infiltration excess and saturation excess models at the catchment scale. Meanwhile, the generalized proportionality relationship is demonstrated as the common basis of SCS-CN method, monthly "abcd" model, and Budyko equation. Therefore, the linkage between Darcy's law and the emergent pattern of mean annual water balance at the catchment scale is presented through the proportionality relationship.
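The proportionality relationship and the resulting SCS-CN runoff formula invoked above can be written as (editorial restatement of the standard method):

$$ \frac{F}{S} = \frac{Q}{P - I_a}, \qquad Q = \frac{(P - I_a)^2}{P - I_a + S}, $$

where $P$ is event precipitation, $Q$ surface runoff, $I_a$ the initial abstraction, $F = P - I_a - Q$ the continuing abstraction, and $S$ its potential maximum; the second formula follows from the first by substituting for $F$.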
Iterative Splitting Methods for Differential Equations
Geiser, Juergen
Iterative Splitting Methods for Differential Equations explains how to solve evolution equations via novel iterative-based splitting methods that efficiently use computational and memory resources. It focuses on systems of parabolic and hyperbolic equations, including convection-diffusion-reaction equations, heat equations, and wave equations. In the theoretical part of the book, the author discusses the main theorems and results of the stability and consistency analysis for ordinary differential equations. He then presents extensions of the iterative splitting methods to partial differential
Nonlinear integrodifferential equations as discrete systems
Tamizhmani, K. M.; Satsuma, J.; Grammaticos, B.; Ramani, A.
We analyse a class of integrodifferential equations of the `intermediate long wave' (ILW) type. We show that these equations can be formally interpreted as discrete, differential-difference systems. This allows us to link equations of this type with previous results of ours involving differential-delay equations and, on the basis of this, propose new integrable equations of ILW type. Finally, we extend this approach to pure difference equations and propose ILW forms for the discrete lattice KdV equation.
Direct 'delay' reductions of the Toda equation
Joshi, Nalini
A new direct method of obtaining reductions of the Toda equation is described. We find a canonical and complete class of all possible reductions under certain assumptions. The resulting equations are ordinary differential-difference equations, sometimes referred to as delay-differential equations. The representative equation of this class is hypothesized to be a new version of one of the classical Painleve equations. The Lax pair associated with this equation is obtained, also by reduction. (fast track communication)
Integral equation for Coulomb problem
Sasakawa, T.
For short-range potentials, an inhomogeneous (homogeneous) Lippmann-Schwinger integral equation of the Fredholm type yields the wave function of a scattering (bound) state. For the Coulomb potential this statement is no longer valid. It has long been considered difficult to express the Coulomb wave function in the form of an integral equation with the Coulomb potential as the perturbation. In the present paper, the author shows that an inhomogeneous integral equation of Volterra type with the Coulomb potential as the perturbation can be constructed for both the scattering and the bound states. The equation yielding the binding energy is given in integral form. The present treatment is easily extended to coupled Coulomb problems.
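Schematically, the contrast is between a Fredholm kernel with fixed integration limits and a Volterra kernel with a variable upper limit (an editorial sketch with generic kernels, not the paper's exact equations):

$$ \psi(r) = \phi(r) + \int_0^{\infty} G_0(r,r')\,V(r')\,\psi(r')\,dr' \;\;\text{(Fredholm)}, \qquad \psi(r) = \phi(r) + \int_0^{r} K(r,r')\,V(r')\,\psi(r')\,dr' \;\;\text{(Volterra)}. $$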
Geophysical interpretation using integral equations
Eskola, L
Along with the general development of numerical methods in the pure and applied sciences, the ability to apply integral equations to geophysical modelling has improved considerably within the last thirty years or so. This is due to the successful derivation of integral equations that are applicable to the modelling of complex structures, and to efficient numerical algorithms for their solution. A significant stimulus for this development has been the advent of fast digital computers. The purpose of this book is to give an idea of the principles by which boundary-value problems describing geophysical models can be converted into integral equations. The end results are the integral formulas and integral equations that form the theoretical framework for practical applications. The details of mathematical analysis have been kept to a minimum. Numerical algorithms are discussed only in connection with some illustrative examples involving well-documented numerical modelling results. The reader is assumed to have a back...
Singularity: Raychaudhuri equation once again
Keywords: Cosmology; Raychaudhuri equation; Universe; quantum gravity; loop quantum gravity. ... than the observation verifying the prediction of theory. This gave ... which was now expanding, would have had a singular beginning in a hot Big Bang.
Kinetic equations in dirty superconductors
Kraehenbuehl, Y.
Kinetic equations for superconductors in the dirty limit are derived using a method developed for superfluid systems, which allows a systematic expansion in small parameters; exact charge conservation is obeyed. (orig.)
Kinks and the Dirac equation
Skyrme, T.H.R.
In a model quantum theory of interacting mesons, the motion of certain conserved particle-like structures is discussed. It is shown how collective coordinates may be introduced to describe them, leading, in lowest approximation, to a Dirac equation. (author)
Solving Differential Equations in R
Although R is still predominantly applied for statistical analysis and graphical representation, it is rapidly becoming more suitable for mathematical computing. One of the fields where considerable progress has been made recently is the solution of differential equations. Here w...
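As a hedged cross-language illustration of the kind of call the record has in mind (R's differential-equation tooling, e.g. the deSolve package), the same task in Python with SciPy; the logistic test problem and its parameters are illustrative assumptions:

```python
# Editorial sketch: integrating a simple ODE, analogous to R's
# deSolve::ode(), here with SciPy's solve_ivp.
from scipy.integrate import solve_ivp

def logistic(t, y, r=1.0, K=10.0):
    # dy/dt = r*y*(1 - y/K), the logistic growth model
    return r * y * (1 - y / K)

sol = solve_ivp(logistic, t_span=(0, 10), y0=[0.1])
print(sol.y[0, -1])  # approaches the carrying capacity K = 10
```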
Wave-equation dispersion inversion
KAUST Repository
Li, Jing; Feng, Zongcai; Schuster, Gerard T.
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained
On the equations of motion
Jannussis, A.; Streclas, A.; Sourlas, D.; Vlachos, K.
Using the theorem of the derivative of a function of operators with respect to any parameter, we can find the equation of motion of a system in classical mechanics, in canonical as well as in non-canonical mechanics
Quantum-statistical kinetic equations
Loss, D.; Schoeller, H.
Considering a homogeneous normal quantum fluid consisting of identical interacting fermions or bosons, the authors derive an exact quantum-statistical generalized kinetic equation with a collision operator given as an explicit cluster series where exchange effects are included through renormalized Liouville operators. This new result is obtained by applying a recently developed superoperator formalism (Liouville operators, cluster expansions, symmetrized projectors, $P_q$-rule, etc.) to nonequilibrium systems described by a density operator $\rho(t)$ which obeys the von Neumann equation. By means of this formalism a factorization theorem is proven (being essential for obtaining closed equations), and partial resummations (leading to renormalized quantities) are performed. As an illustrative application, the quantum-statistical versions (including exchange effects due to Fermi-Dirac or Bose-Einstein statistics) of the homogeneous Boltzmann (binary collisions) and Choh-Uhlenbeck (triple collisions) equations are derived.
Lorentz Covariance of Langevin Equation
Koide, T.; Denicol, G.S.; Kodama, T.
Relativistic covariance of a Langevin type equation is discussed. The requirement of Lorentz invariance generates an entanglement between the force and noise terms so that the noise itself should not be a covariant quantity. (author)
Equational theories of tropical semirings
Aceto, Luca; Esik, Zoltan; Ingolfsdottir, Anna
This paper studies the equational theories of various exotic semirings presented in the literature. Exotic semirings are semirings whose underlying carrier set is some subset of the set of real numbers equipped with binary operations of minimum or maximum as sum, and addition as product. Two prime examples of such structures are the (max,+) semiring and the tropical semiring. It is shown that none of the exotic semirings commonly considered in the literature has a finite basis for its equations, and that similar results hold for the commutative idempotent weak semirings that underlie them. For each of these commutative idempotent weak semirings, the paper offers characterizations of the equations that hold in them, decidability results for their equational theories, explicit descriptions of the free algebras in the varieties they generate, and relative axiomatization results.
Wave equations for pulse propagation
Theoretical discussions of the propagation of pulses of laser radiation through atomic or molecular vapor rely on a number of traditional approximations for idealizing the radiation and the molecules, and for quantifying their mutual interaction by various equations of propagation (for the radiation) and excitation (for the molecules). In treating short-pulse phenomena it is essential to consider coherent excitation phenomena of the sort that is manifest in Rabi oscillations of atomic or molecular populations. Such processes are not adequately treated by rate equations for excitation nor by rate equations for radiation. As part of a more comprehensive treatment of the coupled equations that describe propagation of short pulses, this memo presents background discussion of the equations that describe the field. This memo discusses the origin, in Maxwell's equations, of the wave equation used in the description of pulse propagation. It notes the separation into lamellar and solenoidal (or longitudinal and transverse) and positive and negative frequency parts. It mentions the possibility of separating the polarization field into linear and nonlinear parts, in order to define a susceptibility or index of refraction and, from these, a phase and group velocity. The memo discusses various ways of characterizing the polarization characteristics of plane waves, that is, of parameterizing a transverse unit vector, such as the Jones vector, the Stokes vector, and the Poincare sphere. It discusses the connection between macroscopically defined quantities, such as the intensity or, more generally, the Stokes parameters, and microscopic field amplitudes. The material presented here is a portion of a more extensive treatment of propagation to be presented separately. The equations presented here have been described in various books and articles. They are collected here as a summary and review of theory needed when treating pulse propagation
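The wave equation whose origin in Maxwell's equations the memo discusses has, for the transverse (solenoidal) part of the field driven by the polarization, the familiar form (editorial restatement in SI units, assuming $\nabla\cdot\mathbf{E}=0$):

$$ \nabla^2 \mathbf{E} - \frac{1}{c^2}\,\frac{\partial^2 \mathbf{E}}{\partial t^2} = \mu_0\,\frac{\partial^2 \mathbf{P}}{\partial t^2}, $$

and splitting $\mathbf{P} = \epsilon_0\chi\mathbf{E} + \mathbf{P}_{NL}$ into linear and nonlinear parts yields the refractive index $n^2 = 1 + \chi$ and, from it, phase and group velocities.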
Feynman integrals and difference equations
Moch, S.; Schneider, C.
We report on the calculation of multi-loop Feynman integrals for single-scale problems by means of difference equations in Mellin space. The solution to these difference equations in terms of harmonic sums can be constructed algorithmically over difference fields, the so-called $\Pi\Sigma^*$-fields. We test the implementation of the Mathematica package Sigma on examples from recent higher-order perturbative calculations in Quantum Chromodynamics. (orig.)
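For orientation, the harmonic sums mentioned here are the nested Mellin-space sums (editorial restatement of the standard definition):

$$ S_a(N) = \sum_{i=1}^{N} \frac{(\operatorname{sgn} a)^i}{i^{|a|}}, \qquad S_{a,\vec{b}}(N) = \sum_{i=1}^{N} \frac{(\operatorname{sgn} a)^i}{i^{|a|}}\, S_{\vec{b}}(i), $$

and the simplest difference equation they solve is telescopic: $f(N+1) - f(N) = g(N)$ gives $f(N) = f(1) + \sum_{i=1}^{N-1} g(i)$.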
Hidden Statistics of Schroedinger Equation
Zak, Michail
Work was carried out in determination of the mathematical origin of randomness in quantum mechanics and creating a hidden statistics of Schr dinger equation; i.e., to expose the transitional stochastic process as a "bridge" to the quantum world. The governing equations of hidden statistics would preserve such properties of quantum physics as superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods.
Numerical solution of Boltzmann's equation
Sod, G.A.
The numerical solution of Boltzmann's equation is considered for a gas model consisting of rigid spheres by means of Hilbert's expansion. If only the first two terms of the expansion are retained, Boltzmann's equation reduces to the Boltzmann-Hilbert integral equation. Successive terms in the Hilbert expansion are obtained by solving the same integral equation with a different source term. The Boltzmann-Hilbert integral equation is solved by a new very fast numerical method. The success of the method rests upon the simultaneous use of four judiciously chosen expansions; Hilbert's expansion for the distribution function, another expansion of the distribution function in terms of Hermite polynomials, the expansion of the kernel in terms of the eigenvalues and eigenfunctions of the Hilbert operator, and an expansion involved in solving a system of linear equations through a singular value decomposition. The numerical method is applied to the study of the shock structure in one space dimension. Numerical results are presented for Mach numbers of 1.1 and 1.6. 94 refs, 7 tables, 1 fig
Computational partial differential equations using Matlab
Li, Jichun
Brief Overview of Partial Differential Equations: The parabolic equations; The wave equations; The elliptic equations; Differential equations in broader areas; A quick review of numerical methods for PDEs. Finite Difference Methods for Parabolic Equations: Introduction; Theoretical issues: stability, consistence, and convergence; 1-D parabolic equations; 2-D and 3-D parabolic equations; Numerical examples with MATLAB codes. Finite Difference Methods for Hyperbolic Equations: Introduction; Some basic difference schemes; Dissipation and dispersion errors; Extensions to conservation laws; The second-order hyperbolic PDE
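As a flavour of the schemes the book covers, a minimal explicit finite-difference (FTCS) solver for the 1-D heat equation, written here in Python rather than the book's MATLAB; the grid sizes and initial condition are illustrative assumptions:

```python
# Editorial sketch: FTCS scheme for u_t = alpha * u_xx on [0, L] with
# u = 0 at both ends. Stability requires r = alpha*dt/dx**2 <= 1/2.
import numpy as np

alpha, L, T = 1.0, 1.0, 0.1
nx, nt = 51, 1000
dx, dt = L / (nx - 1), T / nt
r = alpha * dt / dx**2
assert r <= 0.5, "explicit scheme would be unstable"

x = np.linspace(0, L, nx)
u = np.sin(np.pi * x)  # initial condition
for _ in range(nt):
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
print(u.max())  # ~ exp(-pi**2 * T) ~ 0.373 for this initial condition
```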
Linear determining equations for differential constraints
Kaptsov, O V
A construction of differential constraints compatible with partial differential equations is considered. Certain linear determining equations with parameters are used to find such differential constraints. They generalize the classical determining equations used in the search for admissible Lie operators. As applications of this approach, equations of an ideal incompressible fluid and non-linear heat equations are discussed.
Equationally Compact Acts : Coproducts / Peeter Normak
Index Scriptorium Estoniae
Normak, Peeter
In this article equational compactness of acts and its generalizations are discussed. As equational compactness does not carry over to coproducts, a slight generalization of c-equational compactness is introduced. It is proved that a coproduct of acts is c-equationally compact if and only if all components are c-equationally compact.
Exact results for the Boltzmann equation and Smoluchowski's coagulation equation
Hendriks, E.M.
Almost no analytical solutions have been found for realistic intermolecular forces, largely due to the complicated structure of the collision term which calls for the construction of simplified models, in which as many physical properties are maintained as possible. In the first three chapters of this thesis such model Boltzmann equations are studied. Only spatially homogeneous gases with isotropic distribution functions are considered. Chapter I considers transition kernels, chapter II persistent scattering models and chapter III very hard particles. The second part of this dissertation deals with Smoluchowski's coagulation equation for the size distribution function in a coagulating system, with chapters devoted to the following topics: kinetics of gelation and universality, coagulation equations with gelation and exactly soluble models of nucleation. (Auth./C.F.)
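The equation studied in the second part has the standard form (editorial restatement):

$$ \frac{\partial c(x,t)}{\partial t} = \frac{1}{2}\int_0^{x} K(x-y,y)\,c(x-y,t)\,c(y,t)\,dy - c(x,t)\int_0^{\infty} K(x,y)\,c(y,t)\,dy, $$

where $c(x,t)$ is the concentration of clusters of size $x$ and $K$ the coagulation kernel; the exactly soluble cases usually cited are $K = 1$, $K = x + y$ and $K = xy$, the last of which exhibits gelation in finite time.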
Abstract methods in partial differential equations
Carroll, Robert W
Detailed, self-contained treatment examines modern abstract methods in partial differential equations, especially abstract evolution equations. Suitable for graduate students with some previous exposure to classical partial differential equations. 1969 edition.
Linear integral equations and soliton systems
Quispel, G.R.W.
A study is presented of classical integrable dynamical systems in one temporal and one spatial dimension. The direct linearizations are given of several nonlinear partial differential equations, for example the Korteweg-de Vries equation, the modified Korteweg-de Vries equation, the sine-Gordon equation, the nonlinear Schroedinger equation, and the equation of motion for the isotropic Heisenberg spin chain; the author also discusses several relations between these equations. The Baecklund transformations of these partial differential equations are treated on the basis of a singular transformation of the measure (or equivalently of the plane-wave factor) occurring in the corresponding linear integral equations, and the Baecklund transformations are used to derive the direct linearization of a chain of so-called modified partial differential equations. Finally it is shown that the singular linear integral equations lead in a natural way to the direct linearizations of various nonlinear difference-difference equations. (Auth.)
ON THE EQUIVALENCE OF THE ABEL EQUATION
This article uses the reflecting function of Mironenko to study some complicated differential equations which are equivalent to the Abel equation. The results are applied to discuss the behavior of solutions of these complicated differential equations.
Exact solitary waves of the Fisher equation
Kudryashov, Nikolai A.
A new method is presented to search for exact solutions of nonlinear differential equations. This approach is used to look for exact solutions of the Fisher equation. New exact solitary waves of the Fisher equation are given.
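For reference, the Fisher equation and its best-known exact solitary wave, the closed-form travelling front usually attributed to Ablowitz and Zeppetella (an editorial note; the paper's new solutions are not reproduced here):

$$ u_t = u_{xx} + u(1-u), \qquad u(x,t) = \Big(1 + C\,e^{x/\sqrt{6} - 5t/6}\Big)^{-2}, $$

a front travelling at the special speed $5/\sqrt{6}$.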
How to obtain the covariant form of Maxwell's equations from the continuity equation
Heras, Jose A
The covariant Maxwell equations are derived from the continuity equation for the electric charge. This result provides an axiomatic approach to Maxwell's equations in which charge conservation is emphasized as the fundamental axiom underlying these equations.
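Schematically, the derivation runs from charge conservation to the inhomogeneous field equations, with the antisymmetry of $F^{\mu\nu}$ guaranteeing consistency (editorial restatement in SI-like notation, not the paper's exact steps):

$$ \partial_\mu J^\mu = 0 \quad\Longrightarrow\quad \partial_\mu F^{\mu\nu} = \mu_0 J^\nu, \qquad \partial_\nu \partial_\mu F^{\mu\nu} \equiv 0. $$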
Extraction of dynamical equations from chaotic data
Rowlands, G.; Sprott, J.C.
A method is described for extracting from a chaotic time series a system of equations whose solution reproduces the general features of the original data even when these are contaminated with noise. The equations facilitate calculation of fractal dimension, Lyapunov exponents and short-term predictions. The method is applied to data derived from numerical solutions of the Logistic equation, the Henon equations, the Lorenz equations and the Roessler equations. 10 refs., 5 figs
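A hedged toy version of the general idea (not the authors' algorithm): fit a polynomial map to a scalar time series by least squares and read the dynamics off the coefficients, here for data from the logistic map:

```python
# Editorial sketch: recover x_{n+1} = f(x_n) from a time series by
# ordinary least squares over a polynomial basis.
import numpy as np

r = 3.9
x = np.empty(500)
x[0] = 0.3
for n in range(499):
    x[n + 1] = r * x[n] * (1 - x[n])  # chaotic logistic data

# Fit x_{n+1} = a + b*x_n + c*x_n**2.
X = np.column_stack([np.ones(499), x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
print(coef)  # ~ [0, 3.9, -3.9], i.e. r*x*(1 - x) recovered
```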
First-order partial differential equations
Rhee, Hyun-Ku; Amundson, Neal R
This first volume of a highly regarded two-volume text is fully usable on its own. After going over some of the preliminaries, the authors discuss mathematical models that yield first-order partial differential equations; motivations, classifications, and some methods of solution; linear and semilinear equations; chromatographic equations with finite rate expressions; homogeneous and nonhomogeneous quasilinear equations; formation and propagation of shocks; conservation equations, weak solutions, and shock layers; nonlinear equations; and variational problems. Exercises appear at the end of mo
Differential equations, mechanics, and computation
Palais, Richard S
This book provides a conceptual introduction to the theory of ordinary differential equations, concentrating on the initial value problem for equations of evolution and with applications to the calculus of variations and classical mechanics, along with a discussion of chaos theory and ecological models. It has a unified and visual introduction to the theory of numerical methods and a novel approach to the analysis of errors and stability of various numerical solution algorithms based on carefully chosen model problems. While the book would be suitable as a textbook for an undergraduate or elementary graduate course in ordinary differential equations, the authors have designed the text also to be useful for motivated students wishing to learn the material on their own or desiring to supplement an ODE textbook being used in a course they are taking with a text offering a more conceptual approach to the subject.
Generalized equations of gravitational field
Stanyukovich, K.P.; Borisova, L.B.
Equations for gravitational fields are obtained on the basis of a generalized Lagrangian $Z = f(R)$ ($R$ is the scalar curvature). Such an approach makes it possible to take into account the evolution of a gravitational ''constant''. An expression for the force $F_i$ as a function of the field variability is obtained. Conservation laws are formulated which differ from the standard ones in that the right-hand side of the new equations contains the quantity $F_i$, which goes to zero in the limiting passage to the standard Einstein theory. An equation of state is derived for cosmological metrics in the particular case $f = bR^{1+\alpha}$ ($b$ = const, $\alpha$ = const).
Numerical optimization using flow equations
Punk, Matthias
We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
Quantum Gross-Pitaevskii Equation
Jutho Haegeman, Damian Draxler, Vid Stojevic, J. Ignacio Cirac, Tobias J. Osborne, Frank Verstraete
We introduce a non-commutative generalization of the Gross-Pitaevskii equation for one-dimensional quantum gases and quantum liquids. This generalization is obtained by applying the time-dependent variational principle to the variational manifold of continuous matrix product states. This allows for a full quantum description of many-body systems, including entanglement and correlations, and thus extends significantly beyond the usual mean-field description of the Gross-Pitaevskii equation, which is known to fail for (quasi) one-dimensional systems. By linearizing around a stationary solution, we furthermore derive an associated generalization of the Bogoliubov-de Gennes equations. This framework is applied to compute the steady-state response amplitude to a periodic perturbation of the potential.
Introductory course on differential equations
Gorain, Ganesh C
Introductory Course on DIFFERENTIAL EQUATIONS provides an excellent exposition of the fundamentals of ordinary and partial differential equations and is ideally suited for a first course for undergraduate students of mathematics, physics and engineering. The aim of this book is to present the elementary theories of differential equations in forms suitable for those students whose main interest in the subject is based on simple mathematical ideas. KEY FEATURES: Discusses the subject in a systematic manner without sacrificing mathematical rigour. A variety of exercises drill the students in problem solving in view of the mathematical theories explained in the book. Worked-out examples are illustrated according to the theories developed in the book, with possible alternatives. An exhaustive collection of problems and the simplicity of presentation differentiate this book from several others. The material contained will help teachers as well as aspiring students preparing for different competitive examinations.
The respiratory system in equations
Maury, Bertrand
The book proposes an introduction to the mathematical modeling of the respiratory system. A detailed introduction on the physiological aspects makes it accessible to a large audience without any prior knowledge on the lung. Different levels of description are proposed, from the lumped models with a small number of parameters (Ordinary Differential Equations), up to infinite dimensional models based on Partial Differential Equations. Besides these two types of differential equations, two chapters are dedicated to resistive networks, and to the way they can be used to investigate the dependence of the resistance of the lung upon geometrical characteristics. The theoretical analysis of the various models is provided, together with state-of-the-art techniques to compute approximate solutions, allowing comparisons with experimental measurements. The book contains several exercises, most of which are accessible to advanced undergraduate students.
Dynamics of partial differential equations
Wayne, C Eugene
This book contains two review articles on the dynamics of partial differential equations that deal with closely related topics but can be read independently. Wayne reviews recent results on the global dynamics of the two-dimensional Navier-Stokes equations. This system exhibits stable vortex solutions: the topic of Wayne's contribution is how solutions that start from arbitrary initial conditions evolve towards stable vortices. Weinstein considers the dynamics of localized states in nonlinear Schrodinger and Gross-Pitaevskii equations that describe many optical and quantum systems. In this contribution, Weinstein reviews recent bifurcations results of solitary waves, their linear and nonlinear stability properties, and results about radiation damping where waves lose energy through radiation. The articles, written independently, are combined into one volume to showcase the tools of dynamical systems theory at work in explaining qualitative phenomena associated with two classes of partial differential equ...
Evolution equations for Killing fields
Coll, B.
The problem of finding necessary and sufficient conditions on the Cauchy data for the Einstein equations which ensure the existence of Killing fields in a neighborhood of an initial hypersurface has been considered recently by Berezdivin, Coll, and Moncrief. Nevertheless, it can be shown that the evolution equations obtained in all these cases are of non-strictly hyperbolic type, and thus the Cauchy data must belong to a special class of functions. We prove here that, for the vacuum and Einstein-Maxwell space-times, and in a coordinate-independent way, one can always choose, as evolution equations for the Killing fields, a strictly hyperbolic system: the above theorems can thus be extended to all Cauchy data for which the Einstein evolution problem has been proved to be well set.
Quasisymmetry equations for conventional stellarators
Pustovitov, V.D.
The general quasisymmetry condition, which demands that $B^2$ be independent of one of the angular Boozer coordinates, is reduced to two equations containing only geometrical characteristics and the helical field of a stellarator. The analysis is performed for conventional stellarators with a planar circular axis using the standard stellarator expansion. As a basis, the invariant quasisymmetry condition is used. The quasisymmetry equations for stellarators are obtained from this condition, also in an invariant form. Simplified analogues of these equations are given for the case when averaged magnetic surfaces are circular shifted tori. It is shown that the quasisymmetry condition can be satisfied, in principle, in a conventional stellarator by a proper choice of two satellite harmonics of the helical field in addition to the main harmonic. Besides, there appears a restriction on the shift of magnetic surfaces. Thus, in general, the problem is closely related to that of a self-consistent description of a configuration. (author)
The generalized good cut equation
Adamo, T M; Newman, E T
The properties of null geodesic congruences (NGCs) in Lorentzian manifolds are a topic of considerable importance. More specifically, NGCs with the special property of being shear-free or asymptotically shear-free (as either infinity or a horizon is approached) have received a great deal of recent attention for a variety of reasons. Such congruences are most easily studied via solutions to what has been referred to as the 'good cut equation' or the 'generalized good cut equation'. It is the purpose of this paper to study these equations and show their relationship to each other. In particular, we show how they all have a four-complex-dimensional manifold (known as H-space, or in a special case as complex Minkowski space) as a solution space.
Integration rules for scattering equations
Baadsgaard, Christian; Bjerrum-Bohr, N.E.J.; Bourjaily, Jacob L.; Damgaard, Poul H.
As described by Cachazo, He and Yuan, scattering amplitudes in many quantum field theories can be represented as integrals that are fully localized on solutions to the so-called scattering equations. Because the number of solutions to the scattering equations grows quite rapidly, the contour of integration involves contributions from many isolated components. In this paper, we provide a simple, combinatorial rule that immediately provides the result of integration against the scattering equation constraints for any Möbius-invariant integrand involving only simple poles. These rules have a simple diagrammatic interpretation that makes the evaluation of any such integrand immediate. Finally, we explain how these rules are related to the computation of amplitudes in the field theory limit of string theory.
Coupled Higgs field equation and Hamiltonian amplitude equation ...
School of Mathematics and Computer Applications, Thapar University, Patiala 147 004, India; Department of Mathematics, Jaypee University of Information Technology, Waknaghat, Distt. Solan 173 234, India ...
... the rational functions are obtained. Keywords: ... differential equations, as is evident from the number of research papers, books and a new symbolic software ... Now using (2.11), (2.14) in (2.8) with $C_1 = 0$ and integrating once we get $P^2 = -\beta$ ...
The nuclear equation of state
Kahana, S.
The role of the nuclear equation of state in determining the fate of the collapsing cores of massive stars is examined in light of both recent theoretical advances in this subject and recent experimental measurements with relativistic heavy ions. The difficulties existing in attempts to bring the softer nuclear matter apparently required by the theory of Type II supernovae into consonance with the heavy-ion data are discussed. Relativistic mean field theory is introduced as a candidate for derivation of the equation of state, and a simple form for the saturation compressibility is obtained. 28 refs., 4 figs., 1 tab
Kinetic equations with pairing correlations
Fauser, R.
The Gorkov equations are derived for a general non-equilibrium system. The Gorkov factorization is generalized by the cumulant expansion of the two-particle correlation and by a generalized Wick theorem in the case of a perturbation expansion. A stationary solution for the Green functions in the Schwinger-Keldysh formalism is presented, taking into account pairing correlations. In particular, the effects of collisional broadening on the spectral functions and Green functions are discussed. Kinetic equations are derived in the quasi-particle approximation and in the case of particles with width. Explicit expressions for the self-energies are given. (orig.)
Partial differential equations an introduction
Colton, David
Intended for a college senior or first-year graduate-level course in partial differential equations, this text offers students in mathematics, engineering, and the applied sciences a solid foundation for advanced studies in mathematics. Classical topics presented in a modern context include coverage of integral equations and basic scattering theory. This complete and accessible treatment includes a variety of examples of inverse problems arising from improperly posed applications. Exercises at the ends of chapters, many with answers, offer a clear progression in developing an understanding of
Geometric approach to soliton equations
Sasaki, R.
A class of nonlinear equations that can be solved in terms of an n×n scattering problem is investigated. A systematic geometric method of exploiting conservation laws and related equations, the so-called prolongation structure, is worked out. The n×n problem is reduced to (n-1)×(n-1) problems and finally to 2×2 problems, which have been comprehensively investigated recently by the author. A general method of deriving the infinite number of polynomial conservation laws for an n×n problem is presented. The cases of 3×3 and 2×2 problems are discussed explicitly. (Auth.)
Sensitivity for the Smoluchowski equation
Bailleul, I F
This paper investigates the question of sensitivity of the solutions $\mu_t^\lambda$ of the Smoluchowski equation on $\mathbb{R}_+^*$ with respect to the parameters $\lambda$ in the interaction kernel $K_\lambda$. It is proved that $\mu_t^\lambda$ is a $C^1$ function of $(t, \lambda)$ with values in a good space of measures under the hypotheses $K_\lambda(x, y) \le \psi(x)\,\psi(y)$ for some sub-linear function $\psi$, and $\int \psi^{4+\varepsilon}(x)\,\mu_0(dx) < \infty$, and that the derivative is the unique solution of a related equation.
Basic linear partial differential equations
Treves, Francois
Focusing on the archetypes of linear partial differential equations, this text for upper-level undergraduates and graduate students features most of the basic classical results. The methods, however, are decidedly nontraditional: in practically every instance, they tend toward a high level of abstraction. This approach recalls classical material to contemporary analysts in a language they can understand, as well as exploiting the field's wealth of examples as an introduction to modern theories.The four-part treatment covers the basic examples of linear partial differential equations and their
Solution of the Baxter equation
Janik, R.A.
We present a method of construction of a family of solutions of the Baxter equation arising in the Generalized Leading Logarithmic Approximation (GLLA) of the QCD pomeron. The details are given for the exchange of N = 2 reggeons, but everything can be generalized in a straightforward way to arbitrary N. A specific choice of solutions is shown to reproduce the correct energy levels for half-integral conformal weights. It is shown that Baxter's equation must be supplemented by an additional condition on the solution. (author)
Fundamentals of equations of state
Eliezer, Shalom; Hora, Heinrich
The equation of state was originally developed for ideal gases, and proved central to the development of early molecular and atomic physics. Increasingly sophisticated equations of state have been developed to take into account molecular interactions, quantization, relativistic effects, etc. Extreme conditions of matter are encountered both in nature and in the laboratory, for example in the centres of stars, in relativistic collisions of heavy nuclei, and in inertial confinement fusion (where a temperature of $10^9$ K and a pressure exceeding a billion atmospheres can be achieved). A sound knowledg
Nielsen number and differential equations
Andres Jan
In reply to a problem of Jean Leray (application of the Nielsen theory to differential equations), two main approaches are presented. The first is via Poincaré's translation operator, while the second is based on the Hammerstein-type solution operator. The applicability of various Nielsen theories is discussed with respect to several sorts of differential equations and inclusions. Links with the Sharkovskii-like theorems (a finite number of periodic solutions implies infinitely many subharmonics) are indicated, jointly with some further consequences, like the nontrivial structure of solutions of initial value problems. Some illustrating examples are supplied and open problems are formulated.
Applied analysis and differential equations
Cârj, Ovidiu
This volume contains refereed research articles written by experts in the field of applied analysis, differential equations and related topics. Well-known leading mathematicians worldwide and prominent young scientists cover a diverse range of topics, including the most exciting recent developments. A broad range of topics of recent interest are treated: existence, uniqueness, viability, asymptotic stability, viscosity solutions, controllability and numerical analysis for ODE, PDE and stochastic equations. The scope of the book is wide, ranging from pure mathematics to various applied fields such as classical mechanics, biomedicine, and population dynamics.
Sequent Calculus and Equational Programming
Nicolas Guenot
Proof assistants and programming languages based on type theories usually come in two flavours: one is based on the standard natural deduction presentation of type theory and involves eliminators, while the other provides a syntax in equational style. We show here that the equational approach corresponds to the use of a focused presentation of a type theory expressed as a sequent calculus. A typed functional language is presented, based on a sequent calculus, that we relate to the syntax and internal language of Agda. In particular, we discuss the use of patterns and case splittings, as well as rules implementing inductive reasoning and dependent products and sums.
Radar equations for modern radar
Barton, David K
Based on the classic Radar Range-Performance Analysis from 1980, this practical volume extends that work to ensure applicability of radar equations to the design and analysis of modern radars. This unique book helps you identify what information on the radar and its environment is needed to predict detection range. Moreover, it provides equations and data to improve the accuracy of range calculations. You find detailed information on propagation effects, methods of range calculation in environments that include clutter, jamming and thermal noise, as well as loss factors that reduce radar perfo
Equating accelerometer estimates among youth
Brazendale, Keith; Beets, Michael W; Bornstein, Daniel B
from one set of cutpoints into another. Bland-Altman plots illustrate the agreement between actual MVPA and predicted MVPA values. RESULTS: Across the total sample, mean MVPA ranged from 29.7 MVPA·min⁻¹ (Puyau) to 126.1 MVPA·min⁻¹ (Freedson 3 METs). Across conversion equations, median absolute...
Variational linear algebraic equations method
Moiseiwitsch, B.L.
A modification of the linear algebraic equations method is described which ensures a variational bound on the phase shifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)
Integrodifferential equation approach. Pt. 1
Oehm, W.; Sofianos, S.A.; Fiedeldey, H. (South Africa Univ., Pretoria, Dept. of Physics); Fabre de la Ripelle, M. (South Africa Univ., Pretoria, Dept. of Physics)
A single integrodifferential equation in two variables, valid for A nucleons interacting by pure Wigner forces, which has previously only been solved in the extreme and uncoupled adiabatic approximations is now solved exactly for three- and four-nucleon systems. The results are in good agreement with the values obtained for the binding energies by means of an empirical interpolation formula. This validates all our previous conclusions, in particular that the omission of higher (than two) order correlations in our four-body equation only produces a rather small underbinding. The integrodifferential equation approach (IDEA) is here also extended to spin-dependent forces of the Malfliet-Tjon type, resulting in two coupled integrodifferential equations in two variables. The exact solution and the interpolated adiabatic approximation are again in good agreement. The inclusion of the hypercentral part of the two-body interaction in the definition of the Faddeev-type components again leads to substantial improvement for fully local potentials, acting in all partial waves. (orig.)
A generalized advection dispersion equation
This paper examines a possible effect of uncertainties, variability or heterogeneity of any dynamic system when included in its evolution rule; the notion is illustrated with the advection dispersion equation, which describes the groundwater pollution model. An uncertain derivative is defined; some properties of …
Nonlocal higher order evolution equations
Rossi, Julio D.; Schö nlieb, Carola-Bibiane
In this article, we study the asymptotic behaviour of solutions to the nonlocal operator $u_t(x,t) = (-1)^{n-1}(J\ast\mathrm{Id} - 1)^n(u(x,t))$, $x \in \mathbb{R}^N$, which is the nonlocal analogue of the higher order local evolution equation $v_t = (-1)^{n-1}\Delta^n v$. We prove …
Vapor-droplet flow equations
Crowe, C.T.
General features of a vapor-droplet flow are discussed and the equations expressing the conservation of mass, momentum, and energy for the vapor, liquid, and mixture using the control volume approach are derived. The phenomenological laws describing the exchange of mass, momentum, and energy between phases are also reviewed. The results have application to development of water-dominated geothermal resources
… the equation in terms of rate theory … the said theory is said to be the harbinger of modern astrophysics … Parichay (An Introduction to the Universe), Tagore … where |e| is the magnitude of the electron's charge and E is the electric field intensity …
Saha equation in Rindler space
Sanchari De
… the flat local geometry is called the Rindler space. For an illustration, let us consider two reference … the local acceleration of the frame. To investigate the Saha equation in a uniformly accelerated … to the best of our knowledge, the study of the Saha equation in Rindler space has not been reported earlier.
Slave equations for spin models
Catterall, S.M.; Drummond, I.T.; Horgan, R.R.
We apply an accelerated Langevin algorithm to the simulation of continuous spin models on the lattice. In conjunction with the evolution equation for the spins we use slave equations to compute estimators for the connected correlation functions of the model. In situations for which the symmetry of the model is sufficiently strongly broken by an external field these estimators work well and yield a signal-to-noise ratio for the Green function at large time separations more favourable than that resulting from the standard method. With the restoration of symmetry, however, the slave equation estimators exhibit an intrinsic instability associated with the growth of a power law tail in the probability distributions for the measured quantities. Once this tail has grown sufficiently strong it results in a divergence of the variance of the estimator which then ceases to be useful for measurement purposes. The instability of the slave equation method in circumstances of weak symmetry breaking precludes its use in determining the mass gap in non-linear sigma models. (orig.)
Pendulum Motion and Differential Equations
Reid, Thomas F.; King, Stephen C.
A common example of real-world motion that can be modeled by a differential equation, and one easily understood by the student, is the simple pendulum. Simplifying assumptions are necessary for closed-form solutions to exist, and frequently there is little discussion of the impact if those assumptions are not met. This article presents a…
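To make the abstract's point concrete, here is a small illustration (ours, not from the article): the full pendulum equation θ'' = −(g/L) sin θ integrated numerically and compared with the closed-form small-angle solution θ(t) = θ₀ cos(√(g/L) t); all names and parameter values are assumptions for the sketch.

```python
# Hypothetical illustration: the full pendulum equation versus its
# small-angle closed-form solution, integrated with a hand-rolled RK4 step.
import math

def simulate_pendulum(theta0, g=9.81, L=1.0, dt=1e-3, t_end=10.0):
    """Integrate theta'' = -(g/L) sin(theta) with 4th-order Runge-Kutta."""
    def f(state):
        theta, omega = state
        return (omega, -(g / L) * math.sin(theta))

    state = (theta0, 0.0)          # released from rest at angle theta0
    t = 0.0
    while t < t_end:
        k1 = f(state)
        k2 = f((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]))
        k3 = f((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]))
        k4 = f((state[0] + dt * k3[0], state[1] + dt * k3[1]))
        state = (state[0] + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
                 state[1] + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
        t += dt
    return t, state[0]

# Small-angle model: theta(t) = theta0 * cos(sqrt(g/L) * t)
for theta0 in (0.1, 1.5):  # radians: one small, one large release angle
    t, theta = simulate_pendulum(theta0)
    linear = theta0 * math.cos(math.sqrt(9.81) * t)
    print(f"theta0={theta0}: nonlinear theta(10s)={theta:+.4f}, linearized={linear:+.4f}")
```

For the small release angle the two curves agree closely; for the large angle the linearized solution drifts badly out of phase, which is exactly the impact of the simplifying assumption the article discusses.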
Quasi-gas dynamic equations
Elizarova, Tatiana G
This book presents two interconnected mathematical models generalizing the Navier-Stokes system. The models, called the quasi-gas-dynamic and quasi-hydrodynamic equations, are then used as the basis of numerical methods solving gas- and fluid-dynamic problems.
Stability of Functional Differential Equations
Lemm, Jeffrey M
This book provides an introduction to the structure and stability properties of solutions of functional differential equations. Numerous examples of applications (such as feedback systems with aftereffect, two-reflector antennae, nuclear reactors, mathematical models in immunology, viscoelastic bodies, aeroelastic phenomena and so on) are considered in detail. The development is illustrated by numerous figures and tables.
Quantum adiabatic Markovian master equations
Albash, Tameem; Zanardi, Paolo; Boixo, Sergio; Lidar, Daniel A
We develop from first principles Markovian master equations suited for studying the time evolution of a system evolving adiabatically while coupled weakly to a thermal bath. We derive two sets of equations in the adiabatic limit, one using the rotating wave (secular) approximation that results in a master equation in Lindblad form, the other without the rotating wave approximation but not in Lindblad form. The two equations make markedly different predictions depending on whether or not the Lamb shift is included. Our analysis keeps track of the various time and energy scales associated with the various approximations we make, and thus allows for a systematic inclusion of higher order corrections, in particular beyond the adiabatic limit. We use our formalism to study the evolution of an Ising spin chain in a transverse field and coupled to a thermal bosonic bath, for which we identify four distinct evolution phases. While we do not expect this to be a generic feature, in one of these phases dissipation acts to increase the fidelity of the system state relative to the adiabatic ground state. (paper)
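As background to the Lindblad form mentioned in the abstract, the sketch below integrates a generic master equation in Lindblad form for a single qubit with dephasing. It is a minimal illustration with made-up Hamiltonian and rates, not the adiabatic equations derived in the paper.

```python
# Minimal sketch (assumed parameters): Euler integration of a Lindblad-form
# master equation d(rho)/dt = -i[H,rho] + gamma*(L rho L^+ - {L^+L, rho}/2).
import numpy as np

H = 0.5 * np.array([[1.0, 0.5], [0.5, -1.0]], dtype=complex)  # example Hamiltonian
L = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)        # dephasing operator (sigma_z)
gamma = 0.1                                                   # bath coupling rate

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)                # coherent part
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # start in |0><0|
dt, steps = 1e-3, 20000
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)

print("trace:", np.trace(rho).real)        # preserved by the Lindblad structure
print("populations:", np.diag(rho).real)   # coherences damped toward a mixed state
```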
Weak solutions of magma equations
Krishnan, E.V.
Periodic solutions in terms of Jacobian cosine elliptic functions have been obtained for a set of values of two physical parameters for the magma equation which do not reduce to solitary-wave solutions. Solitary-wave solutions were also obtained for another set of these parameters as an infinite-period limit of periodic solutions in terms of Weierstrass and Jacobian elliptic functions.
Li, Jing
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.
Solutions of Einstein's field equations
Tomonaga, Y [Utsunomiya Univ. (Japan). Faculty of Education]
In this paper the author investigates Einstein's field equations in the non-vacuum case and generalizes the Robertson-Walker solution by the three-dimensional Einstein spaces. In Section 2 the author briefly generalizes the dynamic space-time of G. Lemaître and A. Friedmann by a simple transformation.
Equations for formally real meadows
Bergstra, J.A.; Bethke, I.; Ponse, A.
We consider the signatures Σm = (0, 1, −, +, ·, ⁻¹) of meadows and (Σm, s) of signed meadows. We give two complete axiomatizations of the equational theories of the real numbers with respect to these signatures. In the first case, we extend the axiomatization of zero-totalized fields by a single axiom
Wave equation of hydrogen atom
Suwito.
The calculation of the energy levels of the hydrogen atom using the Bohr, Schroedinger and Dirac theories is reviewed. The result is compared with that obtained from the recently developed theory of infinite-component wave equations. The conclusion is that the latter theory describes the composite system better than the former. (author)
Transport equation and shock waves
Besnard, D.
A multi-group method is derived from a one-dimensional transport equation for the slowing down and spatial transport of energetic positive ions in a plasma. This method is used to calculate the behaviour of energetic charged particles in a non-homogeneous and non-stationary plasma, and the effect of the energy deposition of the particles on the heating of the plasma. For that purpose, an equation for the density of fast ions is obtained from the Fokker-Planck equation, and a closure condition for the second moment of this equation is deduced from phenomenological considerations. This leads to a numerical method, simple and very efficient, which does not require much computer storage. Two types of numerical results are obtained. First, results on the slowing down of 3.5 MeV alpha particles in a 50 keV plasma published by Corman et al. and Moses are compared with the results obtained with both our method and a Monte Carlo type method. Good agreement was obtained, even for the energy deposition on the ions of the plasma. Secondly, we have calculated the propagation of alpha particles heating a cold plasma. These results are in very good agreement with those given by an accurate Monte Carlo method, for both the thermal velocity and the energy deposition in the plasma
Structural equations in language learning
Moortgat, M.J.
In categorial systems with a fixed structural component, the learning problem comes down to finding the solution for a set of type-assignment equations. A hard-wired structural component is problematic if one wants to address issues of structural variation. Our starting point is a type-logical…
Fractional Diffusion Equations and Anomalous Diffusion
Evangelista, Luiz Roberto; Kaminski Lenzi, Ervin
Preface; 1. Mathematical preliminaries; 2. A survey of the fractional calculus; 3. From normal to anomalous diffusion; 4. Fractional diffusion equations: elementary applications; 5. Fractional diffusion equations: surface effects; 6. Fractional nonlinear diffusion equation; 7. Anomalous diffusion: anisotropic case; 8. Fractional Schrödinger equations; 9. Anomalous diffusion and impedance spectroscopy; 10. The Poisson–Nernst–Planck anomalous (PNPA) models; References; Index.
Painleve test and discrete Boltzmann equations
Euler, N.; Steeb, W.H.
The Painleve test for various discrete Boltzmann equations is performed. The connection with integrability is discussed. Furthermore, the Lie symmetry vector fields are derived and a group-theoretical reduction of the discrete Boltzmann equations to ordinary differential equations is performed. Lie-Backlund transformations are obtained by performing the Painleve analysis for the ordinary differential equations. 16 refs
Development of kinetics equations from the Boltzmann equation
Plas, R.
The author reports a study on kinetics equations for a reactor. He uses the conventional form of these equations but with a dynamic multiplication factor; thus, constants related to delayed neutrons are not modified by efficiency factors. The author first describes the theoretical kinetic operation of a reactor and develops the associated equations. He then reports the development of equations for multiplication factors.
Algebraic entropy for differential-delay equations
Viallet, Claude M.
We extend the definition of algebraic entropy to a class of differential-delay equations. The vanishing of the entropy, as a structural property of an equation, signals its integrability. We suggest a simple way to produce differential-delay equations with vanishing entropy from known integrable differential-difference equations.
Invariant imbedding equations for linear scattering problems
Apresyan, L.
A general form of the invariant imbedding equations is investigated for the linear problem of scattering by a bounded scattering volume. The conditions for the derivability of such equations are described. It is noted that the possibility of the explicit representation of these equations for a sphere and for a layer involves the separation of variables in the unperturbed wave equation
The AGL equation from the dipole picture
Gay Ducati, M.B.; Goncalves, V.P.
The AGL equation includes all multiple pomeron exchanges in the double logarithmic approximation (DLA) limit, leading to a unitarized gluon distribution in the small x regime. This equation was originally obtained using the Glauber-Mueller approach. We demonstrate in this paper that the AGL equation and, consequently, the GLR equation, can also be obtained from the dipole picture in the double logarithmic limit, using an evolution equation, recently proposed, which includes all multiple pomeron exchanges in the leading logarithmic approximation. Our conclusion is that the AGL equation is a good candidate for a unitarized evolution equation at small x in the DLA limit
Thermoviscous Model Equations in Nonlinear Acoustics
Rasmussen, Anders Rønne
Four nonlinear acoustical wave equations that apply to both perfect gases and arbitrary fluids with a quadratic equation of state are studied. Shock and rarefaction wave solutions to the equations are studied. In order to assess the accuracy of the wave equations, their solutions are compared to solutions of the basic equations from which the wave equations are derived. A straightforward weakly nonlinear equation is the most accurate for shock modeling. A higher order wave equation is the most accurate for modeling of smooth disturbances. Investigations of the linear stability properties of solutions to the wave equations reveal that the solutions may become unstable. Such instabilities are not found in the basic equations. Interacting shocks and standing shocks are investigated.
Manhattan equation for the operational amplifier
Mishonov, Todor M.; Danchev, Victor I.; Petkov, Emil G.; Gourev, Vassil N.; Dimitrova, Iglika M.; Varonov, Albert M.
A differential equation relating the voltage at the output of an operational amplifier $U_0$ and the difference between the input voltages ($U_{+}$ and $U_{-}$) has been derived. The crossover frequency $f_0$ is a parameter in this operational amplifier master equation. The formulas derived as a consequence of this equation find applications in thousands of specifications for electronic devices but as far as we know, the equation has never been published. Actually, the master equation of oper...
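Since the abstract does not reproduce the paper's master equation, the sketch below uses the common single-pole textbook idealization dU₀/dt = 2π f₀ (U₊ − U₋), which likewise has the crossover frequency f₀ as its only parameter; the component values, and the assumption that this idealization matches the paper's equation, are ours.

```python
# Hedged sketch (NOT the paper's derived equation): step response of a
# non-inverting amplifier under the single-pole op-amp idealization
# dU0/dt = 2*pi*f0*(U_plus - U_minus), with feedback fraction beta.
import math

f0 = 1e6            # crossover (unity-gain) frequency, Hz -- example value
beta = 0.1          # feedback fraction -> ideal closed-loop gain 1/beta = 10
U_in, U0 = 1.0, 0.0 # 1 V input step; output initially at rest
dt = 1e-9           # integration time step, s

t = 0.0
while t < 5e-6:
    U_minus = beta * U0                          # inverting input from the divider
    U0 += 2 * math.pi * f0 * (U_in - U_minus) * dt
    t += dt

# First-order settling toward 1/beta with time constant 1/(2*pi*f0*beta)
print(f"U0 after 5 us: {U0:.3f} V (ideal closed-loop value: {1/beta:.1f} V)")
```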
Reduced kinetic equations: An influence functional approach
Wio, H.S.
The author discusses a scheme for obtaining reduced descriptions of multivariate kinetic equations based on the 'influence functional' method of Feynman. It is applied to the case of Fokker-Planck equations, showing the form that results for the reduced equation. The possibility of a Markovian or non-Markovian reduced description is discussed. As a particular example, the reduction of the Kramers equation to the Smoluchowski equation in the limit of high friction is also discussed
Dynamical equations for the optical potential
Kowalski, K.L.
Dynamical equations for the optical potential are obtained starting from a wide class of N-particle equations. This is done with arbitrary multiparticle interactions to allow adaptation to few-body models of nuclear reactions and including all effects of nucleon identity. Earlier forms of the optical potential equations are obtained as special cases. Particular emphasis is placed upon obtaining dynamical equations for the optical potential from the equations of Kouri, Levin, and Tobocman including all effects of particle identity
Group foliation of finite difference equations
Thompson, Robert; Valiquette, Francis
Using the theory of equivariant moving frames, a group foliation method for invariant finite difference equations is developed. This method is analogous to the group foliation of differential equations and uses the symmetry group of the equation to decompose the solution process into two steps, called resolving and reconstruction. Our constructions are performed algorithmically and symbolically by making use of discrete recurrence relations among joint invariants. Applications to invariant finite difference equations that approximate differential equations are given.
An inverse problem in a parabolic equation
Zhilin Li
In this paper, an inverse problem in a parabolic equation is studied. An unknown function in the equation is related to two integral equations in terms of the heat kernel. One of the integral equations is well-posed while the other is ill-posed. A regularization approach for constructing an approximate solution to the ill-posed integral equation is proposed. Theoretical analysis and numerical experiments are provided to support the method.
Systems of Inhomogeneous Linear Equations
Scherer, Philipp O. J.
Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
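The "specialized Gaussian elimination method" for tridiagonal systems mentioned above is usually known as the Thomas algorithm; here is a minimal sketch (our own function names, not taken from the book), which runs in O(n) rather than the O(n³) of dense elimination.

```python
# Sketch of the Thomas algorithm for tridiagonal systems A x = d.
def thomas(a, b, c, d):
    """a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the (-1, 2, -1) matrix arising from cubic splines and
# numerical second derivatives, as mentioned in the abstract.
n = 5
x = thomas([-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1), [1.0] * n)
print(x)   # symmetric solution [2.5, 4.0, 4.5, 4.0, 2.5]
```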
MAGNETOHYDRODYNAMIC EQUATIONS (MHD) GENERATION CODE
Francisco Frutos Alfaro
A program to generate codes in Fortran and C of the full magnetohydrodynamic equations is shown. The program uses the free computer algebra system software REDUCE. This software has a package called EXCALC, which is an exterior calculus program. The advantage of this program is that it can be modified to include another complex metric or spacetime. The output of this program is modified by means of a LINUX script which creates a new REDUCE program to manipulate the magnetohydrodynamic equations to obtain a code that can be used as a seed for a magnetohydrodynamic code for numerical applications. As an example, we present part of the output of our programs for Cartesian coordinates and how to do the discretization.
Combinatorics of Generalized Bethe Equations
Kozlowski, Karol K.; Sklyanin, Evgeny K.
A generalization of the Bethe ansatz equations is studied, where a scalar two-particle S-matrix has several zeroes and poles in the complex plane, as opposed to the ordinary single pole/zero case. For the repulsive case (no complex roots), the main result is the enumeration of all distinct solutions to the Bethe equations in terms of the Fuss-Catalan numbers. Two new combinatorial interpretations of the Fuss-Catalan and related numbers are obtained. On the one hand, they count regular orbits of the permutation group in certain factor modules over {{Z}^M}, and on the other hand, they count integer points in certain M-dimensional polytopes.
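For reference, the Fuss-Catalan numbers that enumerate the solutions can be computed from the standard formula FC_m(p, r) = r/(mp + r) · C(mp + r, m); the parameter names below are the conventional ones, not necessarily the paper's.

```python
# Fuss-Catalan numbers via the standard closed form (always an integer).
from math import comb

def fuss_catalan(m, p, r):
    return r * comb(m * p + r, m) // (m * p + r)

# p = 2, r = 1 recovers the ordinary Catalan numbers 1, 1, 2, 5, 14, 42, ...
print([fuss_catalan(m, 2, 1) for m in range(6)])
# p = 3, r = 1 counts ternary trees: 1, 1, 3, 12, 55, 273, ...
print([fuss_catalan(m, 3, 1) for m in range(6)])
```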
Rossi, Julio D.
In this article, we study the asymptotic behaviour of solutions to the nonlocal operator $u_t(x,t) = (-1)^{n-1}(J\ast\mathrm{Id} - 1)^n(u(x,t))$, $x \in \mathbb{R}^N$, which is the nonlocal analogue of the higher order local evolution equation $v_t = (-1)^{n-1}\Delta^n v$. We prove that the solutions of the nonlocal problem converge to the solution of the higher order problem with the right-hand side given by powers of the Laplacian when the kernel J is rescaled in an appropriate way. Moreover, we prove that solutions to both equations have the same asymptotic decay rate as t goes to infinity. © 2010 Taylor & Francis.
Numerical Solution of Parabolic Equations
Østerby, Ole
These lecture notes are designed for a one-semester course on finite-difference methods for parabolic equations. These equations, which traditionally are used for describing diffusion and heat-conduction problems in Geology, Physics, and Chemistry, have recently found applications in Finance Theory. … How do boundary value approximations affect the overall order of the method? Knowledge of a reliable order and error estimate enables us to determine (near-)optimal step sizes to meet a prescribed error tolerance, and possibly to extrapolate to get (higher order and) better accuracy at a minimal expense. Problems in two space dimensions are effectively handled using the Alternating Direction Implicit (ADI) technique. We present a systematic way of incorporating inhomogeneous terms and derivative boundary conditions in ADI methods as well as mixed derivative terms.
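As a minimal sketch of the simplest scheme such notes treat, here is the explicit finite-difference method for u_t = u_xx, including the stability restriction dt ≤ dx²/2 whose severity motivates implicit and ADI methods; the grid sizes and test problem are our choices, not the notes'.

```python
# Explicit (FTCS) finite differences for u_t = u_xx on [0,1], u(0)=u(1)=0.
# Stable only if dt <= dx^2 / 2 -- the classical restriction.
import math

nx = 51
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx                  # safely below the stability limit
u = [math.sin(math.pi * i * dx) for i in range(nx)]  # initial condition

t = 0.0
while t < 0.1:
    u_new = [0.0] * nx
    for i in range(1, nx - 1):      # interior points; boundaries stay 0
        u_new[i] = u[i] + dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
    u, t = u_new, t + dt

# Exact solution for this initial condition: exp(-pi^2 t) * sin(pi x)
mid_exact = math.exp(-math.pi**2 * t) * math.sin(math.pi * 0.5)
print(f"u(0.5, {t:.3f}) numeric={u[nx // 2]:.5f}, exact={mid_exact:.5f}")
```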
Chiral equations and fiber bundles
Mateos, T.; Becerril, R.
Using the hypothesis $g = g(\lambda^i)$, the chiral equations $(\rho g_{,z} g^{-1})_{,\bar z} + (\rho g_{,\bar z} g^{-1})_{,z} = 0$ are reduced to a Killing equation of a p-dimensional space $V_p$, the $\lambda^i = \lambda^i(z, \bar z)$ being 'geodesic' parameters of $V_p$. Supposing that g belongs to a Lie group G, one writes the corresponding Lie algebra elements F in terms of the Killing vectors of $V_p$ and the generators of the subalgebra of F of dimension d = dimension of the Killing space. The elements of the subalgebras belong to equivalence classes which in the respective group form a principal fiber bundle. This is used to integrate the matrix g in terms of the complex variables z and $\bar z$. (author)
The equations icons of knowledge
Bais, Sander
For thousands of years mankind has tried to understand nature. Exploring the world on all scales with instruments of ever more ingenuity, we have been able to unravel some of the great mysteries that surround us. While collecting an overwhelming multitude of observational facts, we discovered fundamental laws that govern the structure and evolution of physical reality. We know that nature speaks to us in the language of mathematics. In this language most of our basic understanding of the physical world can be expressed in an unambiguous and concise way. The most artificial language turns out to be the most natural of all. The laws of nature correspond to equations. These equations are the icons of knowledge that mark crucial turning points in our thinking about the world we happen to live in. They form the symbolic representation of most of what we know, and as such constitute an important and robust part of our culture.
Implementing Parquet equations using HPX
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
Handbook of structural equation modeling
Hoyle, Rick H
The first comprehensive structural equation modeling (SEM) handbook, this accessible volume presents both the mechanics of SEM and specific SEM strategies and applications. The editor, contributors, and editorial advisory board are leading methodologists who have organized the book to move from simpler material to more statistically complex modeling approaches. Sections cover the foundations of SEM; statistical underpinnings, from assumptions to model modifications; steps in implementation, from data preparation through writing the SEM report; and basic and advanced applications, inclu
The uranium equation in 1982
Bonny, J.; Fulton, M.
The subject is discussed under the headings: comparison of world nuclear generating capacity forecasts; world uranium requirements; comparison of uranium production capability forecasts; supply and demand situation in 1990 and 1995; a perspective on the uranium equation (economic factors; development lead times as a factor affecting market stability; the influence of uncertainty; the uranium market in perspective; the uranium market in 1995). (U.K.)
Differential equations in airplane mechanics
Carleman, M T
In the following report, we will first draw some conclusions of purely theoretical interest from the general equations of motion. At the end, we will consider the motion of an airplane with the engine dead and with the assumption that the angle of attack remains constant. Thus we arrive at a simple result which can be rendered practically utilizable for determining the trajectory of an airplane descending at a constant steering angle.
Integration of Chandrasekhar's integral equation
Tanaka, Tasuku
We solve Chandrasekhar's integral equation for radiative transfer in the plane-parallel atmosphere by iterative integration. The primary thrust in radiative transfer has been to solve the forward problem, i.e., to evaluate the radiance, given the optical thickness and the scattering phase function. In the area of satellite remote sensing, our problem is the inverse problem: to retrieve the surface reflectance and the optical thickness of the atmosphere from the radiance measured by satellites. In order to retrieve the optical thickness and the surface reflectance from the radiance at the top of the atmosphere (TOA), we should express the radiance at TOA 'explicitly' in the optical thickness and the surface reflectance. Chandrasekhar formalized radiative transfer in the plane-parallel atmosphere in a simultaneous integral equation, and he obtained the second approximation. Since then no higher approximation has been reported. In this paper, we obtain the third approximation of the scattering function. We integrate functions derived from the second approximation in the integral interval from 1 to ∞ of the inverse of the cosine of zenith angles. We can obtain the indefinite integral rather easily in the form of a series expansion. However, the integrals at the upper limit, ∞, are not yet known to us. We can assess the converged values of those series expansions at ∞ through calculus. For integration, we choose coupling pairs to avoid unnecessary terms in the outcome of the integral and discover that the simultaneous integral equation can be reduced to a single integral equation. Through algebraic calculation, we obtain the third approximation as a polynomial of the third degree in the atmospheric optical thickness
Equation of State Project Overview
Crockett, Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]
A general overview of the Equation of State (EOS) Project will be presented. The goal is to provide the audience with an introduction to what our more advanced methods entail (DFT, QMD, etc.) and how these models are being utilized to better constrain the thermodynamic models. These models substantially reduce our regions of interpolation between the various thermodynamic limits. I will also present a variety of examples of recent EOS work.
Simple equation method for nonlinear partial differential equations and its applications
Taher A. Nofal
In this article, we focus on the exact solution of some nonlinear partial differential equations (NLPDEs), such as the Kadomtsev–Petviashvili (KP) equation, the (2+1)-dimensional breaking soliton equation and the modified generalized Vakhnenko equation, by using the simple equation method. In the simple equation method the trial condition is the Bernoulli equation or the Riccati equation. It has been shown that the method provides a powerful mathematical tool for solving nonlinear wave equations in mathematical physics and engineering problems.
Effective Schroedinger equations on submanifolds
Wachsmuth, Jakob
In this thesis the time dependent Schroedinger equation is considered on a Riemannian manifold A with a potential that localizes a certain class of states close to a fixed submanifold C, the constraint manifold. When the potential is scaled in the directions normal to C by a small parameter epsilon, the solutions concentrate in an epsilon-neighborhood of the submanifold. An effective Schroedinger equation on the submanifold C is derived and it is shown that its solutions, suitably lifted to A, approximate the solutions of the original equation on A up to errors of order $\epsilon^3 |t|$ at time t. Furthermore, it is proved that, under reasonable conditions, the eigenvalues of the corresponding Hamiltonians below a certain energy coincide up to errors of order $\epsilon^3$. These results hold in the situation where tangential and normal energies are of the same order, and where exchange between normal and tangential energies occurs. In earlier results tangential energies were assumed to be small compared to normal energies, and rather restrictive assumptions were needed to ensure that the separation of energies is maintained during the time evolution. The most important consequence of this thesis is that now constraining potentials that change their shape along the submanifold can be treated, which is the typical situation in applications like molecular dynamics and quantum waveguides.
Deriving the bond pricing equation
Kožul Nataša
Given the recent focus on the Eurozone debt crisis and the credit rating downgrade not only of US debt but of that of other countries and many major UK banking institutions, this paper aims to explain the concept of bond yield, its different measures and the bond pricing equation. Yields on capital market instruments are rarely quoted on the same basis, which makes direct comparison between them as investment choices impossible. Some debt instruments are quoted on a discount basis, whilst coupon-bearing ones accrue interest differently, offer different compounding opportunities, have different coupon payment frequencies, and manage non-business-day maturity dates differently. Moreover, rules governing debt vary across countries, markets and currencies, making yield calculation and comparison a rather complex issue. Thus, some fundamental concepts applicable to debt instrument yield measurement, with a focus on the bond equation, are presented here. In addition, the bond equation expressed in annuity form and used to apply the Newton-Raphson algorithm to derive the true bond yield is also shown.
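A sketch of the Newton-Raphson yield calculation the paper refers to: find the yield y that makes the present value of the bond's cash flows equal its price. A plain annual-coupon bond, our function names, and a numerical derivative are assumed for simplicity.

```python
# Yield to maturity by Newton-Raphson on the bond pricing equation.
def bond_price(y, face, coupon, n):
    """Price of a bond paying `coupon` annually for n years plus face at maturity."""
    return sum(coupon / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

def ytm(price, face, coupon, n, y0=0.05, tol=1e-10):
    y = y0
    for _ in range(100):
        f = bond_price(y, face, coupon, n) - price
        h = 1e-7                     # central-difference derivative of price in y
        fprime = (bond_price(y + h, face, coupon, n)
                  - bond_price(y - h, face, coupon, n)) / (2 * h)
        step = f / fprime
        y -= step
        if abs(step) < tol:
            break
    return y

# A 5-year, 6% annual-coupon bond trading at 95 per 100 face value:
y = ytm(price=95.0, face=100.0, coupon=6.0, n=5)
print(f"yield to maturity: {y:.4%}")   # a little above the 6% coupon, as expected
```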
Wave equations in higher dimensions
Dong, Shi-Hai
Higher dimensional theories have attracted much attention because they make it possible to reduce much of physics in a concise, elegant fashion that unifies the two great theories of the 20th century: Quantum Theory and Relativity. This book provides an elementary description of quantum wave equations in higher dimensions at an advanced level so as to put all current mathematical and physical concepts and techniques at the reader's disposal. A comprehensive description of quantum wave equations in higher dimensions and their broad range of applications in quantum mechanics is provided, which complements the traditional coverage found in the existing quantum mechanics textbooks and gives scientists a fresh outlook on quantum systems in all branches of physics. In Parts I and II the basic properties of the SO(n) group are reviewed and basic theories and techniques related to wave equations in higher dimensions are introduced. Parts III and IV cover important quantum systems in the framework of non-relativisti...
Geometric Implications of Maxwell's Equations
Smith, Felix T.
Maxwell's synthesis of the varied results of the accumulated knowledge of electricity and magnetism, based largely on the searching insights of Faraday, still provides new issues to explore. A case in point is a well recognized anomaly in the Maxwell equations: the laws of electricity and magnetism require two 3-vector and two scalar equations, but only six dependent variables are available to be their solutions, the 3-vectors E and B. This leaves an apparent redundancy of two degrees of freedom (J. Rosen, AJP 48, 1071 (1980); Jiang, Wu, Povinelli, J. Comp. Phys. 125, 104 (1996)). The observed self-consistency of the eight equations suggests that they contain additional information. This can be sought as a previously unnoticed constraint connecting the space and time variables, r and t. This constraint can be identified. It distorts the otherwise Euclidean 3-space of r with the extremely slight, time-dependent curvature $k(t) = R_{\mathrm{curv}}^{-2}(t)$ of the 3-space of a hypersphere whose radius has the time dependence $dR_{\mathrm{curv}}/dt = \pm c$ nonrelativistically, or $dR_{\mathrm{curv}}^{\mathrm{Lor}}/dt = \pm ic$ relativistically. The time dependence is exactly that of the Hubble expansion. Implications of this identification will be explored.
Five-dimensional Monopole Equation with Hedge-Hog Ansatz and Abel's Differential Equation
Kihara, Hironobu
We review the generalized monopole in the five-dimensional Euclidean space. A numerical solution with the Hedge-Hog ansatz is studied. The Bogomol'nyi equation becomes a second order autonomous non-linear differential equation. The equation can be translated into Abel's differential equation of the second kind and is an algebraic differential equation.
Partial differential equations of mathematical physics and integral equations
Guenther, Ronald B
This book was written to help mathematics students and those in the physical sciences learn modern mathematical techniques for setting up and analyzing problems. The mathematics used is rigorous, but not overwhelming, while the authors carefully model physical situations, emphasizing feedback among a beginning model, physical experiments, mathematical predictions, and the subsequent refinement and reevaluation of the physical model itself. Chapter 1 begins with a discussion of various physical problems and equations that play a central role in applications. The following chapters take up the t
Handbook of differential equations stationary partial differential equations
Chipot, Michel
This handbook is volume III in a series devoted to stationary partial differential equations. Like volumes I and II, it is a collection of self-contained, state-of-the-art surveys written by well-known experts in the field. The topics covered by this handbook include singular and higher order equations, problems near criticality, problems with anisotropic nonlinearities, the dam problem, Γ-convergence and Schauder-type estimates. These surveys will be useful for both beginners and experts and speed up the progress of corresponding (rapidly developing and fascinating) areas of mathematics.
Partial differential equations for scientists and engineers
Farlow, Stanley J
Most physical phenomena, whether in the domain of fluid dynamics, electricity, magnetism, mechanics, optics, or heat flow, can be described in general by partial differential equations. Indeed, such equations are crucial to mathematical physics. Although simplifications can be made that reduce these equations to ordinary differential equations, nevertheless the complete description of physical systems resides in the general area of partial differential equations.This highly useful text shows the reader how to formulate a partial differential equation from the physical problem (constructing th
Semilinear Schrödinger equations
Cazenave, Thierry
The nonlinear Schrödinger equation has received a great deal of attention from mathematicians, in particular because of its applications to nonlinear optics. It is also a good model dispersive equation, since it is often technically simpler than other dispersive equations, such as the wave or Korteweg-de Vries equation. Particularly useful tools in studying the nonlinear Schrödinger equation are energy and Strichartz's estimates. This book presents various mathematical aspects of the nonlinear Schrödinger equation. It examines both problems of local nature (local existence of solutions, unique
Functional Fourier transforms and the loop equation
Bershadskii, M.A.; Vaisburd, I.D.; Migdal, A.A.
The Migdal-Makeenko momentum-space loop equation is investigated. This equation is derived from the ordinary loop equation by taking the Fourier transform of the Wilson functional. A perturbation theory is constructed for the new equation and it is proved that the action of the loop operator is determined by vertex functions which coincide with those of the previous equation. It is shown how the ghost loop arises in direct iterations of the momentum-space equation with respect to the coupling constant. A simple example is used to illustrate the mechanism of appearance of an integration in the interior loops in transition to observables
International Workshop on Elliptic and Parabolic Equations
Schrohe, Elmar; Seiler, Jörg; Walker, Christoph
This volume covers the latest research on elliptic and parabolic equations and originates from the international Workshop on Elliptic and Parabolic Equations, held September 10-12, 2013 at the Leibniz Universität Hannover. It represents a collection of refereed research papers and survey articles written by eminent scientists on advances in different fields of elliptic and parabolic partial differential equations, including singular Riemannian manifolds, spectral analysis on manifolds, nonlinear dispersive equations, Brownian motion and kernel estimates, Euler equations, porous medium type equations, pseudodifferential calculus, free boundary problems, and bifurcation analysis.
A generalization of the simplest equation method and its application to (3+1)-dimensional KP equation and generalized Fisher equation
Zhao, Zhonglong; Zhang, Yufeng; Han, Zhong; Rui, Wenjuan
In this paper, the simplest equation method is used to construct exact traveling wave solutions of the (3+1)-dimensional KP equation and the generalized Fisher equation. We summarize the main steps of the simplest equation method. The Bernoulli and Riccati equations are used as simplest equations. This method is straightforward and concise, and it can be applied to other nonlinear partial differential equations
Algorithm for research of mathematical physics equations symmetries. Symmetries of the free Schroedinger equation
Kotel'nikov, G.A.
An algorithm is proposed for investigating the symmetries of mathematical physics equations. The application of this algorithm to the Schroedinger equation made it possible to establish that, in addition to its known symmetries, the Schroedinger equation also possesses relativistic symmetry
On the Inclusion of Difference Equation Problems and Z Transform Methods in Sophomore Differential Equation Classes
Savoye, Philippe
In recent years, I started covering difference equations and z transform methods in my introductory differential equations course. This allowed my students to extend the "classical" methods for (ordinary differential equation) ODE's to discrete time problems arising in many applications.
Reduction of lattice equations to the Painlevé equations: PIV and PV
Nakazono, Nobutaka
In this paper, we construct a new relation between Adler-Bobenko-Suris equations and Painlevé equations. Moreover, using this connection we construct the difference-differential Lax representations of the fourth and fifth Painlevé equations.
New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework
Chen, Haiwen; Holland, Paul
In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, which we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…
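For orientation only: the classical linear observed-score equating that the paper's curvilinear method generalizes maps scores on form X onto the scale of form Y by matching means and standard deviations. The toy sketch below shows that baseline transformation, not the Levine or kernel machinery itself; the data are invented.

```python
# Classical linear (mean-sigma) observed-score equating: l(x) = mu_Y + (s_Y/s_X)(x - mu_X).
from statistics import mean, pstdev

def linear_equate(x_scores, y_scores):
    mx, sx = mean(x_scores), pstdev(x_scores)
    my, sy = mean(y_scores), pstdev(y_scores)
    return lambda x: my + (sy / sx) * (x - mx)   # matches first two moments

# Toy data: form X is a bit harder (lower mean) than form Y.
x_scores = [10, 12, 14, 15, 18, 20, 22, 25]
y_scores = [13, 15, 16, 18, 20, 23, 25, 28]
eq = linear_equate(x_scores, y_scores)
print(f"an X score of 17 equates to a Y score of {eq(17):.2f}")
```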
Ising models and soliton equations
Perk, J.H.H.; Au-Yang, H.
Several new results for the critical point of correlation functions of the Hirota equation are derived within the two-dimensional Ising model. The recent success of the conformal-invariance approach in the determination of a critical two-spin correlation function is analyzed. The two-spin correlation function is predicted to be rotationally invariant and to decay with a power law in this approach. In the approach suggested here, systematic corrections due to the underlying lattice breaking the rotational invariance are obtained
Linearized gyro-kinetic equation
Catto, P.J.; Tsang, K.T.
An ordering of the linearized Fokker-Planck equation is performed in which gyroradius corrections are retained to lowest order and the radial dependence appropriate for sheared magnetic fields is treated without resorting to a WKB technique. This description is shown to be necessary to obtain the proper radial dependence when the product of the poloidal wavenumber and the gyroradius is large ($k\rho \gg 1$). A like-particle collision operator valid for arbitrary $k\rho$ has also been derived. In addition, neoclassical, drift, finite β (plasma pressure/magnetic pressure), and unperturbed toroidal electric field modifications are treated
Generalized Ordinary Differential Equation Models.
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.
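As a generic illustration of the underlying idea, not the GODE estimator itself, the sketch below recovers a rate constant in y' = −ky from noisy discrete observations by least squares; the data, the use of the closed-form solution, and the golden-section minimizer are all our simplifying choices.

```python
# Least-squares recovery of k in y' = -k*y from noisy discrete data --
# a toy version of the measurement-error setting the GODE framework formalizes.
import math
import random

random.seed(0)
k_true, y0 = 0.7, 5.0
times = [0.5 * i for i in range(1, 11)]
data = [y0 * math.exp(-k_true * t) + random.gauss(0, 0.05) for t in times]

def sse(k):
    """Sum of squared errors between the model solution and the observations."""
    return sum((y0 * math.exp(-k * t) - y) ** 2 for t, y in zip(times, data))

# Crude 1-D minimization by golden-section search over a bracket for k.
lo, hi = 0.01, 3.0
phi = (math.sqrt(5) - 1) / 2
for _ in range(60):
    a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
    if sse(a) < sse(b):
        hi = b
    else:
        lo = a

print(f"estimated k = {(lo + hi) / 2:.4f} (true value {k_true})")
```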
BMN correlators by loop equations
Eynard, Bertrand; Kristjansen, Charlotte
In the BMN approach to N=4 SYM a large class of correlators of interest are expressible in terms of expectation values of traces of words in a zero-dimensional gaussian complex matrix model. We develop a loop-equation based, analytic strategy for evaluating such expectation values to any order in the genus expansion. We reproduce the expectation values which were needed for the calculation of the one-loop, genus one correction to the anomalous dimension of BMN-operators and which were earlier obtained by combinatorial means. Furthermore, we present the expectation values needed for the calculation of the one-loop, genus two correction. (author)
Differential Equations and Computational Simulations
… given in (6), (7) in Taylor series of ε. Equating coefficients of the same power of ε on both sides of the equation, we obtain a sequence of linear boundary value … fields; 3) structural instability and block stability of divergence-free vector fields on 2D compact manifolds with nonzero genus; and 4) structural … circle bands. Definition 3.1: Let N be a compact manifold without boundary and with genus k > 0. A closed domain Ω ⊂ N is called a pseudo-manifold …
Introduction to partial differential equations with applications
Zachmanoglou, E C
This text explores the essentials of partial differential equations as applied to engineering and the physical sciences. Discusses ordinary differential equations, integral curves and surfaces of vector fields, the Cauchy-Kovalevsky theory, more. Problems and answers.
Integrable discretizations of the short pulse equation
Feng Baofeng; Maruno, Ken-ichi; Ohta, Yasuhiro
In this paper, we propose integrable semi-discrete and full-discrete analogues of the short pulse (SP) equation. The key construction is the bilinear form and determinant structure of solutions of the SP equation. We also give the determinant formulas of N-soliton solutions of the semi-discrete and full-discrete analogues of the SP equations, from which the multi-loop and multi-breather solutions can be generated. In the continuous limit, the full-discrete SP equation converges to the semi-discrete SP equation, and then to the continuous SP equation. Based on the semi-discrete SP equation, an integrable numerical scheme, i.e. a self-adaptive moving mesh scheme, is proposed and used for the numerical computation of the short pulse equation.
Random walk and the heat equation
Lawler, Gregory F
The heat equation can be derived by averaging over a very large number of particles. Traditionally, the resulting PDE is studied as a deterministic equation, an approach that has brought many significant results and a deep understanding of the equation and its solutions. By studying the heat equation by considering the individual random particles, however, one gains further intuition into the problem. While this is now standard for many researchers, this approach is generally not presented at the undergraduate level. In this book, Lawler introduces the heat equation and the closely related notion of harmonic functions from a probabilistic perspective. The theme of the first two chapters of the book is the relationship between random walks and the heat equation. The first chapter discusses the discrete case, random walk and the heat equation on the integer lattice; and the second chapter discusses the continuous case, Brownian motion and the usual heat equation. Relationships are shown between the two. For exa...
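In the spirit of the book's probabilistic viewpoint, a discrete harmonic function on {0, …, N} (the stationary heat equation with fixed boundary values) can be estimated by averaging the boundary values hit by random walks; the boundary data and trial counts below are our own toy choices.

```python
# Monte Carlo solution of the discrete Dirichlet problem on {0,...,N}:
# the value at a site equals the expected boundary value hit by a
# simple symmetric random walk started there (gambler's ruin).
import random

random.seed(1)
N = 10
boundary = {0: 0.0, N: 1.0}        # u(0)=0, u(N)=1; exact harmonic answer: x/N

def walk_value(start, trials=20000):
    total = 0.0
    for _ in range(trials):
        x = start
        while x not in boundary:
            x += random.choice((-1, 1))
        total += boundary[x]
    return total / trials

for x in (2, 5, 8):
    print(f"u({x}) ~ {walk_value(x):.3f} (exact {x / N})")
```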
Oscillations of first order difference equations
Similarly, if $y_n < 0$ for $n \ge N$, then we may show that … From Theorem 2 it follows that every solution of the equation oscillates. In particular, … [2] Hartman P, Difference equations: Disconjugacy, principal solutions, Green's functions, complete …
OSCILLATION OF NONLINEAR DELAY DIFFERENCE EQUATIONS
This paper deals with the oscillatory properties of a class of nonlinear difference equations with several delays. Sufficient criteria in the form of an infinite sum for the equations to be oscillatory are obtained.
OSCILLATION CRITERIA FOR FORCED SUPERLINEAR DIFFERENCE EQUATIONS
Using Riccati transformation techniques, some oscillation criteria for the forced second-order superlinear difference equations are established. These criteria are discrete analogues of the criteria for differential equations proposed by Yan.
EXACT TRAVELLING WAVE SOLUTIONS TO BBM EQUATION
Abundant new travelling wave solutions to the BBM (Benjamin–Bona–Mahony) equation are obtained by the generalized Jacobian elliptic function method. This method can be applied to other nonlinear evolution equations.
Time-delay equation governing electron motion
Cohn, J.
A previously proposed differential-difference equation governing the motion of the classical radiating electron is considered further. A set of three assumptions is offered, under which the proposed equation yields asymptotically stable acceleration
dimensional Nizhnik–Novikov–Veselov equations
… order differential equations with modified Riemann–Liouville derivatives into integer-order differential equations … encountered in a variety of scientific and engineering fields … devoted to the advanced calculus can be easily applied.
Linear superposition solutions to nonlinear wave equations
Liu Yu
The solutions to a linear wave equation satisfy the principle of superposition, i.e., the linear superposition of two or more known solutions is still a solution of the linear wave equation. We show in this article that many nonlinear wave equations possess exact traveling wave solutions involving hyperbolic, trigonometric, and exponential functions, and that suitable linear combinations of these known solutions can also constitute linear superposition solutions to some nonlinear wave equations with special structural characteristics. The linear superposition solutions to the generalized KdV equation K(2,2,1), the Olver water wave equation, and the K(n,n) equation are given. The structural characteristic of the nonlinear wave equations having linear superposition solutions is analyzed, and the reason why solutions in the form of hyperbolic, trigonometric, and exponential functions can form linear superposition solutions is also discussed
Extreme compression behaviour of equations of state
Shanker, J.; Dulari, P.; Singh, P.K.
The extreme compression ($P \to \infty$) behaviour of various equations of state with $K'_\infty > 0$ yields $(P/K)_\infty = 1/K'_\infty$, an algebraic identity found by Stacey. Here P is the pressure, K the bulk modulus, $K' = dK/dP$, and $K'_\infty$ the value of $K'$ at $P \to \infty$. We use this result to demonstrate further that there exists an algebraic identity also between the higher pressure derivatives of the bulk modulus which is satisfied at extreme compression by different types of equations of state such as the Birch-Murnaghan equation, the Poirier-Tarantola logarithmic equation, the generalized Rydberg equation, Keane's equation and the Stacey reciprocal K-primed equation. The identity has been used to find a relationship between $\lambda_\infty$, the third-order Grueneisen parameter at $P \to \infty$, and the pressure derivatives of the bulk modulus with the help of the free-volume formulation without assuming any specific form of equation of state.
Partial differential equations of mathematical physics
Sobolev, S L
Partial Differential Equations of Mathematical Physics emphasizes the study of second-order partial differential equations of mathematical physics, which is deemed as the foundation of investigations into waves, heat conduction, hydrodynamics, and other physical problems. The book discusses in detail a wide spectrum of topics related to partial differential equations, such as the theories of sets and of Lebesgue integration, integral equations, Green's function, and the proof of the Fourier method. Theoretical physicists, experimental physicists, mathematicians engaged in pure and applied math
Baecklund transformations for integrable lattice equations
Atkinson, James
We give new Baecklund transformations (BTs) for some known integrable (in the sense of being multidimensionally consistent) quadrilateral lattice equations. As opposed to the natural auto-BT inherent in every such equation, these BTs are of two other kinds. Specifically, it is found that some equations admit additional auto-BTs (with Baecklund parameter), whilst some pairs of apparently distinct equations admit a BT which connects them
New solutions of Heun's general equation
Ishkhanyan, Artur; Suominen, Kalle-Antti
We show that in four particular cases the derivative of the solution of Heun's general equation can be expressed in terms of a solution to another Heun's equation. Starting from this property, we use the Gauss hypergeometric functions to construct series solutions to Heun's equation for the mentioned cases. Each of the hypergeometric functions involved has correct singular behaviour at only one of the singular points of the equation; the sum, however, has correct behaviour. (letter to the editor)
Notes on the infinity Laplace equation
Lindqvist, Peter
This BCAM SpringerBriefs volume is a treatise on the Infinity-Laplace Equation, which has inherited many features from the ordinary Laplace Equation, and is based on lectures by the author. The Infinity-Laplace Equation has delightful counterparts to the Dirichlet integral, the mean value property, the Brownian motion, Harnack's inequality, and so on. This "fully non-linear" equation has applications to image processing and to mass transfer problems, and it provides optimal Lipschitz extensions of boundary values.
ON DIFFERENTIAL EQUATIONS, INTEGRABLE SYSTEMS, AND GEOMETRY
Enrique Gonzalo Reyes Garcia
Partial differential equations appeared in the 18th century as essential tools for the analytic study of physical models and, later, they proved to be fundamental for the progress of mathematics. For example, fundamental results of modern differential geometry are based on deep theorems on differential equations. Reciprocally, it is possible to study differential equations through geometrical means, just as was done by o...
Hybrid quantum-classical master equations
Diósi, Lajos
We discuss hybrid master equations of composite systems, which are hybrids of classical and quantum subsystems. A fairly general form of hybrid master equations is suggested. Its consistency is derived from the consistency of Lindblad quantum master equations. We emphasize that quantum measurement is a natural example of exact hybrid systems. We derive a heuristic hybrid master equation of time-continuous position measurement (monitoring). (paper)
On a complex differential Riccati equation
Khmelnytskaya, Kira V; Kravchenko, Vladislav V
We consider a nonlinear partial differential equation for complex-valued functions which is related to the two-dimensional stationary Schroedinger equation and enjoys many properties similar to those of the ordinary differential Riccati equation such as the famous Euler theorems, the Picard theorem and others. Besides these generalizations of the classical 'one-dimensional' results, we discuss new features of the considered equation including an analogue of the Cauchy integral theorem
About the solvability of matrix polynomial equations
Netzer, Tim; Thom, Andreas
We study self-adjoint matrix polynomial equations in a single variable and prove existence of self-adjoint solutions under some assumptions on the leading form. Our main result is that any self-adjoint matrix polynomial equation of odd degree with non-degenerate leading form can be solved in self-adjoint matrices. We also study equations of even degree and equations in many variables.
On polynomial solutions of the Heun equation
Gurappa, N; Panigrahi, Prasanta K
By making use of a recently developed method to solve linear differential equations of arbitrary order, we find a wide class of polynomial solutions to the Heun equation. We construct the series solution to the Heun equation before identifying the polynomial solutions. The Heun equation extended by the addition of a term, -σ/x, is also amenable to polynomial solutions. (letter to the editor)
New solutions of the confluent Heun equation
Harold Exton
New compact triple series solutions of the confluent Heun equation (CHE) are obtained by appropriate applications of the Laplace transform and its inverse to a suitably constructed system of soluble differential equations. The computer-algebra package MAPLE V is used to tackle an auxiliary system of non-linear algebraic equations. This study is partly motivated by the relationship between the CHE and certain Schrödinger equations.
Some Aspects of Extended Kinetic Equation
Motivated by the pathway model of Mathai introduced in 2005 [Linear Algebra and Its Applications, 396, 317–328], we extend the standard kinetic equations. The connection of the extended kinetic equation with a fractional calculus operator is established. The solution of the general form of the fractional kinetic equation is obtained through the Laplace transform. The results for the standard kinetic equation are obtained as the limiting case.
Solutions manual to accompany Ordinary differential equations
PARALLEL SOLUTION METHODS OF PARTIAL DIFFERENTIAL EQUATIONS
Korhan KARABULUT
Partial differential equations arise in almost all fields of science and engineering. The computer time spent in solving partial differential equations is much greater than that spent on any other problem class. For this reason, partial differential equations are well suited to being solved on parallel computers, which offer great computational power. In this study, the parallel solution of partial differential equations with the Jacobi, Gauss-Seidel, SOR (Successive Over-Relaxation) and SSOR (Symmetric SOR) algorithms is studied.
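A serial sketch of two of the iterative methods named above, Jacobi and SOR, applied to the 1-D Poisson system with the tridiagonal (−1, 2, −1) matrix; the problem size and relaxation factor are our choices, and the run illustrates why over-relaxation is worth parallelizing.

```python
# Jacobi versus SOR on A x = b with A = tridiag(-1, 2, -1).
def jacobi(b, iters):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):   # every update uses only the previous iterate
        x = [(b[i] + (x[i-1] if i > 0 else 0.0) + (x[i+1] if i < n-1 else 0.0)) / 2.0
             for i in range(n)]
    return x

def sor(b, iters, omega=1.7):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):   # in-place sweep = Gauss-Seidel, then over-relax
            gs = (b[i] + (x[i-1] if i > 0 else 0.0) + (x[i+1] if i < n-1 else 0.0)) / 2.0
            x[i] = (1 - omega) * x[i] + omega * gs
    return x

b = [1.0] * 20                      # right-hand side
exact_mid = 10 * (21 - 10) / 2.0    # closed form x_i = i*(n+1-i)/2, midpoint i = 10
print("Jacobi, 200 iters, midpoint:", round(jacobi(b, 200)[9], 3), "exact:", exact_mid)
print("SOR,    200 iters, midpoint:", round(sor(b, 200)[9], 3), "exact:", exact_mid)
```

After 200 sweeps Jacobi is still visibly far from the exact value while SOR has essentially converged, mirroring the convergence ranking the abstract relies on.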
Non-markovian boltzmann equation
Kremp, D.; Bonitz, M.; Kraeft, W.D.; Schlanges, M.
A quantum kinetic equation for strongly interacting particles (generalized binary collision approximation, ladder or T-matrix approximation) is derived in the framework of the density operator technique. In contrast to conventional kinetic theory, which is valid on large time scales as compared to the collision (correlation) time only, our approach retains the full time dependencies, especially also on short time scales. This means retardation and memory effects resulting from the dynamics of binary correlations and initial correlations are included. Furthermore, the resulting kinetic equation conserves total energy (the sum of kinetic and potential energy). The second aspect of generalization is the inclusion of many-body effects, such as self-energy, i.e., renormalization of single-particle energies and damping. To this end we introduce an improved closure relation to the Bogolyubov–Born–Green–Kirkwood–Yvon hierarchy. Furthermore, in order to express the collision integrals in terms of familiar scattering quantities (Møller operator, T-matrix), we generalize the methods of quantum scattering theory by the inclusion of medium effects. To illustrate the effects of memory and damping, the results of numerical simulations are presented.
Wave-equation Q tomography
Dutta, Gaurav; Schuster, Gerard T.
Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. The amplitude and the dispersion losses from attenuation are often compensated for during prestack depth migration. However, most attenuation compensation or Q-compensation migration algorithms require an estimate of the background Q model. We have developed a wave-equation gradient optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε, where ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early arrivals. The gradient is computed by migrating the observed traces weighted by the frequency-shift residuals. The background Q model is perturbed until the predicted and the observed traces have the same peak frequencies or the same centroid frequencies. Numerical tests determined that an improved accuracy of the Q model by wave-equation Q tomography leads to a noticeable improvement in migration image quality. © 2016 Society of Exploration Geophysicists.
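In the notation of this abstract, the skeletonized misfit is a least-squares functional of frequency-shift residuals; a plausible rendering (our reconstruction for the reader's benefit, not a formula quoted from the paper) is

```latex
\epsilon = \tfrac{1}{2} \sum_{i} \left( f_i^{\mathrm{obs}} - f_i^{\mathrm{pred}}(Q) \right)^{2},
```

where $f_i$ denotes the peak or centroid frequency of the $i$-th early arrival and the gradient $\partial\epsilon/\partial Q$ is assembled by migrating the residual-weighted traces.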
Quantization of Equations of Motion
D. Kochan
The classical Newton-Lagrange equations of motion represent the fundamental physical law of mechanics. Their traditional Lagrangian and/or Hamiltonian precursors, when available, are essential in the context of quantization. However, there are situations that lack Lagrangian and/or Hamiltonian settings. This paper discusses a description of classical dynamics and presents some irresponsible speculations about its quantization by introducing a certain canonical two-form Ω. By its construction, Ω embodies the kinetic energy and the forces acting within the system (not their potentials). A new type of variational principle employing the differential two-form Ω is introduced. Variation is performed over "umbilical surfaces" instead of system histories. It provides the correct Newton-Lagrange equations of motion. The quantization is inspired by the Feynman path integral approach. The quintessence is to rearrange it into an "umbilical world-sheet" functional integral in accordance with the proposed variational principle. In the case of potential-generated forces, the new approach reduces to standard quantum mechanics. As an example, quantum mechanics with friction is analyzed in detail.
Sobolev gradients and differential equations
Neuberger, J W
A Sobolev gradient of a real-valued functional on a Hilbert space is a gradient of that functional taken relative to an underlying Sobolev norm. This book shows how descent methods using such gradients allow a unified treatment of a wide variety of problems in differential equations. For discrete versions of partial differential equations, corresponding Sobolev gradients are seen to be vastly more efficient than ordinary gradients. In fact, descent methods with these gradients generally scale linearly with the number of grid points, in sharp contrast with the use of ordinary gradients. Aside from the first edition of this work, this is the only known account of Sobolev gradients in book form. Most of the applications in this book have emerged since the first edition was published some twelve years ago. What remains of the first edition has been extensively revised. There are a number of plots of results from calculations and a sample MATLAB code is included for a simple problem. Those working through a fair p...
The Laplace transformation of adjoint transport equations
Hoogenboom, J.E.
A clarification is given of the difference between the equation adjoint to the Laplace-transformed time-dependent transport equation and the Laplace-transformed time-dependent adjoint transport equation. Proper procedures are derived to obtain the Laplace transform of the instantaneous detector response. (author)
Equations of state for light water
Rubin, G.A.; Granziera, M.R.
The equations of state for light water were developed, based on the tables of Keenan and Keyes. Equations are presented describing the specific volume, internal energy, enthalpy and entropy of saturated steam, superheated vapor and subcooled liquid as functions of pressure and temperature. For each property, several equations are shown, with different precisions and different degrees of complexity. (Author)
Cole's ansatz and extensions of Burgers' equation
Tasso, H.
A sequence of nonlinear partial differential equations is constructed. It contains all equations whose solutions can be obtained by applying the Cole-Hopf transformation to linear partial differential equations. An example is $u_t = (u^3)_x + \tfrac{3}{2}(u^2)_{xx} + u_{xxx}$. (orig.)
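For context, the Cole-Hopf transformation mentioned here linearizes Burgers' equation into the heat equation (a standard result):

```latex
u = -2\nu\,\frac{\varphi_x}{\varphi}
\quad\Longrightarrow\quad
u_t + u\,u_x = \nu\,u_{xx}
\;\;\text{is mapped to}\;\;
\varphi_t = \nu\,\varphi_{xx},
```

so every positive solution $\varphi$ of the linear heat equation yields a solution $u$ of Burgers' equation; the equations in the constructed sequence are exactly those reachable by this device from linear PDEs.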
Completely integrable operator evolution equations. II
The author continues the investigation of operator classical completely integrable systems. The main attention is devoted to the stationary operator non-linear Schroedinger equation. It is shown that this equation can be used for separation of variables for a large class of completely integrable equations. (Auth.)
Derivation of the neutron diffusion equation
Mika, J.R.; Banasiak, J.
We discuss the diffusion equation as an asymptotic limit of the neutron transport equation for large scattering cross sections. We show that the classical asymptotic expansion procedure does not lead to the diffusion equation and present two modified approaches to overcome this difficulty. The effect of the initial layer is also discussed. (authors)
Skew differential fields, differential and difference equations
van der Put, M
The central question is: Let a differential or difference equation over a field K be isomorphic to all its Galois twists w.r.t. the group Gal(K/k). Does the equation descend to k? For a number of categories of equations an answer is given.
Some Functional Equations Originating from Number Theory
We will introduce new functional equations (3) and (4) which are strongly related to well-known formulae (1) and (2) of number theory, and investigate the solutions of the equations. Moreover, we will also study some stability problems of those equations.
A reliable treatment for nonlinear Schroedinger equations
Khani, F.; Hamedi-Nezhad, S.; Molabahrami, A.
The Exp-function method is used to find unified solutions of nonlinear wave equations. Nonlinear Schroedinger equations with cubic and power-law nonlinearity are selected to illustrate the effectiveness and simplicity of the method. It is shown that the Exp-function method, with the help of symbolic computation, provides a powerful mathematical tool for solving nonlinear equations.
New solitons connected to the Dirac equation
Grosse, H.
Imposing isospectral invariance for the one dimensional Dirac operator leads to systems of nonlinear partial differential equations. By constructing reflectionless potentials of the Dirac equation we obtain a new type of solitons for a system of modified Korteweg-de Vries equations. (Author)
Compositeness condition in the renormalization group equation
Bando, Masako; Kugo, Taichiro; Maekawa, Nobuhiro; Sasakura, Naoki; Watabiki, Yoshiyuki; Suehiro, Kazuhiko
The problems in imposing compositeness conditions as boundary conditions in renormalization group equations are discussed. It is pointed out that one has to use the renormalization group equation directly in cutoff theory. In some cases, however, it can be approximated by the renormalization group equation in continuum theory if the mass dependent renormalization scheme is adopted. (orig.)
Transformation properties of the integrable evolution equations
Konopelchenko, B.G.
Group-theoretical properties of partial differential equations integrable by the inverse scattering transform method are discussed. It is shown that nonlinear transformations typical of integrable equations (symmetry groups, Bäcklund transformations) and these equations themselves are contained in a certain universal nonlinear transformation group. (orig.)
Comparison of the Schrodinger and Salpeter equations
Jacobs, S.; Olsson, M.G.
A unified approach to the solution of the Schrodinger and spinless Salpeter equations is presented. Fits to heavy quark bound state energies using various potential models are employed to determine whether the Salpeter equation provides a better description of heavy quark systems than the Schrodinger equation.
Lie symmetries for systems of evolution equations
Paliathanasis, Andronikos; Tsamparlis, Michael
The Lie symmetries for a class of systems of evolution equations are studied. The evolution equations are defined in a bimetric space with two Riemannian metrics corresponding to the space of the independent and dependent variables of the differential equations. The exact relation of the Lie symmetries with the collineations of the bimetric space is determined.
Loop equations in the theory of gravitation
Makeenko, Yu.M.; Voronov, N.A.
Loop-space variables (matrices of parallel transport) for the theory of gravitation are described. Loop equations, which are equivalent to the Einstein equations, are derived in the classical case. Loop equations are derived for gravity with cosmological constant as well. An analogy with the loop-space approach in Yang-Mills theory is discussed
Symmetry properties of fractional diffusion equations
Gazizov, R K; Kasatkin, A A; Lukashchuk, S Yu [Ufa State Aviation Technical University, Ufa (Russian Federation)]
In this paper, nonlinear anomalous diffusion equations with time fractional derivatives (Riemann-Liouville and Caputo) of order between 0 and 2 are considered. Lie point symmetries of these equations are investigated and compared. Examples of using the obtained symmetries for constructing exact solutions of the equations under consideration are presented.
More Issues in Observed-Score Equating
van der Linden, Wim J.
This article is a response to the commentaries on the position paper on observed-score equating by van der Linden (this issue). The response focuses on the more general issues in these commentaries, such as the nature of the observed scores that are equated, the importance of test-theory assumptions in equating, the necessity to use multiple…
Solving Absolute Value Equations Algebraically and Geometrically
Shiyuan, Wei
The paper describes how students can improve their comprehension by understanding the geometrical meaning of algebraic equations, or by solving algebraic equations geometrically. Students can experiment with the conditions of the absolute value equation presented, for an interesting way to form an overall understanding of the concept.
Antishadowing effects in the unitarized BFKL equation
Ruan Jianhong; Shen Zhenqi; Yang Jifeng; Zhu Wei
A unitarized BFKL equation incorporating shadowing and antishadowing corrections of the gluon recombination is proposed. This equation reduces to the Balitsky-Kovchegov evolution equation near the saturation limit. We find that the antishadowing effects have a sizable influence on the gluon distribution function in the preasymptotic regime.
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
The Modified Enskog Equation for Mixtures
Beijeren, H. van; Ernst, M.H.
In a previous paper it was shown that a modified form of the Enskog equation, applied to mixtures of hard spheres, should be considered as the correct extension of the usual Enskog equation to the case of mixtures. The main argument was that the modified Enskog equation leads to linear transport
Jacobi equations as Lagrange equations of the deformed Lagrangian
Casciaro, B.
We study higher-order variational derivatives of a generic Lagrangian $L_0 = L_0(t,q,\dot q)$. We introduce two new Lagrangians, $L_1$ and $L_2$, associated to the first- and second-order deformations of the original Lagrangian $L_0$. In terms of these Lagrangians, we are able to establish simple relations between the variational derivatives of different orders of a Lagrangian. As a consequence of these relations, the Euler-Lagrange and the Jacobi equations are obtained from a single variational principle based on $L_1$. We can furthermore introduce an associated Hamiltonian $H_1 = H_1(t,q,\dot q,\eta,\dot\eta)$ with $\eta \equiv \delta q$. If $L_0$ is independent of time then $H_1$ is a conserved quantity. (author)
The Dirac equation and its solutions
Bagrov, Vladislav G. [Tomsk State Univ., Tomsk (Russian Federation). Dept. of Quantum Field Theroy; Gitman, Dmitry [Sao Paulo Univ. (Brazil). Inst. de Fisica; P.N. Lebedev Physical Institute, Moscow (Russian Federation); Tomsk State Univ., Tomsk (Russian Federation). Faculty of Physics
The Dirac equation is of fundamental importance for relativistic quantum mechanics and quantum electrodynamics. In relativistic quantum mechanics, the Dirac equation is referred to as one-particle wave equation of motion for electron in an external electromagnetic field. In quantum electrodynamics, exact solutions of this equation are needed to treat the interaction between the electron and the external field exactly. In particular, all propagators of a particle, i.e., the various Green's functions, are constructed in a certain way by using exact solutions of the Dirac equation.
An integral transform of the Salpeter equation
Krolikowski, W.
We find a new form of relativistic wave equation for two spin-1/2 particles, which arises by an integral transformation (in the position space) of the wave function in the Salpeter equation. The non-locality involved in this transformation is extended practically over the Compton wavelength of the lighter of two particles. In the case of equal masses the new equation assumes the form of the Breit equation with an effective integral interaction. In the one-body limit it reduces to the Dirac equation also with an effective integral interaction. (author)
Sparse dynamics for partial differential equations.
Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D; Osher, Stanley
We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.
Numerical methods for differential equations and applications
Ixaru, L.G.
This book is addressed to persons who, without being professionals in applied mathematics, are often faced with the problem of numerically solving differential equations. In each of the first three chapters a definite class of methods is discussed for the solution of the initial value problem for ordinary differential equations: multistep methods; one-step methods; and piecewise perturbation methods. The fourth chapter is mainly focussed on the boundary value problems for linear second-order equations, with a section devoted to the Schroedinger equation. In the fifth chapter the eigenvalue problem for the radial Schroedinger equation is solved in several ways, with computer programs included. (Auth.)
Numerical solutions of diffusive logistic equation
Afrouzi, G.A.; Khademloo, S.
In this paper we numerically investigate positive solutions of a superlinear elliptic equation on bounded domains. The study of the diffusive logistic equation continues to be an active field of research. The subject has important applications to population migration as well as many other branches of science and engineering. In this paper a finite difference scheme is developed and compared for solving the one- and three-dimensional diffusive logistic equation. The basis of the analysis of the finite difference equations considered here is the modified equivalent partial differential equation approach, developed by many authors in recent years.
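A one-dimensional sketch of such an explicit finite-difference scheme, for $u_t = d\,u_{xx} + u(1-u)$ with homogeneous Dirichlet boundaries (all parameters are illustrative and chosen to satisfy the usual stability bound $d\,\Delta t/\Delta x^2 \le 1/2$):

```python
import numpy as np

d, L, nx, dt, steps = 0.1, 1.0, 101, 1e-4, 20_000
dx = L / (nx - 1)
assert d * dt / dx ** 2 <= 0.5          # explicit-scheme stability bound

x = np.linspace(0.0, L, nx)
u = 0.5 * np.sin(np.pi * x)             # positive initial profile, zero at the ends

for _ in range(steps):
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx ** 2   # centred second difference
    u[1:-1] += dt * (d * lap + u[1:-1] * (1.0 - u[1:-1]))
    u[0] = u[-1] = 0.0                  # homogeneous Dirichlet boundary
```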
Complex centers of polynomial differential equations
Mohamad Ali M. Alwash
We present some results on the existence and nonexistence of centers for polynomial first order ordinary differential equations with complex coefficients. In particular, we show that binomial differential equations without linear terms do not have complex centers. Classes of polynomial differential equations, with more than two terms, are presented that do not have complex centers. We also study the relation between complex centers and the Pugh problem. An algorithm is described to solve the Pugh problem for equations without complex centers. The method of proof involves phase plane analysis of the polar equations and a local study of periodic solutions.
Monge-Ampere equations and tensorial functors
Tunitsky, Dmitry V
We consider differential-geometric structures associated with Monge-Ampere equations on manifolds and use them to study the contact linearization of such equations. We also consider the category of Monge-Ampere equations (the morphisms are contact diffeomorphisms) and a number of subcategories. We are chiefly interested in subcategories of Monge-Ampere equations whose objects are locally contact equivalent to equations linear in the second derivatives (semilinear equations), linear in derivatives, almost linear, linear in the second derivatives and independent of the first derivatives, linear, linear and independent of the first derivatives, equations with constant coefficients or evolution equations. We construct a number of functors from the category of Monge-Ampere equations and from some of its subcategories to the category of tensorial objects (that is, multi-valued sections of tensor bundles). In particular, we construct a pseudo-Riemannian metric for every generic Monge-Ampere equation. These functors enable us to establish effectively verifiable criteria for a Monge-Ampere equation to belong to the subcategories listed above.
From ordinary to partial differential equations
Esposito, Giampiero
This book is addressed to mathematics and physics students who want to develop an interdisciplinary view of mathematics, from the age of Riemann, Poincaré and Darboux to basic tools of modern mathematics. It enables them to acquire the sensibility necessary for the formulation and solution of difficult problems, with an emphasis on concepts, rigour and creativity. It consists of eight self-contained parts: ordinary differential equations; linear elliptic equations; calculus of variations; linear and non-linear hyperbolic equations; parabolic equations; Fuchsian functions and non-linear equations; the functional equations of number theory; pseudo-differential operators and pseudo-differential equations. The author leads readers through the original papers and introduces new concepts, with a selection of topics and examples that are of high pedagogical value.
Developments in functional equations and related topics
Ciepliński, Krzysztof; Rassias, Themistocles
This book presents current research on Ulam stability for functional equations and inequalities. Contributions from renowned scientists emphasize fundamental and new results, methods and techniques. Detailed examples are given to theories to further understanding at the graduate level for students in mathematics, physics, and engineering. Key topics covered in this book include: quasi-means; approximate isometries; functional equations in hypergroups; stability of functional equations; the Fischer-Muszély equation; Haar meager sets and Haar null sets; dynamical systems; functional equations in probability theory; stochastic convex ordering; the Dhombres functional equation; nonstandard analysis and Ulam stability. This book is dedicated to the memory of Stanisław Marcin Ulam, who in 1940 posed the fundamental problem concerning approximate homomorphisms of groups, a problem that has provided the stimulus for studies in the stability of functional equations and inequalities.
On integrability of the Killing equation
Houri, Tsuyoshi; Tomoda, Kentaro; Yasui, Yukinori
Killing tensor fields have been thought of as describing the hidden symmetry of space(-time) since they are in one-to-one correspondence with polynomial first integrals of geodesic equations. Since many problems in classical mechanics can be formulated as geodesic problems in curved space and spacetime, solving the defining equation for Killing tensor fields (the Killing equation) is a powerful way to integrate equations of motion. Thus it has been desirable to formulate the integrability conditions of the Killing equation, which serve to determine the number of linearly independent solutions and also to restrict the possible forms of solutions tightly. In this paper, we show the prolongation for the Killing equation in a manner that uses Young symmetrizers. Using the prolonged equations, we provide the integrability conditions explicitly.
Generalization of Einstein's gravitational field equations
Moulin, Frédéric
The Riemann tensor is the cornerstone of general relativity, but as is well known it does not appear explicitly in Einstein's equation of gravitation. This suggests that the latter may not be the most general equation. We propose here for the first time, following a rigorous mathematical treatment based on the variational principle, that there exists a generalized 4-index gravitational field equation containing the Riemann curvature tensor linearly, and thus the Weyl tensor as well. We show that this equation, written in n dimensions, contains the energy-momentum tensor for matter and that of the gravitational field itself. This new 4-index equation remains completely within the framework of general relativity and emerges as a natural generalization of the familiar 2-index Einstein equation. Due to the presence of the Weyl tensor, we show that this equation contains much more information, which fully justifies the use of a fourth-order theory.
Stochastic integration and differential equations
Protter, Philip E
It has been 15 years since the first edition of Stochastic Integration and Differential Equations, A New Approach appeared, and in those years many other texts on the same subject have been published, often with connections to applications, especially mathematical finance. Yet in spite of the apparent simplicity of approach, none of these books has used the functional analytic method of presenting semimartingales and stochastic integration. Thus a 2nd edition seems worthwhile and timely, though it is no longer appropriate to call it "a new approach". The new edition has several significant changes, most prominently the addition of exercises for solution. These are intended to supplement the text, but lemmas needed in a proof are never relegated to the exercises. Many of the exercises have been tested by graduate students at Purdue and Cornell Universities. Chapter 3 has been completely redone, with a new, more intuitive and simultaneously elementary proof of the fundamental Doob-Meyer decomposition theorem, t...
Teaching materials of algebraic equation
Widodo, S. A.; Prahmana, R. C. I.; Purnami, A. S.; Turmudi
The purpose of this paper is to determine the effectiveness of teaching materials for algebraic equations. This research used an experimental method. The population in this study is all students of mathematics education who take the numerical methods course at Sarjanawiyata Tamansiswa University; the sample is taken using cluster random sampling. The instruments used in this research are a test and a questionnaire. The test is used to measure problem-solving ability and achievement, while the questionnaire is used to gauge the students' response to the teaching materials. The quantitative data were analysed with the Wilcoxon test, while the qualitative data were analysed using grounded theory. Based on the results of the test, it can be concluded that the developed teaching materials can improve problem-solving ability and achievement.
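For readers unfamiliar with the quantitative analysis named above, a minimal sketch of a Wilcoxon signed-rank test on paired scores (the numbers below are invented for illustration; the study's actual data are not reproduced here):

```python
from scipy.stats import wilcoxon

# Hypothetical pre/post problem-solving scores for ten students.
pre  = [55, 60, 48, 70, 62, 58, 65, 50, 72, 61]
post = [68, 66, 59, 78, 70, 64, 75, 63, 80, 69]

stat, p = wilcoxon(pre, post)           # paired, non-parametric test
print(f"W = {stat}, p = {p:.4f}")       # small p suggests a real improvement
```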
An introduction to differential equations
Ladde, Anil G
This is a twenty-first century book designed to meet the challenges of understanding and solving interdisciplinary problems. The book creatively incorporates "cutting-edge" research ideas and techniques at the undergraduate level. The book also is a unique research resource for undergraduate/graduate students and interdisciplinary researchers. It emphasizes and exhibits the importance of conceptual understandings and its symbiotic relationship in the problem solving process. The book is proactive in preparing for the modeling of dynamic processes in various disciplines. It introduces a "break-down-the problem" type of approach in a way that creates "fun" and "excitement". The book presents many learning tools like "step-by-step procedures (critical thinking)", the concept of "math" being a language, applied examples from diverse fields, frequent recaps, flowcharts and exercises. Uniquely, this book introduces an innovative and unified method of solving nonlinear scalar differential equations. This is called ...
equate: An R Package for Observed-Score Linking and Equating
Anthony D. Albano
The R package equate contains functions for observed-score linking and equating under single-group, equivalent-groups, and nonequivalent-groups with anchor test(s) designs. This paper introduces these designs and provides an overview of observed-score equating with details about each of the supported methods. Examples demonstrate the basic functionality of the equate package.
Equating Multidimensional Tests under a Random Groups Design: A Comparison of Various Equating Procedures
Lee, Eunjung
The purpose of this research was to compare the equating performance of various equating procedures for the multidimensional tests. To examine the various equating procedures, simulated data sets were used that were generated based on a multidimensional item response theory (MIRT) framework. Various equating procedures were examined, including…
The modified simplest equation method to look for exact solutions of nonlinear partial differential equations
Efimova, Olga Yu.
The modification of simplest equation method to look for exact solutions of nonlinear partial differential equations is presented. Using this method we obtain exact solutions of generalized Korteweg-de Vries equation with cubic source and exact solutions of third-order Kudryashov-Sinelshchikov equation describing nonlinear waves in liquids with gas bubbles.
Inferring Mathematical Equations Using Crowdsourcing.
Wasik, Szymon; Fratczak, Filip; Krzyskow, Jakub; Wulnikowski, Jaroslaw
Crowdsourcing, understood as outsourcing work to a large network of people in the form of an open call, has been utilized successfully many times, including a very interesting concept involving the implementation of computer games with the objective of solving a scientific problem by employing users to play a game-so-called crowdsourced serious games. Our main objective was to verify whether such an approach could be successfully applied to the discovery of mathematical equations that explain experimental data gathered during the observation of a given dynamic system. Moreover, we wanted to compare it with an approach based on artificial intelligence that uses symbolic regression to find such formulae automatically. To achieve this, we designed and implemented an Internet game in which players attempt to design a spaceship representing an equation that models the observed system. The game was designed while considering that it should be easy to use for people without strong mathematical backgrounds. Moreover, we tried to make use of the collective intelligence observed in crowdsourced systems by enabling many players to collaborate on a single solution. The idea was tested on several hundred players playing almost 10,000 games and conducting a user opinion survey. The results prove that the proposed solution has very high potential. The function generated during weeklong tests was almost as precise as the analytical solution of the model of the system and, up to a certain complexity level of the formulae, it explained data better than the solution generated automatically by Eureqa, the leading software application for the implementation of symbolic regression. Moreover, we observed benefits of using crowdsourcing; the chain of consecutive solutions that led to the best solution was obtained by the continuous collaboration of several players.
INVARIANTS OF GENERALIZED RAPOPORT-LEAS EQUATIONS
Elena N. Kushner
For the generalized Rapoport-Leas equations, an algebra of differential invariants is constructed with respect to point transformations, that is, transformations of the independent and dependent variables. Finding a general transformation of this type reduces to solving an extremely complicated functional equation. Therefore, following the approach of Sophus Lie, we restrict ourselves to the search for infinitesimal transformations, which are generated by translations along the trajectories of vector fields. The problem of finding these vector fields reduces to solving an overdetermined system of linear differential equations for their coefficients. The Rapoport-Leas equations arise in the study of nonlinear filtration processes in porous media, as well as in other areas of natural science; for example, they describe two-phase filtration in a porous medium, filtration of a polytropic gas, and the propagation of heat in a nuclear explosion. They are a vital topic for research: in recent works of Bibikov, Lychagin, and others, the analysis of the symmetries of the generalized Rapoport-Leas equations has been carried out, and finite-dimensional dynamics and conditions for the existence of attractors have been found. Since the generalized Rapoport-Leas equations are nonlinear second-order partial differential equations with two independent variables, the methods of the geometric theory of differential equations are used to study them in this paper. According to this theory, differential equations generate subvarieties in the space of jets. This makes it possible to use the apparatus of modern differential geometry to study differential equations. We introduce the concept of admissible transformations, that is, replacements of variables that do not take equations outside the class of Rapoport-Leas equations. Such transformations form a Lie group. For this Lie group there are differential invariants that separate
A new auxiliary equation and exact travelling wave solutions of nonlinear equations
Sirendaoreji
A new auxiliary ordinary differential equation and its solutions are used for constructing exact travelling wave solutions of nonlinear partial differential equations in a unified way. The main idea of this method is to take full advantage of the auxiliary equation, which has many new exact solutions. More new exact travelling wave solutions are obtained for the quadratic nonlinear Klein-Gordon equation, the combined KdV and mKdV equation, the sine-Gordon equation and the Whitham-Broer-Kaup equations.
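The auxiliary equation in question is commonly cited in the form below (our paraphrase of the method, to be checked against the paper itself): travelling-wave solutions $u(\xi)$, $\xi = x - vt$, are sought as finite expansions in a function $z$ that solves the auxiliary ODE,

```latex
\left( \frac{dz}{d\xi} \right)^{2} = a\,z^{2}(\xi) + b\,z^{3}(\xi) + c\,z^{4}(\xi),
\qquad
u(\xi) = \sum_{i=0}^{n} a_i\, z^{i}(\xi),
```

where the truncation order $n$ is fixed by balancing the highest-order derivative against the strongest nonlinearity, and the known solutions of the auxiliary ODE then generate the new travelling-wave solutions.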
Some New Integrable Equations from the Self-Dual Yang-Mills Equations
Ivanova, T.A.; Popov, A.D.
Using the symmetry reductions of the self-dual Yang-Mills (SDYM) equations in (2+2) dimensions, we introduce new integrable equations which are 'deformations' of the chiral model in (2+1) dimensions, generalized nonlinear Schroedinger, Korteweg-de Vries, Toda lattice, Garnier, Euler-Arnold, generalized Calogero-Moser and Euler-Calogero-Moser equations. The Lax pairs for all of these equations are derived by the symmetry reductions of the Lax pair for the SDYM equations.
Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation
Vitanov, Nikolay K.; Dimitrova, Zlatinka I.
We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.
A fractional Dirac equation and its solution
Muslih, Sami I; Agrawal, Om P; Baleanu, Dumitru
This paper presents a fractional Dirac equation and its solution. The fractional Dirac equation may be obtained using a fractional variational principle and a fractional Klein-Gordon equation; both methods are considered here. We extend the variational formulations for fractional discrete systems to fractional field systems defined in terms of Caputo derivatives. By applying the variational principle to a fractional action S, we obtain the fractional Euler-Lagrange equations of motion. We present a Lagrangian and a Hamiltonian for the fractional Dirac equation of order α. We also use a fractional Klein-Gordon equation to obtain the fractional Dirac equation which is the same as that obtained using the fractional variational principle. Eigensolutions of this equation are presented which follow the same approach as that for the solution of the standard Dirac equation. We also provide expressions for the path integral quantization for the fractional Dirac field which, in the limit α → 1, approaches to the path integral for the regular Dirac field. It is hoped that the fractional Dirac equation and the path integral quantization of the fractional field will allow further development of fractional relativistic quantum mechanics.
BCS equations in the continuum
Sandulescu, N.; Liotta, R. J.; Wyss, R.
The properties of nuclei close to the drip line are significantly influenced by the continuum part of the single-particle spectrum. The main role is played by the resonant states, which are largely confined in the region of the nuclear potential and are therefore more strongly coupled with the bound states in an excitation process. Resonant states are also important in nuclei beyond the drip line. In this case the decay properties of the nucleus can be directly related to the widths of the narrow resonances occupied by the unbound nucleons. The aim of this work is to propose an alternative for evaluating the effect of the resonant part of the single-particle spectrum on the pairing correlations calculated within the BCS approximation. We estimated the role of resonances in the case of the isotope $^{170}$Sn. The Resonant-BCS (RBCS) equations are solved for the case of a seniority force. The BCS approximation based on a seniority force cannot be applied in the case of a nucleus immersed in a box if all discrete states simulating the continuum are considered, since in such a case the pairing correlations will increase with the number of states in the box. In our case one can still apply a seniority force with RBCS because the effect of the continuum appears here through a finite number of physical resonances, well defined by the given mean field. Because these resonances have a spatial distribution concentrated within the region of the nuclear potential, one expects the localization probability of nucleons far from the nuclear surface to be small. The gap obtained by correctly taking the contribution of the resonances, according to the RBCS equations, is about 1.3 MeV, while the pairing gap calculated only with the bound single-particle spectrum has the value Δ = 1.10 MeV. If we also introduce the resonant states, neglecting completely their widths, the gap will increase to the value Δ = 1.880 MeV. Therefore, one cannot estimate properly the pairing correlations by supplementing the spectrum
Kinetic equations for an unstable plasma
Laval, G; Pellat, R [Commissariat a l'Energie Atomique, Fontenay-aux-Roses (France). Centre d'Etudes Nucleaires]
In this work, we establish the plasma kinetic equations starting from the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations. We demonstrate that relations existing between correlation functions may help to justify the truncation of the hierarchy. Then we obtain the kinetic equations of a stable or unstable plasma. They do not reduce to an equation for the one-body distribution function, but generally involve two coupled equations for the one-body distribution function and the spectral density of the fluctuating electric field. We study limiting cases where the Balescu-Lenard equation, the quasi-linear theory, the Pines-Schrieffer equations and the equations of weak turbulence in the random phase approximation are recovered. At last we generalise the H-theorem for the system of equations and we define conditions for irreversible behaviour. (authors)
Equations of macrotransport in reactor fuel assemblies
Sorokin, A.P.; Zhukov, A.V.; Kornienko, Yu.N.; Ushakov, P.A.
The rigorous statement of the equations of macrotransport is obtained. These equations are the basis for channel-by-channel methods of thermohydraulic calculation of reactor fuel assemblies within the scope of the model of discontinuous multiphase coolant flow (including chemical reactions); they also describe a wide range of problems on the thermophysical justification of reactor fuel assemblies. This has been carried out by smoothing the equations of mass, momentum and enthalpy transfer over the cross-section of each phase of the elementary fuel assembly subchannel. The equation for cross-section flows is obtained by smoothing the equation of momentum transfer on the interphase boundary. Interaction of phases on the channel boundary is described using the Stanton number. The derivation is performed using the generalized equation of substance transfer. The statement of the channel-by-channel method outside the scope of the homogeneous flow model is given
Stochastic differential equation model to Prendiville processes
Granita; Bahar, Arifah [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor (Malaysia); UTM Center for Industrial & Applied Mathematics (UTM-CIAM) (Malaysia)]
The Prendiville process is another variation of the logistic model which assumes linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in the finite interval. The continuous time Markov chain can be approximated by stochastic differential equation (SDE). This paper discusses the stochastic differential equation of Prendiville process. The work started with the forward Kolmogorov equation in continuous time Markov chain of Prendiville process. Then it was formulated in the form of a central-difference approximation. The approximation was then used in Fokker-Planck equation in relation to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation. Therefore, the mean and variance function of the Prendiville process could be easily found from the explicit solution.
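A minimal Euler-Maruyama sketch of a birth-death diffusion approximation of this kind (the rate functions below are illustrative stand-ins with a linearly decreasing growth rate, not the exact coefficients derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(x0, drift, diffusion, dt, steps):
    """Simulate dX = drift(X) dt + diffusion(X) dW by Euler-Maruyama."""
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x

# Birth rate lam*(N - x), death rate mu*x; for a birth-death chain the
# diffusion approximation has drift = birth - death, variance = birth + death.
lam, mu, N = 0.4, 0.2, 100.0
drift = lambda x: lam * (N - x) - mu * x
diffusion = lambda x: np.sqrt(max(lam * (N - x) + mu * x, 0.0))

path = euler_maruyama(x0=10.0, drift=drift, diffusion=diffusion, dt=0.01, steps=5_000)
print(path[-1])
```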
Darboux transformation for the NLS equation
Aktosun, Tuncay; Mee, Cornelis van der
We analyze a certain class of integral equations associated with Marchenko equations and Gel'fand-Levitan equations. Such integral equations arise through a Fourier transformation on various ordinary differential equations involving a spectral parameter. When the integral operator is perturbed by a finite-rank perturbation, we explicitly evaluate the change in the solution in terms of the unperturbed quantities and the finite-rank perturbation. We show that this result provides a fundamental approach to derive Darboux transformations for various systems of ordinary differential operators. We illustrate our theory by providing the explicit Darboux transformation for the Zakharov-Shabat system and show how the potential and wave function change when a simple discrete eigenvalue is added to the spectrum, and thus we also provide a one-parameter family of Darboux transformations for the nonlinear Schroedinger equation.
Introduction to complex theory of differential equations
Savin, Anton
This book discusses the complex theory of differential equations or more precisely, the theory of differential equations on complex-analytic manifolds. Although the theory of differential equations on real manifolds is well known – it is described in thousands of papers and its usefulness requires no comments or explanations – to date specialists on differential equations have not focused on the complex theory of partial differential equations. However, as well as being remarkably beautiful, this theory can be used to solve a number of problems in real theory, for instance, the Poincaré balayage problem and the mother body problem in geophysics. The monograph does not require readers to be familiar with advanced notions in complex analysis, differential equations, or topology. With its numerous examples and exercises, it appeals to advanced undergraduate and graduate students, and also to researchers wanting to familiarize themselves with the subject.
On stochastic differential equations with random delay
Krapivsky, P L; Luck, J M; Mallick, K
We consider stochastic dynamical systems defined by differential equations with a uniform random time delay. The latter equations are shown to be equivalent to deterministic higher-order differential equations: for an nth-order equation with random delay, the corresponding deterministic equation has order n + 1. We analyze various examples of dynamical systems of this kind, and find a number of unusual behaviors. For instance, for the harmonic oscillator with random delay, the energy grows as $\exp\big(\tfrac{3}{2}\,t^{2/3}\big)$ in reduced units. We then investigate the effect of introducing a discrete time step ε. At variance with the continuous situation, the discrete random recursion relations thus obtained have intrinsic fluctuations. The crossover between the fluctuating discrete problem and the deterministic continuous one as ε goes to zero is studied in detail on the example of a first-order linear differential equation
Pseudodifferential equations over non-Archimedean spaces
Zúñiga-Galindo, W A
Focusing on p-adic and adelic analogues of pseudodifferential equations, this monograph presents a very general theory of parabolic-type equations and their Markov processes motivated by their connection with models of complex hierarchic systems. The Gelfand-Shilov method for constructing fundamental solutions using local zeta functions is developed in a p-adic setting and several particular equations are studied, such as the p-adic analogues of the Klein-Gordon equation. Pseudodifferential equations for complex-valued functions on non-Archimedean local fields are central to contemporary harmonic analysis and mathematical physics and their theory reveals a deep connection with probability and number theory. The results of this book extend and complement the material presented by Vladimirov, Volovich and Zelenov (1994) and Kochubei (2001), which emphasize spectral theory and evolution equations in a single variable, and Albeverio, Khrennikov and Shelkovich (2010), which deals mainly with the theory and applica...
Diffusion phenomenon for linear dissipative wave equations
In this paper we prove the diffusion phenomenon for the linear wave equation. To derive the diffusion phenomenon, a new method is used. In fact, for initial data in some weighted spaces, we prove that for {equation presented} decays with the rate {equation presented} [0,1] faster than that of either u or v, where u is the solution of the linear wave equation with initial data {equation presented} [0,1], and v is the solution of the related heat equation with initial data v 0 = u 0 + u 1. This result improves the result in H. Yang and A. Milani [Bull. Sci. Math. 124 (2000), 415-433] in the sense that, under the above restriction on the initial data, the decay rate given in that paper can be improved by t -γ/2. © European Mathematical Society.
Perturbation theory for continuous stochastic equations
Chechetkin, V.R.; Lutovinov, V.S.
The various general perturbational schemes for continuous stochastic equations are considered. These schemes share many features with the iterative solution of the Schwinger equation for the S-matrix. The following problems are discussed: continuous stochastic evolution equations for probability distribution functionals, evolution equations for equal-time correlators, perturbation theory for Gaussian and Poissonian additive noise, perturbation theory for birth and death processes, and stochastic properties of systems with multiplicative noise. The general results are illustrated by diffusion-controlled reactions, fluctuations in closed systems with chemical processes, propagation of waves in random media in the parabolic equation approximation, and non-equilibrium phase transitions in systems with Poissonian breeding centers. The rate of the irreversible reaction X + X → A (Smoluchowski process) is calculated with the use of the general theory based on continuous stochastic equations for birth and death processes. The threshold criterion and the range of the fluctuational region for the synergetic phase transition in a system with Poissonian breeding centers are also considered. (author)
Weak self-adjoint differential equations
Gandarias, M L
The concepts of self-adjoint and quasi self-adjoint equations were introduced by Ibragimov (2006 J. Math. Anal. Appl. 318 742-57; 2007 Arch. ALGA 4 55-60). In Ibragimov (2007 J. Math. Anal. Appl. 333 311-28), a general theorem on conservation laws was proved. In this paper, we generalize the concept of self-adjoint and quasi self-adjoint equations by introducing the definition of weak self-adjoint equations. We find a class of weak self-adjoint quasi-linear parabolic equations. The property of a differential equation to be weak self-adjoint is important for constructing conservation laws associated with symmetries of the differential equation. (fast track communication)
Interactive differential equations modeling program
Rust, B.W.; Mankin, J.B.
Due to the recent emphasis on mathematical modeling, many ecologists are using mathematics and computers more than ever, and engineers, mathematicians and physical scientists are now included in ecological projects. However, the individual ecologist, with intuitive knowledge of the system, still requires the means to critically examine and adjust system models. An interactive program was developed with the primary goal of allowing an ecologist with minimal experience in either mathematics or computers to develop a system model. It has also been used successfully by systems ecologists, engineers, and mathematicians. This program was written in FORTRAN for the DEC PDP-10, a remote terminal system at Oak Ridge National Laboratory. However, with relatively minor modifications, it can be implemented on any remote terminal system with a FORTRAN IV compiler, or equivalent. This program may be used to simulate any phenomenon which can be described as a system of ordinary differential equations. The program allows the user to interactively change system parameters and/or initial conditions, to interactively select a set of variables to be plotted, and to model discontinuities in the state variables and/or their derivatives. One of the most useful features to the non-computer specialist is the ability to interactively address the system parameters by name and to interactively adjust their values between simulations. These and other features are described in greater detail
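The same workflow is straightforward to reproduce with today's tools; the sketch below is a modern stand-in for the FORTRAN program described, using a hypothetical predator-prey model and a simple interactive parameter prompt:

```python
import numpy as np
from scipy.integrate import solve_ivp

def predator_prey(t, y, a, b, c, d):
    """Classic Lotka-Volterra system, standing in for a user-supplied model."""
    prey, pred = y
    return [a * prey - b * prey * pred,
            -c * pred + d * prey * pred]

# Interactively adjust a parameter between simulations, then re-solve.
params = {"a": 1.0, "b": 0.1, "c": 1.5, "d": 0.075}
name = input("parameter to change (a/b/c/d, blank to keep): ").strip()
if name in params:
    params[name] = float(input(f"new value for {name}: "))

sol = solve_ivp(predator_prey, (0.0, 30.0), [10.0, 5.0],
                args=tuple(params[k] for k in "abcd"))
print(sol.y[:, -1])      # state at the end of the run
```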
Integral equations with contrasting kernels
Theodore Burton
In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
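A numerical sketch contrasting the two kernels (trapezoidal discretization of the Volterra equation; the forcing $a(t)$ and the grid are illustrative choices):

```python
import numpy as np

def solve_volterra(a, C, t):
    """Solve x(t) = a(t) - int_0^t C(t,s) x(s) ds on a uniform grid,
    stepping forward in t with the trapezoidal rule."""
    n, h = len(t), t[1] - t[0]
    x = np.empty(n)
    x[0] = a(t[0])
    for i in range(1, n):
        w = np.full(i + 1, h)
        w[0] = w[-1] = h / 2.0               # trapezoid end weights
        known = np.dot(w[:-1] * C(t[i], t[:i]), x[:i])
        # The x[i] term appears on both sides; solve for it explicitly.
        x[i] = (a(t[i]) - known) / (1.0 + w[-1] * C(t[i], t[i]))
    return x

t = np.linspace(0.0, 20.0, 2001)
a = lambda s: np.sin(s)                          # bounded forcing term
C_star = lambda ti, s: np.log(np.e + (ti - s))   # slowly growing kernel
D_star = lambda ti, s: 1.0 / (1.0 + (ti - s))    # decaying kernel

x_C = solve_volterra(a, C_star, t)
x_D = solve_volterra(a, D_star, t)
```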
Quantum - statistical equation of state
Kalitkin, N.N.; Kuz'mina, L.V.
An atom model is considered which allows a uniform description of the equation of state of an equilibrium plasma in the range of densities from gas to superhigh ones, and in the temperature range from 1-5 eV to tens of keV. Quantum and exchange corrections to the Thomas-Fermi thermodynamic functions at non-zero temperatures have been calculated. The calculated values have been compared with experimental data and with calculations performed with more accurate models. The differences result from the fact that a quantum approach does not allow for shell effects. The evaluation of these differences makes it possible to indicate the limits of applicability of the Thomas-Fermi model with quantum and exchange corrections. It turns out that, while at zero temperature the model may be applied only for high compressions, at temperatures above 1 eV it describes the behaviour of plasma well in a very wide range of densities and agrees satisfactorily with experiment even for non-ideal plasma.
Lorentz-force equations as Heisenberg equations for a quantum system in the euclidean space
Rodriguez D, R.
In an earlier work, the dynamic equations for a relativistic charged particle under the action of electromagnetic fields were formulated by R. Yamaleev in terms of external as well as internal momenta. Evolution equations for the external momenta, the Lorentz-force equations, were derived from the evolution equations for the internal momenta. The observables of the external and internal momenta are related by Viète formulae for a quadratic polynomial, the characteristic polynomial of the relativistic dynamics. In this paper we show that the system of dynamic equations can be cast into the Heisenberg scheme for a four-dimensional quantum system. Within this scheme the equations in terms of internal momenta play the role of evolution equations for a state vector, whereas the external momenta obey the Heisenberg equation for an operator evolution. The solutions of the Lorentz-force equation for motion inside constant electromagnetic fields are presented via pentagonometric functions. (Author)
The multiparton distribution equations in QCD
Shelest, V.P.; Snigirev, A.M.; Zinovjev, G.M.
The equations for multiparton distribution functions of deep-inelastic lepton-hadron scattering and fragmentation functions of e+e- annihilation are obtained by using the parton interpretation of the leading-logarithm diagrams of perturbative QCD. These equations have an essentially different structure, but their solutions are the same under definite initial conditions and coincide with the jet calculus rules. The difference becomes crucial when these equations are generalized to the description of hadron jets. [ru]
Algebraic quantity equations before Fisher and Pigou
Thomas M. Humphrey
Readers of this Review are doubtless familiar with the famous equation of exchange, MV=PQ, frequently employed to analyze the price level effects of monetary shocks. One might think the algebraic formulation of the equation is an outgrowth of the 20th century tendency toward mathematical modeling and statistical testing. Indeed, textbooks typically associate the transaction velocity version of the equation with Irving Fisher and the alternative Cambridge cash balance version with A. C. Pigou.
New exact solutions of the Dirac equation
Bagrov, V.G.; Gitman, D.M.; Zadorozhnyj, V.N.; Lavrov, P.M.; Shapovalov, V.N.
The search for new exact solutions of the Dirac and Klein-Gordon equations is in progress. General properties of the solutions of the Dirac equation are considered for an electron in a purely magnetic field and in a combination of a longitudinal magnetic field with transverse electric fields. New solutions of the equations of charge motion in an electromagnetic field of axial symmetry and in a nonstationary field of a special form have been found for specific choices of potentials.
The Boltzmann equation in the difference formulation
Szoke, Abraham [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brooks III, Eugene D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
First we recall the assumptions that are needed for the validity of the Boltzmann equation and for the validity of the compressible Euler equations. We then present the difference formulation of these equations and make a connection with the time-honored Chapman-Enskog expansion. We discuss the hydrodynamic limit and calculate the thermal conductivity of a monatomic gas, using a simplified approximation for the collision term. Our formulation is more consistent and simpler than the traditional derivation.
Stationary axisymmetric Einstein-Maxwell field equations
Catenacci, R.; Diaz Alonso, J.
We show the existence of a formal identity between Einstein's and Ernst's stationary axisymmetric gravitational field equations and the Einstein-Maxwell and Ernst equations for the electrostatic and magnetostatic axisymmetric cases. Our equations are invariant under very simple internal symmetry groups, and one of them appears to be new. We also obtain a method for associating two stationary axisymmetric vacuum solutions with every known electrostatic solution.
Reaction diffusion equations with boundary degeneracy
Huashui Zhan
In this article, we consider the reaction diffusion equation $$\frac{\partial u}{\partial t} = \Delta A(u), \quad (x,t)\in \Omega \times (0,T),$$ with the homogeneous boundary condition. Inspired by the Fichera-Oleinik theory, if the equation is not only strongly degenerate in the interior of $\Omega$ but also degenerate on the boundary, we show that the solution of the equation is free from any limitation of the boundary condition.
Partial differential equations of parabolic type
Friedman, Avner
This accessible and self-contained treatment provides even readers previously unacquainted with parabolic and elliptic equations with sufficient background to understand research literature. Author Avner Friedman - Director of the Mathematical Biosciences Institute at The Ohio State University - offers a systematic and thorough approach that begins with the main facts of the general theory of second order linear parabolic equations. Subsequent chapters explore asymptotic behavior of solutions, semi-linear equations and free boundary problems, and the extension of results concerning fundamenta
Isomorphism of Intransitive Linear Lie Equations
Jose Miguel Martins Veloso
We show that a formal isomorphism of intransitive linear Lie equations along transversals to the orbits can be extended to neighborhoods of these transversals. In analytic cases, the word formal is dropped from the theorems. Also, we associate an intransitive Lie algebra with each intransitive linear Lie equation, and from the intransitive Lie algebra we recover the linear Lie equation up to formal isomorphism. The intransitive Lie algebra gives the structure functions introduced by É. Cartan.
Asymptotic problems for stochastic partial differential equations
Salins, Michael
Stochastic partial differential equations (SPDEs) can be used to model systems in a wide variety of fields including physics, chemistry, and engineering. The main SPDEs of interest in this dissertation are the semilinear stochastic wave equations which model the movement of a material with constant mass density that is exposed to both deterministic and random forcing. Cerrai and Freidlin have shown that on fixed time intervals, as the mass density of the material approaches zero, the solutions of the stochastic wave equation converge uniformly to the solutions of a stochastic heat equation, in probability. This is called the Smoluchowski-Kramers approximation. In Chapter 2, we investigate some of the multi-scale behaviors that these wave equations exhibit. In particular, we show that the Freidlin-Wentzell exit place and exit time asymptotics for the stochastic wave equation in the small noise regime can be approximated by the exit place and exit time asymptotics for the stochastic heat equation. We prove that the exit time and exit place asymptotics are characterized by quantities called quasipotentials and we prove that the quasipotentials converge. We then investigate the special case where the equation has a gradient structure and show that we can explicitly solve for the quasipotentials, and that the quasipotentials for the heat equation and wave equation are equal. In Chapter 3, we study the Smoluchowski-Kramers approximation in the case where the material is electrically charged and exposed to a magnetic field. Interestingly, if the system is frictionless, then the Smoluchowski-Kramers approximation does not hold. We prove that the Smoluchowski-Kramers approximation is valid for systems exposed to both a magnetic field and friction. Notably, we prove that the solutions to the second-order equations converge to the solutions of the first-order equation in an $L^p$ sense. This strengthens previous results where convergence was proved in probability.
Fractional hydrodynamic equations for fractal media
Tarasov, Vasily E.
We use fractional integrals to describe dynamical processes in fractal media. We consider the 'fractional' continuous medium model for fractal media and derive the fractional generalization of the equations of balance of mass density, momentum density, and internal energy. The fractional generalizations of the Navier-Stokes and Euler equations are considered. We derive the equilibrium equation for fractal media. The sound waves in the continuous medium model for fractional media are considered.
On the equation of motion in electrodynamics
Papas, C.H.
A new vector equation of motion in electrodynamics is proposed by replacing the Schott term in the Lorentz-Dirac equation by an expression depending on the electromagnetic field vectors E and B and the velocity vector V. It is argued that several conceptual difficulties in the Lorentz-Dirac equation disappear while the results remain the same, except for extremely high fields and velocities such as could be encountered in astrophysics.
Differential equations: inverse and direct problems
Contents include: Degenerate first order identification problems in Banach spaces; A nonisothermal dynamical Ginzburg-Landau model of superconductivity: existence and uniqueness theorems; Some global in time results for integrodifferential parabolic inverse problems; Fourth order ordinary differential operators with general Wentzell boundary conditions; Study of elliptic differential equations in UMD spaces; Degenerate integrodifferential equations of parabolic type; Exponential attractors for semiconductor equations; Convergence to stationary states of solutions to the semilinear equation of viscoelasticity; Asymptotic beha
Statistical Methods for Stochastic Differential Equations
Kessler, Mathieu; Sorensen, Michael
The seventh volume in the SemStat series, Statistical Methods for Stochastic Differential Equations presents current research trends and recent developments in statistical methods for stochastic differential equations. Written to be accessible to both new students and seasoned researchers, each self-contained chapter starts with introductions to the topic at hand and builds gradually towards discussing recent research. The book covers Wiener-driven equations as well as stochastic differential equations with jumps, including continuous-time ARMA processes and COGARCH processes. It presents a sp
General particle transport equation. Final report
Lafi, A.Y.; Reyes, J.N. Jr.
The general objectives of this research are as follows: (1) To develop fundamental models for fluid particle coalescence and breakage rates for incorporation into statistically based (Population Balance Approach or Monte Carlo Approach) two-phase thermal hydraulics codes. (2) To develop fundamental models for flow structure transitions based on stability theory and fluid particle interaction rates. This report details the derivation of the mass, momentum and energy conservation equations for a distribution of spherical, chemically non-reacting fluid particles of variable size and velocity. To study the effects of fluid particle interactions on interfacial transfer and flow structure requires detailed particulate flow conservation equations. The equations are derived using a particle continuity equation analogous to Boltzmann's transport equation. When coupled with the appropriate closure equations, the conservation equations can be used to model nonequilibrium, two-phase, dispersed, fluid flow behavior. Unlike the Eulerian volume and time averaged conservation equations, the statistically averaged conservation equations contain additional terms that take into account the change due to fluid particle interfacial acceleration and fluid particle dynamics. Two types of particle dynamics are considered; coalescence and breakage. Therefore, the rate of change due to particle dynamics will consider the gain and loss involved in these processes and implement phenomenological models for fluid particle breakage and coalescence
Method of controlling chaos in laser equations
Duong-van, M.
A method of controlling chaotic to laminar flows in the Lorenz equations, using fixed points dictated by minimizing the Lyapunov functional, was proposed by Singer, Wang, and Bau [Phys. Rev. Lett. 66, 1123 (1991)]. Using different fixed points, we find that the solutions in a chaotic regime can also be periodic. Since the laser equations are isomorphic to the Lorenz equations, we use this method to control chaos when the laser is operated over the pump threshold. Furthermore, by solving the laser equations with an occasional proportional feedback mechanism, we recover the essential laser controlling features experimentally discovered by Roy, Murphy, Jr., Maier, Gills, and Hunt [Phys. Rev. Lett. 68, 1259 (1992)].
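A rough sketch of the occasional proportional feedback idea, applied here to the Lorenz system itself rather than to the laser equations; the gain, sampling interval, and choice of fixed point are illustrative assumptions, not the values used in the papers cited above.

```python
# Sketch: occasional proportional feedback (OPF) on the Lorenz system.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
xf = np.sqrt(beta * (rho - 1.0))   # x-coordinate of the nontrivial fixed point C+

def lorenz(t, v):
    x, y, z = v
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

v = np.array([1.0, 1.0, 1.0])
gain, dt = 0.3, 0.05               # feedback gain and sampling interval (assumed)
for _ in range(2000):
    v = solve_ivp(lorenz, (0.0, dt), v).y[:, -1]
    v[0] += gain * (xf - v[0])     # occasional proportional kick toward C+
print("state after control:", v)
```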
Hartree-Fock density matrix equation
Cohen, L.; Frishberg, C.
An equation for the Hartree-Fock density matrix is discussed, and the possibility of solving this equation directly for the density matrix, instead of solving the Hartree-Fock equation for orbitals, is considered. Toward that end the density matrix is expanded in a finite basis to obtain the matrix representative equation. The closed shell case is considered. Two numerical schemes are developed and applied to a number of examples. One example is given where the standard orbital method does not converge while the method presented here does.
Chew-Low equations as Cremona transformations
Rerikh, K.V.
The Chew-Low equations for p-wave pion-nucleon scattering with the 3x3 crossing-symmetry matrix are investigated in their well-known formulation as a system of nonlinear difference equations. These equations, interpreted as geometrical transformations, are shown to be a special case of Cremona transformations. Using the properties of Cremona transformations we obtain a general 3-parametric functional equation on invariant algebraic and nonalgebraic curves in the solution space of the Chew-Low equations. It is proved that there exists only one invariant algebraic curve, the parabola corresponding to the well-known solution. Analysis of the general functional equation on invariant nonalgebraic curves makes it possible to select, in addition to this parabola, 3 invariant forms defining implicitly 3 nonalgebraic curves, and to concretize for them the general equation by fixing the parameters. From the transformational properties of the invariant forms with respect to the Cremona transformations there follows an important result: the ratio of these forms in proper powers is the general integral of the nonlinear system of the Chew-Low equations, which is an even antiperiodic function. The structure of the second general integral is given and the functional equations which determine this integral are presented. [ru]
Attractors for equations of mathematical physics
Chepyzhov, Vladimir V
One of the major problems in the study of evolution equations of mathematical physics is the investigation of the behavior of the solutions to these equations when time is large or tends to infinity. The related important questions concern the stability of solutions or the character of the instability if a solution is unstable. In the last few decades, considerable progress in this area has been achieved in the study of autonomous evolution partial differential equations. For a number of basic evolution equations of mathematical physics, it was shown that the long time behavior of their soluti
Some remarks on unilateral matrix equations
Cerchiai, Bianca L.; Zumino, Bruno
We briefly review the results of our paper LBNL-46775: We study certain solutions of left-unilateral matrix equations. These are algebraic equations where the coefficients and the unknown are square matrices of the same order, or, more abstractly, elements of an associative, but possibly noncommutative algebra, and all coefficients are on the left. Recently such equations have appeared in a discussion of generalized Born-Infeld theories. In particular, two equations, their perturbative solutions and the relation between them are studied, applying a unified approach based on the generalized Bezout theorem for matrix polynomials
Lax representations for matrix short pulse equations
Popowicz, Z.
The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of the four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalization of the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to a four-parameter family of four-component SPEs. The bi-Hamiltonian structure of this generalization, for special values of the parameters, is defined. This four-component SPE in special cases reduces to the new two-component SPE.
Trajectory attractors of equations of mathematical physics
Vishik, Marko I; Chepyzhov, Vladimir V
In this survey the method of trajectory dynamical systems and trajectory attractors is described, and is applied in the study of the limiting asymptotic behaviour of solutions of non-linear evolution equations. This method is especially useful in the study of dissipative equations of mathematical physics for which the corresponding Cauchy initial-value problem has a global (weak) solution with respect to the time but the uniqueness of this solution either has not been established or does not hold. An important example of such an equation is the 3D Navier-Stokes system in a bounded domain. In such a situation one cannot use directly the classical scheme of construction of a dynamical system in the phase space of initial conditions of the Cauchy problem of a given equation and find a global attractor of this dynamical system. Nevertheless, for such equations it is possible to construct a trajectory dynamical system and investigate a trajectory attractor of the corresponding translation semigroup. This universal method is applied for various types of equations arising in mathematical physics: for general dissipative reaction-diffusion systems, for the 3D Navier-Stokes system, for dissipative wave equations, for non-linear elliptic equations in cylindrical domains, and for other equations and systems. Special attention is given to using the method of trajectory attractors in approximation and perturbation problems arising in complicated models of mathematical physics. Bibliography: 96 titles.
FDTD for Hydrodynamic Electron Fluid Maxwell Equations
Yingxue Zhao
In this work, we develop a numerical method for solving the three-dimensional hydrodynamic electron fluid Maxwell equations that describe the electron gas dynamics driven by an external electromagnetic wave excitation. Our numerical approach is based on the Finite-Difference Time-Domain (FDTD) method for solving Maxwell's equations and an explicit central finite difference method for solving the hydrodynamic electron fluid equations containing both electron density and current equations. Numerical results show good agreement with experiments studying second-harmonic generation (SHG) from a metallic split-ring resonator (SRR).
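The abstract couples an FDTD Maxwell solver to a hydrodynamic electron fluid; the bare FDTD core alone, reduced to one dimension in vacuum and with the fluid coupling omitted, can be sketched as follows (grid size, Courant number, and source are assumptions).

```python
# Bare-bones 1D FDTD (Yee) update for Maxwell's equations in vacuum;
# the hydrodynamic electron-fluid coupling described above is omitted.
import numpy as np

nx, nt = 200, 500
ez = np.zeros(nx)        # electric field at integer grid points
hy = np.zeros(nx - 1)    # magnetic field, staggered half a cell
c = 0.5                  # Courant number c0*dt/dx (assumed)

for n in range(nt):
    hy += c * np.diff(ez)                           # update H from curl E
    ez[1:-1] += c * np.diff(hy)                     # update E from curl H
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```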
Optimal Control for Stochastic Delay Evolution Equations
Meng, Qingxin, E-mail: [email protected] [Huzhou University, Department of Mathematical Sciences (China); Shen, Yang, E-mail: [email protected] [York University, Department of Mathematics and Statistics (Canada)
In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin's maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.
On implicit abstract neutral nonlinear differential equations
Hernández, Eduardo, E-mail: [email protected] [Universidade de São Paulo, Departamento de Computação e Matemática, Faculdade de Filosofia Ciências e Letras de Ribeirão Preto (Brazil); O'Regan, Donal, E-mail: [email protected] [National University of Ireland, School of Mathematics, Statistics and Applied Mathematics (Ireland)
In this paper we continue our developments in Hernández and O'Regan (J Funct Anal 261:3457–3481, 2011) on the existence of solutions for abstract neutral differential equations. In particular we extend the results in Hernández and O'Regan (J Funct Anal 261:3457–3481, 2011) to the case of implicit nonlinear neutral equations and we focus on applications to partial "nonlinear" neutral differential equations. Some applications involving partial neutral differential equations are presented.
Kinetic Boltzmann, Vlasov and Related Equations
Sinitsyn, Alexander; Vedenyapin, Victor
Boltzmann and Vlasov equations played a great role in the past and still play an important role in modern natural sciences, technology, and even the philosophy of science. The classical Boltzmann equation, derived in 1872, became a cornerstone for the molecular-kinetic theory, the second law of thermodynamics (increasing entropy), and the derivation of the basic hydrodynamic equations. After modifications, the fields and numbers of its applications have increased to include diluted gas, radiation, neutral particle transportation, atmospheric optics, and nuclear reactor modelling. The Vlasov equation was obtained in
Exponentially Convergent Algorithms for Abstract Differential Equations
Gavrilyuk, Ivan; Vasylyk, Vitalii
This book presents new accurate and efficient exponentially convergent methods for abstract differential equations with unbounded operator coefficients in Banach space. These methods are highly relevant for practical scientific computing since the equations under consideration can be seen as the meta-models of systems of ordinary differential equations (ODEs) as well as the partial differential equations (PDEs) describing various applied problems. The framework of functional analysis allows one to obtain very general but at the same time transparent algorithms and mathematical results which
Multidimensional singular integrals and integral equations
Mikhlin, Solomon Grigorievich; Stark, M; Ulam, S
Multidimensional Singular Integrals and Integral Equations presents the results of the theory of multidimensional singular integrals and of equations containing such integrals. Emphasis is on singular integrals taken over Euclidean space or in the closed manifold of Liapounov and equations containing such integrals. This volume is comprised of eight chapters and begins with an overview of some theorems on linear equations in Banach spaces, followed by a discussion on the simplest properties of multidimensional singular integrals. Subsequent chapters deal with compounding of singular integrals
Geometrical and Graphical Solutions of Quadratic Equations.
Hornsby, E. John, Jr.
Presented are several geometrical and graphical methods of solving quadratic equations. Discussed are Greek origins, Carlyle's method, von Staudt's method, fixed graph methods and imaginary solutions. (CW)
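Carlyle's method, mentioned above, reads the roots of x^2 - sx + p = 0 off the x-axis intersections of the circle whose diameter joins (0, 1) to (s, p); a small numeric check of that construction:

```python
# Carlyle's method: roots of x^2 - s*x + p = 0 are the x-axis intersections
# of the circle whose diameter runs from (0, 1) to (s, p).
import math

def carlyle_roots(s, p):
    cx, cy = s / 2.0, (1.0 + p) / 2.0               # circle center
    r2 = (s / 2.0) ** 2 + ((1.0 - p) / 2.0) ** 2    # squared radius
    disc = r2 - cy ** 2                             # intersect with y = 0
    if disc < 0:
        return None                                 # complex roots: circle misses the axis
    d = math.sqrt(disc)
    return cx - d, cx + d

print(carlyle_roots(5, 6))  # x^2 - 5x + 6 = 0 -> (2.0, 3.0)
```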
Numerical Methods for Partial Differential Equations
Guo, Ben-yu
These Proceedings of the first Chinese Conference on Numerical Methods for Partial Differential Equations cover topics such as difference methods, finite element methods, spectral methods, splitting methods, parallel algorithms, etc., their theoretical foundation and applications to engineering. Numerical methods both for boundary value problems of elliptic equations and for initial-boundary value problems of evolution equations, such as hyperbolic systems and parabolic equations, are involved. The 16 papers of this volume present recent or new unpublished results and provide a good overview of current research being done in this field in China.
On the Existence and the Applications of Modified Equations for Stochastic Differential Equations
Zygalakis, K. C.
In this paper we describe a general framework for deriving modified equations for stochastic differential equations (SDEs) with respect to weak convergence. Modified equations are derived for a variety of numerical methods, such as the Euler or the Milstein method. Existence of higher order modified equations is also discussed. In the case of linear SDEs, using the Gaussianity of the underlying solutions, we derive an SDE which the numerical method solves exactly in the weak sense. Applications of modified equations in the numerical study of Langevin equations is also discussed. © 2011 Society for Industrial and Applied Mathematics.
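As a hedged illustration of the weak-convergence setting discussed above, the first-order weak error of the Euler (Euler-Maruyama) method can be observed on a linear test SDE; the test equation, parameters, and step sizes below are assumptions, not taken from the paper.

```python
# Weak-convergence check for the Euler-Maruyama method on the linear test
# SDE dX = a X dt + b X dW (geometric Brownian motion), whose exact mean is
# E[X_T] = X0 exp(a T).
import numpy as np

rng = np.random.default_rng(0)
a, b, x0, T = -1.0, 0.5, 1.0, 1.0

def em_mean(n_steps, n_paths=200_000):
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x += a * x * dt + b * x * rng.normal(0.0, np.sqrt(dt), n_paths)
    return x.mean()

exact = x0 * np.exp(a * T)
for n in (10, 20, 40):
    print(n, abs(em_mean(n) - exact))   # weak error shrinks roughly like O(dt)
```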
On a functional equation related to the intermediate long wave equation
Hone, A N W; Novikov, V S
We resolve an open problem stated by Ablowitz et al (1982 J. Phys. A: Math. Gen. 15 781) concerning the integral operator appearing in the intermediate long wave equation. We explain how this is resolved using the perturbative symmetry approach introduced by one of us with Mikhailov. By solving a certain functional equation, we prove that the intermediate long wave equation and the Benjamin-Ono equation are the unique integrable cases within a particular class of integro-differential equations. Furthermore, we explain how the perturbative symmetry approach is naturally extended to treat equations on a periodic domain. (letter to the editor)
An Auxiliary Equation for the Bellman Equation in a One-Dimensional Ergodic Control
Fujita, Y.
In this paper we consider the Bellman equation in a one-dimensional ergodic control. Our aim is to show the existence and the uniqueness of its solution under general assumptions. For this purpose we introduce an auxiliary equation whose solution gives the invariant measure of the diffusion corresponding to an optimal control. Using this solution, we construct a solution to the Bellman equation. Our method of using this auxiliary equation has two advantages in the one-dimensional case. First, we can solve the Bellman equation under general assumptions. Second, this auxiliary equation gives an optimal Markov control explicitly in many examples
A Priori Regularity of Parabolic Partial Differential Equations
Berkemeier, Francisco
In this thesis, we consider parabolic partial differential equations such as the heat equation, the Fokker-Planck equation, and the porous media equation. Our aim is to develop methods that provide a priori estimates for solutions with singular
Singularly perturbed Burger-Huxley equation: Analytical solution ...
… solutions of singularly perturbed nonlinear differential equations … for solving the generalized Burgers-Huxley equation, but this equation is not singularly … Solitary waves solutions of the generalized Burgers-Huxley equations, Journal of …
Minimal solution for inconsistent singular fuzzy matrix equations
M. Nikuie
The fuzzy matrix equation $A\tilde{X}=\tilde{Y}$ is called a singular fuzzy matrix equation when the coefficient matrix of its equivalent crisp matrix equation is a singular matrix. Singular fuzzy matrix equations are divided into two classes: consistent singular matrix equations and inconsistent fuzzy matrix equations. In this paper, inconsistent singular fuzzy matrix equations are studied and the effect of generalized inverses in finding the minimal solution of an inconsistent singular fuzzy matrix equation is investigated.
The Dirac equation in classical statistical mechanics
The Dirac equation, usually obtained by 'quantizing' a classical stochastic model is here obtained directly within classical statistical mechanics. The special underlying space-time geometry of the random walk replaces the missing analytic continuation, making the model 'self-quantizing'. This provides a new context for the Dirac equation, distinct from its usual context in relativistic quantum mechanics
Kinetic equation of heterogeneous catalytic isotope exchange
Trokhimets, A I [AN Belorusskoj SSR, Minsk. Inst. Fiziko-Organicheskoj Khimii]
A kinetic equation is derived for the bimolecular isotope exchange reaction between $AX_n^*$ and $BX_m^{\circ}$, all atoms of element X in each molecule being equivalent. The equation can be generalized for homogeneous and heterogeneous catalytic isotope exchange.
Nonlinear scalar field equations. Pt. 1
Berestycki, H.; Lions, P.L.
This paper as well as a subsequent one is concerned with the existence of nontrivial solutions for some semi-linear elliptic equations in $\mathbb{R}^N$. Such problems are motivated in particular by the search for certain kinds of solitary waves (stationary states) in nonlinear equations of the Klein-Gordon or Schroedinger type. (orig./HSI)
Some aspects of equations of state
Frisch, H.L.
Some elementary properties of the equation of state of molecules repelling each other as point centers of force are developed briefly. An inequality for the Lennard-Jones gas is presented. The scaled particle theory equation of state of hard spheres is also reviewed briefly. Means of possibly applying these concepts to represent thermodynamic data on model detonating gases are suggested.
Distributed Approximating Functional Approach to Burgers' Equation ...
This equation is similar to, but simpler than, the Navier-Stokes equation in fluid dynamics. To verify this advantage through some comparison studies, an exact series solution is also obtained. In addition, the presented scheme has numerically stable behavior. After demonstrating the convergence and accuracy of the ...
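For orientation only, a plain explicit finite-difference treatment of the viscous Burgers equation u_t + u u_x = nu u_xx looks as follows; this is a generic reference scheme, not the distributed approximating functional approach of the abstract, and the grid and viscosity are assumptions.

```python
# Plain explicit finite-difference solver for the viscous Burgers equation
# u_t + u*u_x = nu*u_xx on a periodic domain.
import numpy as np

nx, nu = 201, 0.05
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / nu            # conservative explicit time step
u = np.sin(x)                    # smooth initial profile

for _ in range(2000):
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)        # central u_x
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # central u_xx
    u = u + dt * (nu * uxx - u * ux)
print("max |u| after integration:", np.abs(u).max())
```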
Special solutions of neutral functional differential equations
Győri István
For a system of nonlinear neutral functional differential equations we prove the existence of an n-parameter family of "special solutions" which characterize the asymptotic behavior of all solutions at infinity. For retarded functional differential equations the special solutions used in this paper were introduced by Ryabov.
Subordination principle for fractional evolution equations
Bazhlekova, E.G.
The abstract Cauchy problem for the fractional evolution equation $D^\alpha u = Au$, $\alpha > 0$, (1) where $A$ is a closed densely defined operator in a Banach space, is investigated. The subordination principle, presented earlier in [J. Prüss, Evolutionary Integral Equations and Applications. Birkhäuser,
A Local Net Volume Equation for Iowa
Jerold T. Hahn
As a part of the 1974 Forest Survey of Iowa, the Station's Forest Resources Evaluation Research Staff developed a merchantable tree volume equation and tables of coefficients for Iowa. They were developed for both board-foot (International 1/4-inch rule) and cubic-foot volumes, for several species and species groups of growing-stock trees. The equation and...
Multicomponent equations of state for electrolytes
Lin, Yi; Thomsen, Kaj; Hemptinne, Jean-Charles de
The parameters in the equations of state were fitted to experimental data consisting of apparent molar volumes, osmotic coefficients, mean ionic activity coefficients, and solid-liquid equilibrium data. The results of the parameter fitting are presented. The ability of the equations of state to reproduce...
New exact solutions for two nonlinear equations
Wang Quandi; Tang Minying
In this Letter, we investigate two nonlinear equations given by $u_t - u_{xxt} + 3u^2u_x = 2u_xu_{xx} + uu_{xxx}$ and $u_t - u_{xxt} + 4u^2u_x = 3u_xu_{xx} + uu_{xxx}$. Through some special phase orbits we obtain four new exact solutions for each equation above. Some previous results are extended.
Modeling animal movements using stochastic differential equations
Haiganoush K. Preisler; Alan A. Ager; Bruce K. Johnson; John G. Kie
We describe the use of bivariate stochastic differential equations (SDE) for modeling movements of 216 radiocollared female Rocky Mountain elk at the Starkey Experimental Forest and Range in northeastern Oregon. Spatially and temporally explicit vector fields were estimated using approximating difference equations and nonparametric regression techniques. Estimated...
Scattering integral equations and four nucleon problem
Narodetskii, I.M.
Existing results from the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The first numerical calculations of the four-body integral equations were done ten years ago. Yet, it is still widely believed that these equations are too complicated to solve numerically. The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate its usefulness in physical applications. The presentation is based on the quasiparticle approach. This permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt method of the Fredholm integral equation theory. The first part of this review contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the 2-particle amplitudes and to the kernel of the four-body equations. The second part contains the discussion of the four-body quasiparticle equations and of the results obtained for bound states and scattering.
Coupling Integrable Couplings of an Equation Hierarchy
Wang Hui; Xia Tie-Cheng
Based on a kind of Lie algebra G proposed by Zhang, an isospectral problem is designed. Under the framework of the zero curvature equation, a new kind of integrable coupling of an equation hierarchy is generated using the methods proposed by Ma and Gao. With the help of the variational identity, we get the Hamiltonian structure of the hierarchy. (general)
Selected papers on analysis and differential equations
Society, American Mathematical
This volume contains translations of papers that originally appeared in the Japanese journal Sūgaku. These papers range over a variety of topics in ordinary and partial differential equations, and in analysis. Many of them are survey papers presenting new results obtained in the last few years. This volume is suitable for graduate students and research mathematicians interested in analysis and differential equations.
Discrete Riccati equation solutions: Distributed algorithms
D. G. Lainiotis
In this paper new distributed algorithms for the solution of the discrete Riccati equation are introduced. The algorithms are used to provide robust and computationally efficient solutions to the discrete Riccati equation. The proposed distributed algorithms are theoretically interesting and computationally attractive.
Semigroup methods for evolution equations on networks
Mugnolo, Delio
This concise text is based on a series of lectures held only a few years ago and originally intended as an introduction to known results on linear hyperbolic and parabolic equations. Yet the topic of differential equations on graphs, ramified spaces, and more general network-like objects has recently gained significant momentum and, well beyond the confines of mathematics, there is a lively interdisciplinary discourse on all aspects of so-called complex networks. Such network-like structures can be found in virtually all branches of science, engineering and the humanities, and future research thus calls for solid theoretical foundations. This book is specifically devoted to the study of evolution equations – i.e., of time-dependent differential equations such as the heat equation, the wave equation, or the Schrödinger equation (quantum graphs) – bearing in mind that the majority of the literature in the last ten years on the subject of differential equations of graphs has been devoted to ellip...
Local p-Adic Differential Equations
Put, Marius van der; Taelman, Lenny
This paper studies divergence in solutions of p-adic linear local differential equations. Such divergence is related to the notion of p-adic Liouville numbers. Also, the influence of the divergence on the differential Galois groups of such differential equations is explored. A complete result is
Solving Differential Equations Using Modified Picard Iteration
Robin, W. A.
Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
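Classical Picard iteration, the starting point that the article modifies, is easy to demonstrate symbolically; the test problem y' = y, y(0) = 1 below is an illustrative choice whose iterates are the Taylor polynomials of exp(t).

```python
# Classical Picard iteration for y' = f(t, y), y(0) = y0, demonstrated
# symbolically on y' = y, y(0) = 1.
import sympy as sp

t, s = sp.symbols("t s")
y0 = sp.Integer(1)
f = lambda s, y: y                 # right-hand side of y' = y

y = y0
for _ in range(5):
    y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, 0, t))  # Picard map
print(sp.expand(y))  # 1 + t + t**2/2 + t**3/6 + t**4/24 + t**5/120
```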
Quantum derivatives and the Schroedinger equation
Ben Adda, Faycal; Cresson, Jacky
We define a scale derivative for non-differentiable functions. It is constructed via quantum derivatives which take into account non-differentiability and the existence of a minimal resolution for mean representation. This justifies heuristic computations made by Nottale in scale-relativity. In particular, the Schroedinger equation is derived via the scale-relativity principle and Newton's fundamental equation of dynamics.
On the Mo-Papas equation
Aguirregabiria, J. M.; Chamorro, A.; Valle, M. A.
A new heuristic derivation of the Mo-Papas equation for charged particles is given. It is shown that this equation cannot be derived for a point particle by closely following Dirac's classical treatment of the problem. The Mo-Papas theory and the Bonnor-Rowe-Marx variable mass dynamics are not compatible.
xRage Equation of State
Grove, John W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature based table look up. This document will describe the various models supported by these codes and provide details on the algorithms used to evaluate the equation of state.
The circle equation over finite fields
Aabrandt, Andreas; Hansen, Vagn Lundsgaard
Interesting patterns in the geometry of a plane algebraic curve C can be observed when the defining polynomial equation is solved over the family of finite fields. In this paper, we examine the case of C the classical unit circle defined by the circle equation x2 + y2 = 1. As a main result, we es...
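The point counts behind such patterns are easy to reproduce: for an odd prime p, the circle x^2 + y^2 = 1 over F_p has p - 1 points when p ≡ 1 (mod 4) and p + 1 points when p ≡ 3 (mod 4). A brute-force check:

```python
# Count points on the circle x^2 + y^2 = 1 over F_p. For an odd prime p the
# classical count is p - 1 if p % 4 == 1 and p + 1 if p % 4 == 3.
def circle_points(p):
    return sum((x * x + y * y) % p == 1 for x in range(p) for y in range(p))

for p in (5, 7, 11, 13):
    print(p, circle_points(p))  # -> 4, 8, 12, 12
```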
Global existence proof for relativistic Boltzmann equation
Dudynski, M.; Ekiel-Jezewska, M.L.
The existence and causality of solutions to the relativistic Boltzmann equation in $L^1$ and in $L^1_{loc}$ are proved. The solutions are shown to satisfy physically natural a priori bounds, time-independent in $L^1$. The results rely upon new techniques developed for the nonrelativistic Boltzmann equation by DiPerna and Lions.
Entropy viscosity method applied to Euler equations
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it has the ability to efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
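The core of the method, dissipation modulated by an entropy residual, can be sketched on the 1-D Burgers equation with entropy E = u^2/2 and entropy flux u^3/3; the constants c_e and c_max and the simple periodic discretization below are assumptions, not the MOOSE setup of the abstract.

```python
# Sketch of the entropy viscosity idea on 1-D Burgers u_t + (u^2/2)_x = 0.
import numpy as np

nx = 400
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u = np.where(x < 0.5, 1.0, 0.0)      # Riemann data (moving shock)
u_old = u.copy()
c_e, c_max = 1.0, 0.5                # tuning constants (assumed)
dt = 0.25 * dx

def dxc(a):                          # central difference, periodic
    return (np.roll(a, -1) - np.roll(a, 1)) / (2.0 * dx)

for _ in range(600):
    E = 0.5 * u**2                                       # entropy
    resid = (E - 0.5 * u_old**2) / dt + dxc(u**3 / 3.0)  # entropy residual
    norm = max(np.abs(E - E.mean()).max(), 1e-12)
    nu = np.minimum(c_max * dx * np.abs(u),              # first-order cap
                    c_e * dx**2 * np.abs(resid) / norm)  # residual-based viscosity
    u_old = u.copy()
    u = u - dt * dxc(E) + dt * dxc(nu * dxc(u))          # central flux + dissipation
print("u range after integration:", u.min(), u.max())
```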
On Fractional Order Hybrid Differential Equations
Mohamed A. E. Herzallah
We develop the theory of fractional hybrid differential equations with linear and nonlinear perturbations involving the Caputo fractional derivative of order $0<\alpha<1$. Using some fixed point theorems we prove the existence of mild solutions for two types of hybrid equations. Examples are given to illustrate the obtained results.
Oscillation results for certain fractional difference equations
Zhiyun WANG
Fractional calculus is a theory that studies the properties and applications of arbitrary-order differentiation and integration. It can describe the physical properties of some systems more accurately and better adapt to changes in the system, playing an important role in many fields; for example, it can describe the process of tumor growth (growth stimulation and growth inhibition) in biomedical science. The oscillation of solutions of two kinds of fractional difference equations is studied, mainly by proof by contradiction, that is, by assuming the equation has a nonoscillatory solution. For the first kind of equation, the sign of the function is first determined, and by constructing a Riccati-type function the difference is calculated; the conditions imposed on the function then yield a contradiction, so the assumption is false, which establishes the oscillation of the solutions. For the second kind of equation, with an initial condition, the equivalent fractional-sum form of the fractional difference equation is first proved. Considering $0<\alpha\le 1$ and $\alpha>1$ respectively, and using the properties of the Stirling formula and the factorial function, a contradiction is obtained, so the assumption does not hold, and a sufficient condition for the boundedness of solutions of the fractional difference equation is obtained. The above results optimize the relevant conclusions and enrich the related results. The results are applied to specific equations, and the oscillation of the solutions of these equations is proved.
Structural Equation Modeling of Multivariate Time Series
du Toit, Stephen H. C.; Browne, Michael W.
The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
P-adic Schroedinger type equation
Vladimirov, V.S.; Volovich, I.V.
In p-adic quantum mechanics a Schroedinger type equation is considered. We discuss the appropriate notion of differential operators. A solution of the Schroedinger type equation is given. A new set of vacuum states for the p-adic quantum harmonic oscillator is presented. The correspondence principle with the standard quantum mechanics is discussed. (orig.)
Oscillation theory of linear differential equations
Došlý, Ondřej
Roč. 36, č. 5 (2000), s. 329-343. ISSN 0044-8753. R&D Projects: GA ČR GA201/98/0677. Keywords: discrete oscillation theory; Sturm-Liouville equation; Riccati equation. Subject RIV: BA - General Mathematics
Dual exponential polynomials and linear differential equations
Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne
We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.
Regression Equations for Birth Weight Estimation using ...
In this study, Birth Weight has been estimated from anthropometric measurements of hand and foot. Linear regression equations were formed from each of the measured variables. These simple equations can be used to estimate Birth Weight of new born babies, in order to identify those with low birth weight and referred to ...
Relativistic wave equations and Compton scattering
Sutanto, S.H.; Robson, B.A.
Full text: Recently an eight-component relativistic wave equation for spin-1/2 particles was proposed. This equation was obtained from a four-component spin-1/2 wave equation (the KG1/2 equation), which contains second-order derivatives in both space and time, by a procedure involving a linearisation of the time derivative analogous to that introduced by Feshbach and Villars for the Klein-Gordon equation. This new eight-component equation gives the same bound-state energy eigenvalue spectra for hydrogenic atoms as the Dirac equation but has been shown to predict different radiative transition probabilities for the fine structure of both the Balmer and Lyman α-lines. Since it has been shown that the new theory does not always give the same results as the Dirac theory, it is important to consider the validity of the new equation in the case of other physical problems. One of the early crucial tests of the Dirac theory was its application to the scattering of a photon by a free electron: the so-called Compton scattering problem. In this paper we apply the new theory to the calculation of Compton scattering to order $e^2$. It will be shown that in spite of the considerable difference in the structure of the new theory and that of Dirac the cross section is given by the Klein-Nishina formula.
Constitutive equations for two-phase flows
Boure, J.A.
The mathematical model of a system of fluids consists of several kinds of equations complemented by boundary and initial conditions. The first kind of equations results from the application to the system of the fundamental conservation laws (mass, momentum, energy). The second kind of equations characterizes the fluid itself, i.e. its intrinsic properties and in particular its mechanical and thermodynamical behavior. They are the mathematical model of the particular fluid under consideration; the laws they express are called the constitutive equations of the fluid. In practice the constitutive equations cannot be fully stated without reference to the conservation laws. Two classes of models have been distinguished: mixture models and two-fluid models. In mixture models, the mixture is considered as a single fluid. Besides the usual friction factor and heat transfer correlations, a single constitutive law is necessary. In diffusion models, the mixture equation of state is replaced by the phasic equations of state and by three constitutive laws, for phase change mass transfer, drift velocity and thermal non-equilibrium respectively. In the two-fluid models, the two phases are considered separately; two phasic equations of state, two friction factor correlations, two heat transfer correlations and four constitutive laws are included. [fr]
Lie algebras and linear differential equations.
Brockett, R. W.; Rahimi, A.
Certain symmetry properties possessed by the solutions of linear differential equations are examined. For this purpose, some basic ideas from the theory of finite dimensional linear systems are used together with the work of Wei and Norman on the use of Lie algebraic methods in differential equation theory.
Singularities in the nonisotropic Boltzmann equation
Garibotti, C.R.; Martiarena, M.L.; Zanette, D.
We consider solutions of the nonlinear Boltzmann equation (NLBE) with anisotropic singular initial conditions, which give a simplified model for the penetration of a monochromatic beam into a rarefied target. The NLBE is transformed into an integral equation which is solved iteratively, and the evolution of the initial singularities is discussed. (author). 5 refs
Using fundamental equations to describe basic phenomena
Jakobsen, Arne; Rasmussen, Bjarne D.
When the fundamental thermodynamic balance equations (mass, energy, and momentum) are used to describe the processes in a simple refrigeration system, then one finds that the resulting equation system will have a degree of freedom equal to one. Further investigations reveal that it is the equatio...
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
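Silverman's rule of thumb, the basis of the selection method proposed above, sets h = 0.9 min(σ, IQR/1.34) n^{-1/5}; a sketch on simulated scores (the data and their scale are assumptions, not taken from the paper):

```python
# Silverman's rule-of-thumb bandwidth, applied to a generic score sample.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(50.0, 10.0, size=1000)   # stand-in for observed test scores

def silverman_bandwidth(x):
    n = len(x)
    sigma = x.std(ddof=1)                            # sample standard deviation
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # interquartile range
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-1.0 / 5.0)

print(silverman_bandwidth(scores))
```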
Fermat type differential and difference equations
Kai Liu
In this article we explore the relationship between the number of differential and difference operators and the existence of meromorphic solutions of Fermat type differential and difference equations. Some Fermat differential and difference equations of certain types are also considered.
How Should Equation Balancing Be Taught?
Porter, Spencer K.
Matrix methods and oxidation-number methods are currently advocated and used for balancing equations. This article shows how balancing equations can be introduced by a third method which is related to a fundamental principle, is easy to learn, and is powerful in its application. (JN)
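The matrix method referred to above reduces balancing to finding an integer nullspace vector of the element-composition matrix; a sketch for the illustrative reaction CH4 + O2 -> CO2 + H2O:

```python
# Matrix method for balancing a chemical equation: the coefficient vector
# spans the nullspace of the element-composition matrix.
import sympy as sp
from functools import reduce

# Rows: C, H, O; columns: CH4, O2, CO2, H2O (products carry minus signs).
A = sp.Matrix([
    [1, 0, -1, 0],   # carbon balance
    [4, 0, 0, -2],   # hydrogen balance
    [0, 2, -2, -1],  # oxygen balance
])
v = A.nullspace()[0]                           # one-dimensional nullspace
den = reduce(sp.ilcm, [term.q for term in v])  # clear denominators
ints = [int(term * den) for term in v]
g = reduce(sp.igcd, ints)
print([i // g for i in ints])                  # [1, 2, 1, 2]
```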
Student Understanding of Chemical Equation Balancing.
Yarroch, W. L.
Results of interviews with high school chemistry students (N=14) during equation-solving sessions indicate that those who were able to construct diagrams consistent with notation of their balanced equation possessed good concepts of subscript and the balancing rule. Implications for chemistry teaching are discussed. (DH)
Construction of Chained True Score Equipercentile Equatings under the Kernel Equating (KE) Framework and Their Relationship to Levine True Score Equating. Research Report. ETS RR-09-24
In this paper, we develop a new chained equipercentile equating procedure for the nonequivalent groups with anchor test (NEAT) design under the assumptions of the classical test theory model. This new equating is named chained true score equipercentile equating. We also apply the kernel equating framework to this equating design, resulting in a…
Equations for the stochastic cumulative multiplying chain
Lewins, J D [Cambridge Univ. (UK). Dept. of Engineering]
The forward and backward equations for the conditional probability of the neutron multiplying chain are derived in a new generalization accounting for the chain length and admitting time-dependent properties. These Kolmogorov equations form the basis of a variational and hence complete description of the 'lumped' multiplying system. The equations reduce to the marginal distribution, summed over all chain lengths, and to the simpler equations previously derived for that problem. The method of derivation, direct and in the probability space with the minimum of mathematical manipulations, is perhaps the chief attraction; the equations are also displayed in conventional generating function form. As such, they appear to apply to a number of problems in areas of social anthropology, polymer chemistry, genetics and cell biology as well as neutron reactor theory and radiation damage. (author)
Integrable peakon equations with cubic nonlinearity
Hone, Andrew N W; Wang, J P
We present a new integrable partial differential equation found by Vladimir Novikov. Like the Camassa-Holm and Degasperis-Procesi equations, this new equation admits peaked soliton (peakon) solutions, but it has nonlinear terms that are cubic, rather than quadratic. We give a matrix Lax pair for V Novikov's equation, and show how it is related by a reciprocal transformation to a negative flow in the Sawada-Kotera hierarchy. Infinitely many conserved quantities are found, as well as a bi-Hamiltonian structure. The latter is used to obtain the Hamiltonian form of the finite-dimensional system for the interaction of N peakons, and the two-body dynamics (N = 2) is explicitly integrated. Finally, all of this is compared with some analogous results for another cubic peakon equation derived by Zhijun Qiao. (fast track communication)
Ordinary differential equation for local accumulation time.
Berezhkovskii, Alexander M
Cell differentiation in a developing tissue is controlled by the concentration fields of signaling molecules called morphogens. Formation of these concentration fields can be described by the reaction-diffusion mechanism in which locally produced molecules diffuse through the patterned tissue and are degraded. The formation kinetics at a given point of the patterned tissue can be characterized by the local accumulation time, defined in terms of the local relaxation function. Here, we show that this time satisfies an ordinary differential equation. Using this equation one can straightforwardly determine the local accumulation time, i.e., without preliminary calculation of the relaxation function by solving the partial differential equation, as was done in previous studies. We derive this ordinary differential equation together with the accompanying boundary conditions and demonstrate that the earlier obtained results for the local accumulation time can be recovered by solving this equation. © 2011 American Institute of Physics
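The local accumulation time in question is defined from the local relaxation function as tau(x) = ∫_0^∞ [1 - c(x,t)/c_s(x)] dt, where c_s is the steady-state profile. The brute-force route below, for an assumed 1-D diffusion-degradation model with a source at the origin, is exactly the preliminary computation that the ordinary differential equation of the paper avoids; geometry and parameter values are illustrative assumptions.

```python
# Brute-force estimate of the local accumulation time for a 1-D
# diffusion-degradation model: source at x = 0, reflecting far end.
import numpy as np

L, nx, D, k = 1.0, 51, 1.0, 10.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / D

def step(c):
    new = c.copy()
    new[1:-1] += dt * (D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2 - k * c[1:-1])
    new[0] += dt * (2 * D * (c[1] - c[0]) / dx**2 + 2.0 / dx - k * c[0])  # unit influx
    new[-1] += dt * (2 * D * (c[-2] - c[-1]) / dx**2 - k * c[-1])         # reflecting
    return new

cs = np.zeros(nx)                 # relax to the steady-state profile c_s(x)
for _ in range(50_000):
    cs = step(cs)

c, tau = np.zeros(nx), np.zeros(nx)
for _ in range(50_000):
    tau += (1.0 - c / cs) * dt    # tau(x) = integral of [1 - c(x,t)/c_s(x)] dt
    c = step(c)
print(tau[::10])                  # accumulation time grows away from the source
```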
Computing with linear equations and matrices
Churchhouse, R.F.
Systems of linear equations and matrices arise in many disciplines. The equations may accurately represent conditions satisfied by a system or, more likely, provide an approximation to a more complex system of non-linear or differential equations. The system may involve a few or many thousand unknowns and each individual equation may involve few or many of them. Over the past 50 years a vast literature on methods for solving systems of linear equations and the associated problems of finding the inverse or eigenvalues of a matrix has been produced. These lectures cover those methods which have been found to be most useful for dealing with such types of problem. References are given where appropriate and attention is drawn to the possibility of improved methods for use on vector and parallel processors. (orig.)
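Among the direct methods such lectures typically cover, Gaussian elimination with partial pivoting is the workhorse; a compact sketch (for production work one would call a library routine such as numpy.linalg.solve):

```python
# Gaussian elimination with partial pivoting, followed by back substitution.
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # partial pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                 # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))   # both -> [0.8, 1.4]
```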
Covariant Conformal Decomposition of Einstein Equations
Gourgoulhon, E.; Novak, J.
It has been shown [1,2] that the usual 3+1 form of Einstein's equations may be ill-posed. This result has been previously observed in numerical simulations [3,4]. We present a 3+1 type formalism inspired by these works to decompose Einstein's equations. This decomposition is motivated by the aim of stable numerical implementation and resolution of the equations. We introduce the conformal 3-"metric" (scaled by the determinant of the usual 3-metric) which is a tensor density of weight -2/3. The Einstein equations are then derived in terms of this "metric", of the conformal extrinsic curvature and in terms of the associated derivative. We also introduce a flat 3-metric (the asymptotic metric for isolated systems) and the associated derivative. Finally, the generalized Dirac gauge (introduced by Smarr and York [5]) is used in this formalism and some examples of formulations of Einstein's equations are shown.
Nonlinear elliptic equations of the second order
Nonlinear elliptic differential equations are a diverse subject with important applications to the physical and social sciences and engineering. They also arise naturally in geometry. In particular, much of the progress in the area in the twentieth century was driven by geometric applications, from the Bernstein problem to the existence of Kähler-Einstein metrics. This book, designed as a textbook, provides a detailed discussion of the Dirichlet problems for quasilinear and fully nonlinear elliptic differential equations of the second order with an emphasis on mean curvature equations and on Monge-Ampère equations. It gives a user-friendly introduction to the theory of nonlinear elliptic equations with special attention given to basic results and the most important techniques. Rather than presenting the topics in their full generality, the book aims at providing self-contained, clear, and "elementary" proofs for results in important special cases. This book will serve as a valuable resource for graduate students...
Introduction to differential equations with dynamical systems
Campbell, Stephen L
Many textbooks on differential equations are written to be interesting to the teacher rather than the student. Introduction to Differential Equations with Dynamical Systems is directed toward students. This concise and up-to-date textbook addresses the challenges that undergraduate mathematics, engineering, and science students experience during a first course on differential equations. And, while covering all the standard parts of the subject, the book emphasizes linear constant coefficient equations and applications, including the topics essential to engineering students. Stephen Campbell and Richard Haberman--using carefully worded derivations, elementary explanations, and examples, exercises, and figures rather than theorems and proofs--have written a book that makes learning and teaching differential equations easier and more relevant. The book also presents elementary dynamical systems in a unique and flexible way that is suitable for all courses, regardless of length.
Advanced functional evolution equations and inclusions
Benchohra, Mouffak
This book presents up-to-date results on abstract evolution equations and differential inclusions in infinite dimensional spaces. It covers equations with time delay and with impulses, and complements the existing literature in functional differential equations and inclusions. The exposition is devoted to both local and global mild solutions for some classes of functional differential evolution equations and inclusions, and other densely and non-densely defined functional differential equations and inclusions in separable Banach spaces or in Fréchet spaces. The tools used include classical fixed points theorems and the measure-of non-compactness, and each chapter concludes with a section devoted to notes and bibliographical remarks. This monograph is particularly useful for researchers and graduate students studying pure and applied mathematics, engineering, biology and all other applied sciences.
Controllability and stabilization of parabolic equations
This monograph presents controllability and stabilization methods in control theory that solve parabolic boundary value problems. Starting from foundational questions on Carleman inequalities for linear parabolic equations, the author addresses the controllability of parabolic equations on a variety of domains and the spectral decomposition technique for representing them. This method is, in fact, designed for use in a wider class of parabolic systems that include the heat and diffusion equations. Later chapters develop another process that employs stabilizing feedback controllers with a finite number of unstable modes, with special attention given to its use in the boundary stabilization of Navier–Stokes equations for the motion of viscous fluid. In turn, these applied methods are used to explore related topics like the exact controllability of stochastic parabolic equations with linear multiplicative noise. Intended for graduate students and researchers working on control problems involving nonlinear diff...
Asymptotic integration of differential and difference equations
Bodine, Sigrun
This book presents the theory of asymptotic integration for both linear differential and difference equations. This type of asymptotic analysis is based on some fundamental principles by Norman Levinson. While he applied them to a special class of differential equations, subsequent work has shown that the same principles lead to asymptotic results for much wider classes of differential and also difference equations. After discussing asymptotic integration in a unified approach, this book studies how the application of these methods provides several new insights and frequent improvements to results found in earlier literature. It then continues with a brief introduction to the relatively new field of asymptotic integration for dynamic equations on time scales. Asymptotic Integration of Differential and Difference Equations is a self-contained and clearly structured presentation of some of the most important results in asymptotic integration and the techniques used in this field. It will appeal to researchers i...
Partial Differential Equations Modeling and Numerical Simulation
Glowinski, Roland
This book is dedicated to Olivier Pironneau. For more than 250 years partial differential equations have been clearly the most important tool available to mankind in order to understand a large variety of phenomena, natural at first and then those originating from human activity and technological development. Mechanics, physics and their engineering applications were the first to benefit from the impact of partial differential equations on modeling and design, but a little less than a century ago the Schrödinger equation was the key opening the door to the application of partial differential equations to quantum chemistry, for small atomic and molecular systems at first, but then for systems of fast growing complexity. Mathematical modeling methods based on partial differential equations form an important part of contemporary science and are widely used in engineering and scientific applications. In this book several experts in this field present their latest results and discuss trends in the numerical analysis...
Integrable systems of partial differential equations determined by structure equations and Lax pair
Bracken, Paul
It is shown how a system of evolution equations can be developed both from the structure equations of a submanifold embedded in three-space as well as from a matrix SO(6) Lax pair. The two systems obtained this way correspond exactly when a constraint equation is selected and imposed on the system of equations. This allows for the possibility of selecting the coefficients in the second fundamental form in a general way.
Solving polynomial differential equations by transforming them to linear functional-differential equations
Nahay, John Michael
We present a new approach to solving polynomial ordinary differential equations by transforming them to linear functional equations and then solving the linear functional equations. We will focus most of our attention upon the first-order Abel differential equation with two nonlinear terms in order to demonstrate in as much detail as possible the computations necessary for a complete solution. We mention in our section on further developments that the basic transformation idea can be generalized...
Equations of motion derived from a generalization of Einstein's equation for the gravitational field
Mociutchi, C.
The extended Einstein equation, combined with a vectorial theory of Maxwellian type for the gravitational field, leads to: a) the equation of motion; b) the equation of the trajectory for the static case of spherical symmetry, the test particle having a rest mass other than zero; and c) the propagation of light on null geodesics. All the basic tests of the theory are accounted for by Einstein's extended equation. Thus, the new theory of gravitation suggested by us is competitive. (author)
Polygons of differential equations for finding exact solutions
Kudryashov, Nikolai A.; Demina, Maria V.
A method for finding exact solutions of nonlinear differential equations is presented. Our method is based on the application of polygons corresponding to nonlinear differential equations. It allows one to express exact solutions of the equation studied through solutions of another equation using properties of the basic equation itself. The ideas of power geometry are used and developed. Our approach has a pictorial interpretation, which is illustrative and effective. The method can also be applied for finding transformations between solutions of differential equations. To demonstrate the method application, exact solutions of several equations are found. These equations are: the Korteweg-de Vries-Burgers equation, the generalized Kuramoto-Sivashinsky equation, the fourth-order nonlinear evolution equation, the fifth-order Korteweg-de Vries equation, the fifth-order modified Korteweg-de Vries equation and the sixth-order nonlinear evolution equation describing turbulent processes. Some new exact solutions of nonlinear evolution equations are given.
Equations of motion in phase space
Broucke, R.
The article gives a general review of methods of constructing equations of motion of a classical dynamical system. The emphasis is, however, on the linear Lagrangian in phase space and the corresponding form of Pfaff's equations of motion. A detailed examination of the problem of changes of variables in phase space is first given. It is shown that the Linear Lagrangian theory falls very naturally out of the classical quadratic Lagrangian theory; we do this with the use of the well-known Lagrange multiplier method. Another important result is obtained very naturally as a by-product of this analysis. If the most general set of 2n variables (coordinates in phase space) is used, the coefficients of the equations of motion are the Poisson Brackets of these variables. This is therefore the natural way of introducing not only Poisson Brackets in Dynamics formulations but also the associated Lie Algebras and their important properties and consequences. We then give several examples to illustrate the first-order equations of motion and their simplicity in relation to general changes of variables. The first few examples are elementary (the harmonic oscillator) while the last one concerns the motion of a rigid body about a fixed point. In the next three sections we treat the first-order equations of motion as derived from a Linear differential form, sometimes called Birkhoff's equations. We insist on the generality of the equations and especially on the unity of the space-time concept: the time t and the coordinates are here completely identical variables, without any privilege given to t. We give a brief review of Cartan's 2-form and the corresponding equations of motion. As an illustration the standard equations of aircraft flight in a vertical plane are derived from Cartan's exterior differential 2-form. Finally we mention in the last section the differential forms that were proposed by Gallissot for the derivation of equations of motion.
Banking on the equator. Are banks that adopted the equator principles different from non-adopters?
Scholtens, B.; Dam, L.
We analyze the performance of banks that adopted the Equator Principles. The Equator Principles are designed to assure sustainable development in project finance. The social, ethical, and environmental policies of the adopters differ significantly from those of banks that did not adopt the Equator Principles.
Invalidity of the spectral Fokker-Planck equation for Cauchy noise driven Langevin equation
Ditlevsen, Ove Dalager
For so-called alpha-stable noise (or Lévy noise), the Fokker-Planck equation no longer exists as a partial differential equation for the probability density because the property of finite variance is lost. Instead, it has been attempted to formulate an equation for the characteristic function (the Fourier transform...
Comparing the IRT Pre-equating and Section Pre-equating: A Simulation Study.
Hwang, Chi-en; Cleary, T. Anne
The results obtained from two basic types of pre-equating of tests were compared: item response theory (IRT) pre-equating and section pre-equating (SPE). The simulated data were generated from a modified three-parameter logistic model with a constant guessing parameter. Responses of two replication samples of 3000 examinees on two 72-item…
A novel numerical flux for the 3D Euler equations with general equation of state
Toro, Eleuterio F.; Castro, Cristóbal E.; Lee, Bok Jik
This work concerns the Euler equations for ideal gases, and the extension presented in this paper is threefold: (i) we solve the three-dimensional Euler equations on general meshes; (ii) we use a general equation of state; and (iii) we achieve high order of accuracy in both space and time.
Every Equation Tells a Story: Using Equation Dictionaries in Introductory Geophysics
Caplan-Auerbach, Jacqueline
Many students view equations as a series of variables and operators into which numbers should be plugged rather than as representative of a physical process. To solve a problem they may simply look for an equation with the correct variables and assume it meets their needs, rather than selecting an equation that represents the appropriate physical…
Inverse scattering transform for the time dependent Schroedinger equation with applications to the KPI equation
Xin, Zhou [Wisconsin Univ., Madison (USA). Dept. of Mathematics]
For the direct-inverse scattering transform of the time dependent Schroedinger equation, rigorous results are obtained based on an operator-triangular-factorization approach. By viewing the equation as a first order operator equation, results similar to those for the first order n x n matrix system are obtained. The nonlocal Riemann-Hilbert problem for inverse scattering is shown to have a solution. (orig.)
Phytochemical, anti-inflammatory, anti-ulcerogenic and hypoglycemic activities of Periploca angustifolia L extracts in rats
Khaled Abo-EL-Sooud ORCID: orcid.org/0000-0001-7636-70181,
Fatma A. Ahmed2,
Sayed A. El-Toumy3,
Hanona S. Yaecob2 &
Hanan M. ELTantawy2
Clinical Phytoscience volume 4, Article number: 27 (2018)
In traditional North African medicine, decoctions of the leaves of Periploca angustifolia are used to treat diarrhea, inflammation, ulcers, edema and diabetes. The aim of this study was to evaluate the phytochemical composition and the anti-inflammatory, anti-ulcerogenic, and hypoglycemic activities of an ethanolic extract of P. angustifolia L. in rats.
An extract of air-dried powdered P. angustifolia plant was obtained using 96% ethanol. The extract was concentrated, and the total phenolic and flavonoid contents were estimated colorimetrically. The phenolic and flavonoid compounds were quantified and identified using high performance liquid chromatography (HPLC). The anti-inflammatory, anti-ulcerogenic and hypoglycemic activities of the extract were evaluated in three rat models, respectively: formaldehyde-induced paw edema, ethanol-induced gastric damage, and alloxan-induced hyperglycemia.
The total flavonoids and total phenolics constituted 3.15% and 2.69% of the extract, respectively, expressed as quercetin equivalent and as gallic acid equivalent (GAE). Coumarin, resorcinol, isorhamnetin, quercetin, and naphthalene were isolated from the ethanolic extract of P. angustifolia. Oral administration of the ethanolic extract at 500 mg/kg body weight (b.wt.) significantly reduced paw inflammation, gastric lesions, ulcer index scores and blood glucose levels in normal and diabetic rats.
The crude ethanolic extract of P. angustifolia exhibited promising anti-inflammatory, anti-ulcerogenic, and hypoglycemic activities in accordance with the plant's uses in folk medicine, suggesting that P. angustifolia may be a safe alternative to chemical drugs.
Periploca is a genus of plants from the Asclepiadaceae family in the major group of angiosperms. Several species of this family, such as P. angustifolia, are widely used in traditional medicine as anti-diabetic, anti-mutagenic, and anti-rheumatic agents [1]. In Egypt, the leaves of P. angustifolia are used to treat rheumatic diseases and the roots are used for hemorrhoids, gastric ulcer and diabetes. Its resin is used as a hypotensive [2] and, when burned, as a masticator. P. angustifolia L. was used by the Bedouins as animal food and as a herbal remedy [3]. Different plant extracts of the Asclepiadaceae family have shown significant anti-inflammatory [4] and anti-ulcerogenic properties [5]. In addition, the methanolic extract of P. angustifolia leaves has been shown to have antioxidant effects and to exert antidotal effects on cadmium-induced hepatotoxicity [6]. There is a relationship between the antioxidant capacity and the anti-hyperglycemic potential of Periploca sylvestre, which may be due to the flavonoid and phenolic contents of the plant [7]. Gymnema sylvestre (Asclepiadaceae) has been used since ancient times as a folk medicine for the treatment of diabetes and obesity and as a stomach stimulant [8]. Aerial parts of P. angustifolia L. collected from southern Tunisia possess antioxidant activity against 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS) and 2,2-diphenyl-1-picrylhydrazyl (DPPH) radicals [9]. Samples of Hemidesmus indicus var. indicus and var. pubescens collected during the flowering season possess higher antiulcer and anti-hepatocarcinogenic effects [10, 11]. The chemical composition of the root bark of P. angustifolia at the flowering stage showed the presence of C-heterosides (anthracenic derivatives), anthocyanins, saponins, free quinones and proanthocyanidols [12]. In this study, we determined the phytochemical composition, in particular the phenolic and flavonoid composition, of the ethanolic extract of P. angustifolia L., and we assessed its anti-inflammatory, anti-ulcerogenic, and hypoglycemic activities.
Plant material
P. angustifolia L. was collected from the Sallum Plateau (northwestern coast of Egypt) during 2012–2013. The plants were air dried at laboratory temperature until their weight plateaued, and then ground to a fine powder. The different parts of the plants were identified, confirmed and authenticated by comparison with an authentic specimen at the Plant Taxonomy Unit, Desert Research Center, Cairo, Egypt. The samples were extracted by percolation in 70% ethanol and filtered, and this step was repeated several times. The ethanolic extract was concentrated under reduced pressure at temperatures not exceeding 40 °C. The obtained ethanol extract of P. angustifolia L. constituted 10% of the entire dried plant and was used for subsequent investigations. The scheme of separation of flavonoids and phenolics from the whole plant of P. angustifolia L. is illustrated in Fig. 1.
Scheme of Separation of flavonoids and phenolics from the whole plant of P. angustifolia L
Diclofenac sodium (Voltarin®) was obtained from Novartis Pharma Co. (Cairo, Egypt) under license from Novartis Pharma AG, (Basle, Switzerland). Ranitidine hydrochloride tablets (Zantac® Batch No. 001716C) were manufactured by Glaxo-Wellcome Egypt (Elsalam City, Cairo, Egypt, each tablet contained 150 mg ranitidine). Glibenclamide (Daonil®) was purchased from Aventis Co., under license from Aventis Pharma Co., West Germany.
Wistar Albino rats (150–170 g) and Swiss mice (18–22 g) were obtained from the Laboratory Animal Colony, Helwan, Egypt. Animals were maintained in the Animal House of the Pharmacology Department (Faculty of Veterinary Medicine, Cairo University) under controlled conditions [temperature 23 ± 2 °C, humidity 50 ± 5% and 12-h light-dark cycles]. All animals were acclimatized for 7 days before the study. The animals were housed in sanitized polypropylene cages, containing sterile paddy husk as bedding. Animals were habituated to laboratory conditions for 48 h prior to the experimental protocol to minimize non-specific stress. All animals were fed a balanced diet of wheat bran, soybean powder, fish-meal and dietary fibers (manufactured by Cairo Agricultural Development Co.). Water was provided ad libitum. The Institutional Animal Care and Use Committee (IACUC), Cairo University approved this study.
Estimation of total flavonoids
The flavonoid content of the extract was determined spectrophotometrically according to the method described by Djeridane et al. [13], which is based on the formation of a flavonoid–aluminum complex with a maximum absorbance at 430 nm. Total flavonoids are expressed as mg quercetin equivalent.
Estimation of total phenolic content (TPC)
The Folin-Ciocalteu method [14] was used to determine the TPC spectrophotometrically in the different extracts using gallic acid as standard. The TPC was expressed as μg/mg gallic acid equivalent (GAE).
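Both colorimetric assays reduce to reading a sample absorbance against a standard curve. A minimal sketch of that conversion, with hypothetical gallic acid calibration points (the paper does not report its own curve):

```python
import numpy as np

# Hypothetical gallic acid standards: concentration (ug/ml) vs absorbance
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
absorbance = np.array([0.08, 0.19, 0.37, 0.74, 1.46])

# Linear fit A = slope*c + intercept; np.polyfit returns [slope, intercept]
slope, intercept = np.polyfit(conc, absorbance, 1)

def gae_ug_per_ml(sample_abs):
    """Convert a sample absorbance into gallic acid equivalents (GAE)."""
    return (sample_abs - intercept) / slope

print(round(gae_ug_per_ml(0.52), 1))  # concentration for a reading of 0.52
```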
Identification of phenolics and flavonoids
HPLC was used to identify phenolics and flavonoids. A known weight of air-dried plant powder was soaked in 25 ml sterilized water and agitated on a rotary shaker for 24 h at 200 rpm. The slurry was filtered through Whatman 3MM filter paper under vacuum, followed by centrifugation at 12.5 rpm for 30 min at 80 °C. The aqueous extract was acidified to pH 2.5 using diluted phosphoric acid. The sample was extracted three times in a separating funnel with an equal volume of diethyl ether. The combined diethyl ether layers were evaporated to dryness under reduced pressure at 30 °C. The resulting residue was re-dissolved in 3 ml of HPLC-grade methanol and filtered through a sterile membrane with a pore size of 0.2 μm prior to HPLC analysis [15]. Identification of individual phenolic compounds of the plant sample was performed on a Dionex (Model 3000) HPLC using a BDS Hypersil C18 reversed-phase column (250 × 4.6 mm) with 10 μm particle size. Injection was by means of a Rheodyne injection valve (Model 7125) with a 50 μl fixed loop. A constant flow rate of 1 ml/min was used with two mobile phases, distilled water (A) and acetonitrile (B), with a UV detector set at a wavelength of 254 nm. The phenolic compounds of each sample were identified by comparing their relative retention times with those of the standard mixture chromatogram. The concentration of an individual compound was calculated on the basis of peak area measurements and then converted to μg/g of dry weight. All chemicals and solvents used were of HPLC spectral grade. Standard phenolic compounds were obtained from Sigma (St. Louis, USA) and Merck (Munich, Germany).
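Quantification from peak areas, as described above, amounts to single-point external standardization; a minimal sketch, where the peak areas and standard concentration are hypothetical:

```python
def conc_from_peak_area(sample_area, standard_area, standard_conc_ug_ml):
    """External-standard quantification: the analyte concentration scales
    with its peak area relative to an authentic standard injected under
    the same HPLC conditions."""
    return sample_area / standard_area * standard_conc_ug_ml

# Hypothetical resorcinol peaks: sample vs a 50 ug/ml authentic standard
print(conc_from_peak_area(1.85e6, 2.10e6, 50.0))  # ~44 ug/ml
```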
The acute toxicity (LD50) of the ethanolic extract of P. angustifolia administered orally was estimated in mice using the method of Lorke [16]. Three groups of five animals received 10, 100, or 1000 mg/kg of the extract suspended in Tween 80 (vehicle, 3% v/v). The animals were observed for 72 h for signs of toxicity and death. When no deaths were recorded, another four groups of five mice were administered 2000, 3000, 4000 or 5000 mg/kg of the extract orally. The animals were observed for 72 h for signs of toxicity and the number of deaths was recorded. Control animals received an equivalent volume of vehicle. The LD50 value was calculated as the geometric mean of the highest non-lethal and the lowest lethal doses, according to the Kerber method [17], using the following formula:
$$ \mathrm{LD}_{50} = \mathrm{LD}_{100} - \frac{\sum (z \times d)}{m} $$
where z is half the sum of the numbers of animals that died at two successive doses, d is the interval between two successive doses, and m is the number of animals per group.
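A minimal sketch of Kerber's calculation in Python; the dose-death data below are hypothetical, since the present study recorded no deaths up to 5000 mg/kg:

```python
def ld50_kerber(ld100, doses, deaths, m):
    """Kerber's method: LD50 = LD100 - sum(z*d)/m, where for each pair of
    successive doses z is half the sum of the deaths at those doses and
    d is the interval between them; m is the number of animals per group."""
    total = 0.0
    for i in range(len(doses) - 1):
        z = (deaths[i] + deaths[i + 1]) / 2.0
        d = doses[i + 1] - doses[i]
        total += z * d
    return ld100 - total / m

# Hypothetical data: deaths of 0, 1, 3, 5 (out of 5) at the four highest doses
print(ld50_kerber(5000, [2000, 3000, 4000, 5000], [0, 1, 3, 5], 5))  # 3700.0
```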
Anti-inflammatory activity
The extract was evaluated for its anti-inflammatory activity in rats using the formaldehyde-induced paw edema method [18]. Acute inflammation was produced by sub-plantar injection of 0.2 ml formaldehyde (1% w/v) into the hind paw 1 h after oral administration of the ethanolic extract of P. angustifolia (500 mg/kg b.wt.) or diclofenac sodium (50 mg/kg b.wt.) as a standard anti-inflammatory agent. The paw volume was measured in mm by a plethysmometer (Ugo Basile, Italy) at 1, 2, 3, and 4 h after the formaldehyde injection. Inhibition of inflammation was calculated using the following formula: % inhibition = 100 (1 − Vt/Vc), where Vc represents the edema volume in the control group and Vt represents the edema volume in the test group.
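As an illustration of the formula, with individual volumes that are hypothetical but chosen to reproduce the maximal inhibition reported in the Results: if $V_c = 6.6$ and $V_t = 3.8$, then $\%\ \text{inhibition} = 100\left(1 - \tfrac{3.8}{6.6}\right) \approx 42.42\%$.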
Anti-ulcerogenic activity
All rats were fasted for 48 h but were given water ad libitum until the start of the experiment. To prevent excessive dehydration during the fasting period, rats were supplied with an 8% (w/v) sucrose (BDH) solution in 0.2% (w/v) NaCl (BDH), which was removed 1 h before the experiments [19]. The animals were randomly separated into three groups of six rats. One group was pretreated with the ethanolic extract of P. angustifolia orally at 500 mg/kg b.wt., another group received ranitidine (100 mg/kg orally), and the control group received equivalent volumes of saline instead of plant extract. One hour later, all groups were treated with ethanol (50%) at a dose of 10 ml/kg. One hour after ethanol administration, all rats were euthanized by an overdose of chloroform and the abdomen was opened. The stomach was removed, opened along the greater curvature, and gently rinsed under running water. The tissues were fixed with 10% formaldehyde in saline. Macroscopic examination was carried out under a hand lens and the presence of ulcer lesions was scored [20]. Lesions in the glandular part of the stomach were measured under an illuminated magnifying microscope (10×). Long lesions were counted and measured along their greatest length. Petechial lesions were counted with the aid of a 1-mm square grid [21]. Every five petechial lesions were considered to represent a 1-mm ulcer. The ulcer index for each group was calculated as the sum of the lengths of the long ulcers and petechial lesions divided by the number of lesions.
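The scoring rule above can be made concrete in a short function; this is one reading of the rule, with the lesion data hypothetical:

```python
def ulcer_index(long_lesions_mm, petechiae):
    """Ulcer index: every 5 petechial lesions count as one 1-mm ulcer; the
    index is the total lesion length divided by the number of lesions."""
    petechial_ulcers = petechiae // 5          # groups of 5 petechiae
    lengths = list(long_lesions_mm) + [1.0] * petechial_ulcers
    return sum(lengths) / len(lengths) if lengths else 0.0

# Hypothetical stomach: three long lesions (mm) and 12 petechiae
print(ulcer_index([6.0, 4.5, 3.0], 12))  # (13.5 + 2.0) / 5 = 3.1
```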
Hypoglycemic effect of P. angustifolia L. extract
Induction of diabetes
Rats were rendered diabetic by subcutaneous injection of alloxan monohydrate (Oxford) at a dose of 150 mg/kg/day for 3 days (early ketosis) and normal feeding was maintained [22]. Five days later, blood samples were drawn and the blood glucose level was measured to establish the occurrence of diabetes. The threshold for diabetes in the present study was a glucose level of ≥225 mg/dl.
Effect of P. angustifolia L. extract on hyperglycemic rats
The hypoglycemic effect of P. angustifolia L. extract was studied in alloxan-induced diabetic rats. The animals were fasted for 8 h but allowed free access to water. The diabetic animals were randomly divided into three groups of 10 rats and received oral P. angustifolia L. extract (500 mg/kg b.wt.), glibenclamide (0.2 mg/kg) or 20% v/v Tween 80 (5 ml/kg b.wt.).
Effect of P. angustifolia L. extract on normoglycemic rats
Non-diabetic rats were fasted overnight and then randomly divided into three groups of 10 rats. As with the diabetic rats, the non-diabetic rats received oral P. angustifolia L. extract (500 mg/kg b.wt.), glibenclamide (0.2 mg/kg b.wt.) or 20% v/v Tween 80 (5 ml/kg b.wt.). One milliliter of blood was collected from each rat in all groups before treatment and two hours after treatment. Blood glucose was estimated by the glucose oxidase method using the Randox kit (Randox Laboratories Ltd., Ardmore, UK) according to the manufacturer's instructions.
The results are expressed as the mean ± standard error (SE). The differences between the experimental groups were analyzed by one-way analysis of variance (ANOVA), followed by Bonferroni's test, using SPSS 16.0 (SPSS Inc., Chicago, IL, USA). Differences were considered statistically significant at p < 0.05 (*) and highly significant at p < 0.01 (**).
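The same kind of analysis can be reproduced outside SPSS; a minimal sketch with SciPy, using hypothetical glucose readings and a simple Bonferroni correction (adjusted p = raw p × number of comparisons, capped at 1):

```python
from itertools import combinations
from scipy import stats

# Hypothetical blood glucose values (mg/dl) per group
groups = {
    "control":       [210, 225, 198, 240, 215],
    "extract":       [120, 131, 118, 140, 125],
    "glibenclamide": [110, 122, 115, 128, 119],
}

f, p = stats.f_oneway(*groups.values())   # one-way ANOVA across the groups
print(f"ANOVA: F = {f:.2f}, p = {p:.3g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:                         # pairwise t-tests, Bonferroni-adjusted
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(a, "vs", b, "adjusted p =", min(p_raw * len(pairs), 1.0))
```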
There were no changes in the general behavior of the animals at any dose, and there were no deaths after 72 h at the highest administered dose (5000 mg/kg) of the extract. The safety of the extract was indicated by its high LD50 value (> 5 g/kg).
Total flavonoids and phenolic contents
The total flavonoids and phenolic contents of P. angustifolia L were 3.15 ± 0.7% as quercetin equivalent and 2.69 ± 0.6% as gallic acid equivalent, respectively.
Identification and quantification of phenolic and flavonoid compounds
Quantitative and qualitative analysis of the phenolic and flavonoid compounds in the ethanolic extract of P. angustifolia L. was performed using HPLC, where each compound was separated and identified using an authentic standard. The compounds were coumarin, resorcinol, isorhamnetin, quercetin, and naphthalene, at different concentrations (Table 1 and Fig. 2). Resorcinol reached the maximum value of 56.54% in the ethanolic extract of P. angustifolia L., followed by isorhamnetin at 40.53%, while quercetin had the minimum value of 0.002%.
Table 1 Phenolic and flavonoid compounds identified in the ethanolic extract of P. angustifolia L. using HPLC
HPLC chromatogram of the phenolic and flavonoid compounds of the P. angustifolia L
The formaldehyde-induced paw edema model showed that sub-plantar injection of formaldehyde in rats caused a time-dependent increase in paw thickness, and the maximal increase was observed 4 h after formaldehyde administration (Table 2). However, rats pretreated with the ethanolic extract of P. angustifolia showed significantly less (P < 0.05, P < 0.01) formaldehyde-induced inflammation at each time point than the animals that received formaldehyde only, as did those that received the reference anti-inflammatory drug diclofenac sodium (P < 0.01). Oral administration of P. angustifolia L. extract at 500 mg/kg resulted in a maximal inhibition of paw inflammation of 42.42%, close to the 45.45% achieved by diclofenac sodium at 50 mg/kg, 4 h after formaldehyde administration (Fig. 3).
Table 2 Formaldehyde-induced paw edema measured as paw thickness in rats treated with P. angustifolia L. extract and in control rats (n = 5)
Anti-inflammatory activity of Periploca angustifolia L. extract in formaldehyde-induced rat paw edema (n = 5)
Anti-ulcer activity
Ethanol caused extensive gastric damage in the mucosa of the control animals. The lesions were characterized by multiple long hemorrhagic red bands of different sizes along the axis of the glandular stomach, with petechial patches. By contrast, oral treatment with the ethanolic extract of P. angustifolia L. resulted in significantly fewer (p < 0.01) gastric lesions and a lower ulcer index than those observed in the control animals. The crude ethanolic extract of P. angustifolia L. had a protective index of 44.93%, whereas ranitidine as a reference standard (100 mg/kg) exhibited a protective index of 46.99%, indicating a potent anti-ulcerogenic effect of the P. angustifolia L. extract (Table 3 and Fig. 4).
Table 3 The effect of P. angustifolia L. extract on alcohol-induced ulcers in rats (n = 5)
The anti-ulcerogenic effect of Periploca angustifolia L. on alcohol-induced ulcers in rats (n = 5)
Hypoglycemic activity
Subcutaneous injection of rats with alloxan resulted in a significant increase in serum glucose levels. Administration of the crude ethanolic extract of P. angustifolia significantly reduced blood glucose levels at 2 h compared with untreated diabetic rats. Specifically, the ethanolic extract significantly reduced the postprandial blood glucose level of diabetic rats from 211.16 to 124.33 mg/dl (Table 4 and Fig. 5). Similarly, the extract of P. angustifolia at the same dose significantly reduced the blood glucose level in normoglycemic rats (Table 5 and Fig. 6).
Table 4 Hypoglycemic activity of P. angustifolia L. extract in alloxan-diabetic rats (n = 10)
Hypoglycemic activity of Periploca angustifolia L extract in alloxan-diabetic rats (n = 10)
Table 5 Hypoglycemic activity of P. angustifolia L. extract in non-diabetic rats (n = 10)
Hypoglycemic activity of Periploca angustifolia L extract in non-diabetic rats (n = 10)
P. angustifolia extract had a high safety margin in mice, as the LD50 was > 5 g/kg. Similarly, Sunil et al. [23] found that the alcoholic extract of Holostemma ada-kodien, another plant in the Asclepiadaceae family, was non-toxic at a dose of 5 g/kg b.wt.
Phenolic compounds increase a plant's biological value because they exhibit a range of pharmacological properties, such as anti-diabetic, anti-allergenic, anti-atherogenic, anti-inflammatory, antioxidant, anti-thrombotic, and vasodilator effects [24, 25]. Oxidative stress activates inflammatory pathways in stem cells and progenitor cells, leading to exhaustion of these cells due to increased levels of reactive oxygen species (ROS) [26]. Cellular exhaustion in turn leads to the development of several diseases, such as gastrointestinal ulcers, hyperglycemia, and hepatic dysfunction [27]. Thus, natural antioxidants provide cellular protection and have favorable effects in diabetes mellitus [28] and in the majority of inflammatory and cardiovascular diseases [29]. Examples of naturally occurring antioxidants are the flavonoids, phenolic acids, coumarin, isorhamnetin and quercetin that were separated and identified from the P. angustifolia extract, which are promising anti-inflammatory agents. This activity may be due to their inhibitory action on neutrophil infiltration, cyclooxygenase-2 activity and inflammatory cytokine release [30,31,32]. The anti-ulcerogenic activity of P. angustifolia L. may also be due to the presence of quercetin, as it prevents gastric mucosal damage by increasing mucus production, with a comparable regression of gastric lesions [33]. Hyperglycemia is a metabolic disorder that occurs due to the excess production of ROS, which destroys pancreatic β-cells, and is associated with vascular complications including neuropathy, retinopathy, and nephropathy [34]. Herbal medicines have long been used for the treatment of diabetes mellitus and have fewer toxic side-effects than other hypoglycemic drugs. In the present study, administration of the ethanolic extract of P. angustifolia significantly reduced blood glucose levels in normoglycemic and diabetic rats. This promising effect may be attributed to the inhibition, by the phenols and flavonoids, of aldose reductase (which converts glucose to sorbitol) and of α-amylase and α-glucosidase (key enzymes linked to type-2 diabetes) [35, 36].
Because of its basic chemical structure, quercetin has antioxidant activity, and it is now used as a nutritional supplement for a variety of conditions such as diabetes/obesity and circulatory dysfunction, including inflammation as well as mood disorders [37].
The crude ethanolic extract of P. angustifolia exhibited promising anti-inflammatory, anti-ulcerogenic, and hypoglycemic activities, which are in accordance with its use in folk medicine. As the extract showed a high safety profile, this study may serve as a guideline for the standardization and validation of natural drugs containing selected medicinal plant ingredients. Moreover, further investigation of the observed activities is required to determine the exact mechanisms of action.
ABTS:
2, 2′-azino-bis (3-ethylbenzothiazoline-6-sulphonic acid
DPPH:
2,2-diphenyl-1-picrylhydrazyl radical
GAE:
Gallic acid equivalent
LD50:
Median lethal dose
P. angustifolia:
Periploca angustifolia
ROS:
Reactive oxygen species
Rabei S, Khalik KA. Conventional keys for Convolvulaceae in the flora of Egypt. Flora Mediterr. 2012;22:45–62.
Hammiche V, Maiza K. Traditional medicine in Central Sahara: pharmacopoeia of Tassili N'ajjer. J Ethnopharmacol. 2006;105:358–67.
Bouhouche N. Conservation and multiplication of an endangered medicinal plant – Caralluma arabica – using tissue culture. Planta Med [Internet]. 2011;77:PB49. Available from: https://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0031-1282303
Laupattarakasem P, Wangsrimongkol T, Surarit R, Hahnvajanawong C. In vitro and in vivo anti-inflammatory potential of Cryptolepis buchanani. J Ethnopharmacol. 2006;108:349–54.
Pandya D, Anand I. A complete review on Oxystelma esculentum R. Br. Pharmacogn J. 2011;3:87–90.
Athmouni K, Belhaj D, Mkadmini Hammi K, El Feki A, Ayadi H. Phenolic compounds analysis, antioxidant, and hepatoprotective effects of Periploca angustifolia extract on cadmium-induced oxidative damage in HepG2 cell line and rats. Arch Physiol Biochem. 2017:1–14.
Ibrahim A, E O, A. J N, IA U. Combined effect on antioxidant properties of Gymnema sylvestre and Combretum micranthum leaf extracts and the relationship to hypoglycemia. Eur Sci J. 2017;13:266–81.
Al-Rejaie SS, Abuohashish HM, Ahmed MM, Aleisa AM, Alkhamees O. Possible biochemical effects following inhibition of ethanol-induced gastric mucosa damage by Gymnema sylvestre in male Wistar albino rats. Pharm Biol. 2012;50:1542–50.
Bouaziz M, Dhouib A, Loukil S. Polyphenols content, antioxidant and antimicrobial activities of extracts of some wild plants collected from the south of Tunisia. African J. Biotechnol. [Internet]. 2009;8:7017–7027. Available from: http://www.ajol.info/index.php/ajb/article/view/68789
Anoop A, Jegadeesan M. Biochemical studies on the anti-ulcerogenic potential of Hemidesmus indicus r.Br. Var. indicus. J Ethnopharmacol. 2003;84:149–56.
Galhena P, Thabrew I, Tammitiyagodage M, Hanna RV. Anti-hepatocarcinogenic Ayurvedic herbal remedy reduces the extent of diethylnitrosamine-induced oxidative stress in rats. Pharmacogn Mag. 2009;5:19–27.
Fairouz D, Sami Z, Mekki B, Mohamed N. Chemical composition of root bark of Periploca angustifolia growing wild in Saharian Tunisia. J Essent Oil-Bearing Plants. 2013;16:338–45.
Djeridane A, Yousfi M, Nadjemi B, Boutassouna D, Stocker P, Vidal N. Antioxidant activity of some algerian medicinal plants extracts containing phenolic compounds. Food Chem. 2006;97:654–60.
Li C, Feng J, Huang WY, An XT. Composition of polyphenols and antioxidant activity of rabbiteye blueberry (Vaccinium ashei) in Nanjing. J Agric Food Chem. 2013;61:523–31.
Ma Y, Kosinska-Cagnazzo A, Kerr WL, Amarowicz R, Swanson RB, Pegg RB. Separation and characterization of phenolic compounds from dry-blanched peanut skins by liquid chromatography-electrospray ionization mass spectrometry. J Chromatogr A. 2014;1356:64–81.
Lorke D. A new approach to practical acute toxicity testing. Arch Toxicol. 1983;54:275–87.
Dhanarasu S, Selvam M, Al-Shammari NKA. Evaluating the pharmacological dose (Oral ld50) and antibacterial activity of leaf extracts of Mentha piperita Linn. Grown in Kingdom of Saudi Arabia: a pilot study for nephrotoxicity. Int J Pharmacol. 2016;12:195–200.
Nalini GK, Patil VM, Ramabhimaiah S, Patil P, Vijayanath V. Anti-inflammatory activity of wheatgrass juice in albino rats. Biomed Pharmacol J. 2011;4:301–4.
Hironaka A, Susumu O, Yoshihiko I, Masahiro O, Kazuei I, Seiyu H. Polyamine inhibition of gastric ulceration and secretion in rats. Biochem Pharmacol. 1983;32:1733–6.
Nordin N, Salama SM, Golbabapour S, Hajrezaie M, Hassandarvish P, Kamalidehghan B, et al. Anti-ulcerogenic effect of methanolic extracts from Enicosanthellum pulchrum (king) HEUSDEN against ethanol-induced acute gastric lesion in animal models. PLoS One. 2014;9:e111925.
Chen S-H, Liang Y-C, Chao JCJ, Tsai L-H, Chang C-C, Wang C-C, et al. Protective effects of Ginkgo biloba extract on the ethanol-induced gastric ulcer in rats. World J Gastroenterol [Internet]. 2005;11:3746–3750. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15968732
Tang LQ, Wei W, Chen LM, Liu S. Effects of berberine on diabetes induced by alloxan and a high-fat/high-cholesterol diet in rats. J Ethnopharmacol. 2006;108:109–15.
Sunil J, Krishna J, Bramhachari P. Hepatoprotective activity of Holostemma ada Kodien shcult, extract against paracetamol induced hepatic damage in rats. European J Med Plants [Internet]. 2015;6:45–54. Available from: http://www.sciencedomain.org/abstract.php?iid=793&id=13&aid=7471.
Abushouk AI, Ismail A, AMA S, Afifi AM, Abdel-Daim MM. Cardioprotective mechanisms of phytochemicals against doxorubicin-induced cardiotoxicity. Biomed Pharmacother. 2017;90:935–46.
Ganguly S, Kumar TG, Mantha S, Panda K. Simultaneous determination of black tea-derived catechins and theaflavins in tissues of tea consuming animals using ultra-performance liquid-chromatography tandem mass spectrometry. PLoS One. 2016;11:e0163498.
Oh J, Lee YD, Wagers AJ. Stem cell aging: mechanisms, regulators and therapeutic opportunities. Nat Med. 2014;20:870–80.
Morris G, Maes M. Oxidative and Nitrosative stress and immune-inflammatory pathways in patients with Myalgic encephalomyelitis (ME)/chronic fatigue syndrome (CFS). Curr Neuropharmacol [Internet]. 2014;12:168–85 Available from: http://www.eurekaselect.com/openurl/content.php?genre=article&issn=1570-159X&volume=12&issue=2&spage=168.
Youn J-Y, Siu KL, Lob H, Itani H, Harrison DG, Cai H. Role of vascular oxidative stress in obesity and metabolic syndrome. Diabetes [Internet]. 2014;63:2344–2355. Available from: http://www.ncbi.nlm.nih.gov/pubmed/24550188
Bu J, Dou Y, Tian X, Wang Z, Chen G. The role of Omega-3 polyunsaturated fatty acids in stroke. Oxidative Med Cell Logevity. 2016;2016:1–8.
Nguyen PH, Zhao BT, Kim O, Lee JH, Choi JS, Min BS, et al. Anti-inflammatory terpenylated coumarins from the leaves of Zanthoxylum schinifolium with α-glucosidase inhibitory activity. J Nat Med. 2016;70:276–81.
Antunes-Ricardo M, Gutiérrez-Uribe JA, López-Pacheco F, Alvarez MM, Serna-Saldívar SO. In vivo anti-inflammatory effects of isorhamnetin glycosides isolated from Opuntia ficus-indica (L.) mill cladodes. Ind Crop Prod. 2015;76:803–8.
Wang L, Wang B, Li H, Lu H, Qiu F, Xiong L, et al. Quercetin, a flavonoid with anti-inflammatory activity, suppresses the development of abdominal aortic aneurysms in mice. Eur J Pharmacol. 2012;690:133–41.
De La Lastra CA, Martín MJ, Motilva V. Antiulcer and gastroprotective effects of quercetin: a gross and histologic study. Pharmacology. 1994;48:56–62.
Su S-L, Liao P-Y, Tu S-T, Lin K-C, Tsai D-H, Sia H-K, et al. Correlation analysis of HbA1c and preprandial plasma glucose in diabetes complications. Diabetes. 2009;58.
Lee YS, Lee S, Lee HS, Kim BK, Ohuchi K, Shin KH. Inhibitory effects of isorhamnetin-3-O-beta-D-glucoside from Salicornia herbacea on rat lens aldose reductase and sorbitol accumulation in streptozotocin-induced diabetic rat tissues. Biol Pharm Bull. 2005;28:916–8.
Adedayo BC, Ademiluyi AO, Oboh G, Akindahunsi AA. Interaction of aqueous extracts of two varieties of yam tubers (Dioscorea spp) on some key enzymes linked to type 2 diabetes in vitro. Int J Food Sci Technol. 2012;47:703–9.
D'Andrea G. Quercetin: a flavonol with multifaceted therapeutic applications? Fitoterapia. 2015;106:256–71.
The corresponding author.
Prof. Khaled Abo-EL-Sooud.
Professor of Pharmacology, Faculty of Veterinary Medicine, Cairo University, from 2005 to the present. Ph.D. under a Canadian-Egyptian Scholarship, Cairo University, 1995, at the Centre for Food and Animal Research, Agriculture Canada, Ottawa, Canada. Taught undergraduate and graduate courses at the University of Science and Technology, Irbid, Jordan (2000–2002) and at Qassim University, Buraidah, Saudi Arabia (2005–2007). Has supervised and examined several Master's and Ph.D. theses in Egypt and other Arab countries. Expertise in radioisotopes and different types of chromatography (GC, HPLC, TLC, etc.) for the detection of drug residues in tissues and food. Has published about 70 papers in international journals (list of publications enclosed). Member of the promotion committee of the Supreme Council (committee 100 B) for Veterinary Pharmacology, Toxicology and Forensic Medicine from 2013 to 2019. His research has now shifted to ethnopharmacology. Has attended many international conferences and obtained several awards and prizes. Member of the veterinary drug administration committee, Ministry of Health, Egypt.
ASSOCIATE EDITOR.
International Journal of Veterinary Science and Medicine.
GUEST EDITORS IN.
Oxidative Medicine and Cellular Longevity.
https://mts.hindawi.com/guest.editor/journals/omcl/adct/
Evidence-based Complementary and Alternative Medicine.
https://mts.hindawi.com/guest.editor
http://scholar.cu.edu.eg/kasooud
http://scholar.google.com/citations?user=Ww4Vqd8AAAAJ
https://www.scopus.com/authid/detail.uri?authorId=6603356090
The Medicinal and Aromatic Plants Department, Desert Research Center, Cairo, Egypt, supported the Ph.D. study of Dr. Hanan M. ELTantawy.
Pharmacology Department, Faculty of Veterinary Medicine, Cairo University, Giza, 12211, Egypt
Khaled Abo-EL-Sooud
Medicinal and Aromatic Plants Department, Desert Research Center, Cairo, Egypt
Fatma A. Ahmed, Hanona S. Yaecob & Hanan M. ELTantawy
Chemistry of Tannins Department, National Research Center, Dokki, Giza, Egypt
Sayed A. El-Toumy
KA-E Sooud performed the pharmacological evaluation in the animal studies and the data collection. The other authors performed the phytochemical analysis. All authors read and approved the final manuscript.
Correspondence to Khaled Abo-EL-Sooud.
The Institutional Animal Care and Use Committee (IACUC), Cairo University approved the animal study.
Abo-EL-Sooud, K., Ahmed, F.A., El-Toumy, S.A. et al. Phytochemical, anti-inflammatory, anti-ulcerogenic and hypoglycemic activities of Periploca angustifolia L extracts in rats. Clin Phytosci 4, 27 (2018) doi:10.1186/s40816-018-0087-6
Anti-ulcerogenic
May 2014, 34(5): 1961-1993. doi: 10.3934/dcds.2014.34.1961
Multi-existence of multi-solitons for the supercritical nonlinear Schrödinger equation in one dimension
Vianney Combet 1,
Université Lille 1, U.F.R. de Mathématiques, 59 655 Villeneuve d'Ascq Cédex, France
Received September 2010; Revised July 2013; Published October 2013
For the $L^2$ supercritical generalized Korteweg-de Vries equation, we proved in [2] the existence and uniqueness of an $N$-parameter family of $N$-solitons. Recall that, for any $N$ given solitons, we call an $N$-soliton a solution of the equation which behaves as the sum of these $N$ solitons asymptotically as $t \to +\infty$. In the present paper, we also construct an $N$-parameter family of $N$-solitons for the supercritical nonlinear Schrödinger equation in dimension $1$. Nevertheless, we do not obtain any classification result; but recall that, even in the subcritical and critical cases, no general uniqueness result has been proved yet.
Keywords: supercritical, asymptotic behavior, multi-solitons, instability, NLS.
Mathematics Subject Classification: Primary: 35Q55, 35Q51; Secondary: 35B40, 37K4.
Citation: Vianney Combet. Multi-existence of multi-solitons for the supercritical nonlinear Schrödinger equation in one dimension. Discrete & Continuous Dynamical Systems, 2014, 34 (5) : 1961-1993. doi: 10.3934/dcds.2014.34.1961
T. Cazenave and F. Weissler, The Cauchy problem for the critical nonlinear Schrödinger equation in $H^s$, Nonlinear Analysis, 14 (1990), 807-836. doi: 10.1016/0362-546X(90)90023-A. Google Scholar
V. Combet, Multi-soliton solutions for the supercritical gKdV equations, Communications in Partial Differential Equations, 36 (2011), 380-419. doi: 10.1080/03605302.2010.503770. Google Scholar
R. Côte, Y. Martel and F. Merle, Construction of multi-soliton solutions for the $L^2$-supercritical gKdV and NLS equations, Revista Matematica Iberoamericana, 27 (2011), 273-302. doi: 10.4171/RMI/636. Google Scholar
T. Duyckaerts and F. Merle, Dynamic of threshold solutions for energy-critical NLS, Geometric and Functional Analysis, 18 (2009), 1787-1840. doi: 10.1007/s00039-009-0707-x. Google Scholar
T. Duyckaerts and S. Roudenko, Threshold solutions for the focusing 3d cubic Schrödinger equation, Revista Matematica Iberoamericana, 26 (2010), 1-56. doi: 10.4171/RMI/592. Google Scholar
J. Ginibre and G. Velo, On a class of nonlinear Schrödinger equations. I. The Cauchy problem, general case, Journal of Functional Analysis, 32 (1979), 1-32. doi: 10.1016/0022-1236(79)90076-4. Google Scholar
M. Grillakis, Analysis of the linearization around a critical point of an infinite dimensional hamiltonian system, Communications on Pure and Applied Mathematics, 43 (1990), 299-333. doi: 10.1002/cpa.3160430302. Google Scholar
M. Grillakis, J. Shatah and W. A. Strauss, Stability theory of solitary waves in the presence of symmetry. I, Journal of Functional Analysis, 74 (1987), 160-197. doi: 10.1016/0022-1236(87)90044-9. Google Scholar
Y. Martel, Asymptotic N-soliton-like solutions of the subcritical and critical generalized Korteweg-de Vries equations, American Journal of Mathematics, 127 (2005), 1103-1140. doi: 10.1353/ajm.2005.0033. Google Scholar
Y. Martel and F. Merle, Multi solitary waves for nonlinear Schrödinger equations, Annales de l'Institut Henri Poincaré/Analyse non linéaire, 23 (2006), 849-864. doi: 10.1016/j.anihpc.2006.01.001. Google Scholar
Y. Martel, F. Merle and T.-P. Tsai, Stability in $H^1$ of the sum of $K$ solitary waves for some nonlinear Schrödinger equations, Duke Mathematical Journal, 133 (2006), 405-466. doi: 10.1215/S0012-7094-06-13331-8. Google Scholar
F. Merle, Construction of solutions with exactly $k$ blow-up points for the Schrödinger equation with critical nonlinearity, Communications in Mathematical Physics, 129 (1990), 223-240. doi: 10.1007/BF02096981. Google Scholar
F. Merle and H. Zaag, Stability of the blow-up profile for equations of the type $u_t = \Delta u + |u|^{p-1}u$, Duke Mathematical Journal, 86 (1997), 143-195. doi: 10.1215/S0012-7094-97-08605-1. Google Scholar
G. Perelman, Some results on the scattering of weakly interacting solitons for nonlinear Schrödinger equations, Mathematical Topics, 14 (1997), 78-137. Google Scholar
G. Perelman, Asymptotic stability of multi-soliton solutions for nonlinear Schrödinger equations, Communications in Partial Differential Equations, 29 (2004), 1051-1095. doi: 10.1081/PDE-200033754. Google Scholar
I. Rodnianski, W. Schlag and A. Soffer, Asymptotic Stability of N-soliton States of NLS, preprint. Google Scholar
M. I. Weinstein, Modulational stability of ground states of nonlinear Schrödinger equations, SIAM Journal on Mathematical Analysis, 16 (1985), 472-491. doi: 10.1137/0516034. Google Scholar
The probabilities that a man and his wife live for 80 years are $\tfrac{2}{3}$ and $\tfrac{3}{5}$ respectively. Find the probability that at least one of them will live up to 80 years.
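A worked solution, assuming the two lifetimes are independent: $P(\text{at least one}) = 1 - P(\text{neither}) = 1 - \left(1-\tfrac{2}{3}\right)\left(1-\tfrac{3}{5}\right) = 1 - \tfrac{1}{3}\cdot\tfrac{2}{5} = \tfrac{13}{15}$.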
The probability that a student passes a physics test is $\frac{2}{3}$. If he takes three physics tests, what is the probability that he passes two of the tests?
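A worked solution, assuming the tests are independent and taking "passes two" as exactly two: $P = \binom{3}{2}\left(\tfrac{2}{3}\right)^2\left(\tfrac{1}{3}\right) = 3\cdot\tfrac{4}{9}\cdot\tfrac{1}{3} = \tfrac{4}{9}$.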
In how many ways can the letters of the word TOTALITY be arranged?
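A worked solution: TOTALITY has 8 letters with T repeated 3 times, so the number of arrangements is $\tfrac{8!}{3!} = \tfrac{40320}{6} = 6720$.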
Evaluate $^{n+1}C_{n-2}$ if $n = 15$.
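A worked solution: with $n = 15$, $^{16}C_{13} = \binom{16}{13} = \binom{16}{3} = \tfrac{16 \times 15 \times 14}{3!} = 560$.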
Find the standard deviation of 2, 3, 8, 10 and 12.
Find the range of 4, 9, 6, 3, 2, 8, 10 and 11
Find the median of 2, 3, 7, 3, 4, 5, 8, 9, 9, 4, 5, 3, 4, 2, 4 and 5.
The mean of seven numbers is 96. If an eighth number is added, the mean becomes 112. Find the eighth number.
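The four numerical questions above can be checked with Python's standard library; a minimal sketch, assuming the population standard deviation is intended (as is usual in these exam items):

```python
import statistics

# Standard deviation of 2, 3, 8, 10 and 12 (population SD): mean = 7,
# squared deviations sum to 76, so SD = sqrt(76/5) ~ 3.90
print(statistics.pstdev([2, 3, 8, 10, 12]))

# Range of 4, 9, 6, 3, 2, 8, 10 and 11: max - min = 11 - 2 = 9
vals = [4, 9, 6, 3, 2, 8, 10, 11]
print(max(vals) - min(vals))

# Median of the 16 listed values: the 8th and 9th sorted values are both 4
print(statistics.median([2, 3, 7, 3, 4, 5, 8, 9, 9, 4, 5, 3, 4, 2, 4, 5]))

# Eighth number: 8*112 - 7*96 = 224
print(8 * 112 - 7 * 96)
```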
The bar chart above shows the distribution of marks in a class test. If the pass mark is 5, what percentage of the students failed the test?
The grades of 36 students in a class test are as shown in the pie chart above. How many students had excellent grades?
Magnetic anisotropy of textured CrO2 thin films investigated by X-ray magnetic circular dichroism
Goering, E., Justen, M., Geissler, J., Rüdiger, U., Rabe, M., Güntherodt, G., Schütz, G.
Applied Physics A - Materials Science & Processing, 74(6):747-753, 2002 (article)
Identification of nanocrystal nucleation and growth in Al85Ni5Y8CO2 metallic glass with quenched-in nuclei
Wang, J. Q., Zhang, H. W., Gu, X. J., Lu, K., Sommer, F., Mittemeijer, E. J.
{Applied Physics Letters}, 80(18):3319-3321, 2002 (article)
mms Wang, J. Q., Zhang, H. W., Gu, X. J., Lu, K., Sommer, F., Mittemeijer, E. J. Identification of nanocrystal nucleation and growth in Al85Ni5Y8CO2 metallic glass with quenched-in nuclei {Applied Physics Letters}, 80(18):3319-3321, 2002 (article)
Master equations for the concentrations of atomic defects in B2 compounds
Meyer, B., Fähnle, M.
{Physica Status Solidi B-Basic Research}, 229(3):1139-1143, 2002 (article)
mms Meyer, B., Fähnle, M. Master equations for the concentrations of atomic defects in B2 compounds {Physica Status Solidi B-Basic Research}, 229(3):1139-1143, 2002 (article)
On the electronic structure of the pure and oxygen covered Ru(0001) surface
Bester, G., Fähnle, M.
{Surface Science}, 497(1-3):305-310, 2002 (article)
mms Bester, G., Fähnle, M. On the electronic structure of the pure and oxygen covered Ru(0001) surface {Surface Science}, 497(1-3):305-310, 2002 (article)
Atomic defects and diffusion in intermetallic compounds: The impact of the ab initio electron theory
Fähnle, M.
{Diffusion and Defect Forum}, 203-205, pages: 37-46, 2002 (article)
mms Fähnle, M. Atomic defects and diffusion in intermetallic compounds: The impact of the ab initio electron theory {Diffusion and Defect Forum}, 203-205, pages: 37-46, 2002 (article)
Switching behavior of single nanowires inside dense nickel nanowire arrays
Nielsch, K., Hertel, R., Wehrspohn, R. B., Barthel, J., Kirschner, J., Gösele, U., Fischer, S. F., Kronmüller, H.
mms Nielsch, K., Hertel, R., Wehrspohn, R. B., Barthel, J., Kirschner, J., Gösele, U., Fischer, S. F., Kronmüller, H. Switching behavior of single nanowires inside dense nickel nanowire arrays {IEEE Transactions on Magnetics}, 38(5):2571-2573, 2002 (article)
On the magnetoelastic contribution to the magnetic anisotropy of thin epitaxial Permalloy films: an ab initio study
Komelj, M., Fähnle, M.
{Journal of Magnetism and Magnetic Materials}, 238(2-3):L125-L128, 2002 (article)
mms Komelj, M., Fähnle, M. On the magnetoelastic contribution to the magnetic anisotropy of thin epitaxial Permalloy films: an ab initio study {Journal of Magnetism and Magnetic Materials}, 238(2-3):L125-L128, 2002 (article)
Nanostructured graphite-hydrogen systems prepared by mechanical milling method
Orimo, S. I., Matsushima, T., Fujii, H., Fukunaga, T., Majer, G., Zuttel, A., Schlapbach, L.
{Molecular Crystals and Liquid Crystals}, 386, pages: 173-178, 2002 (article)
mms Orimo, S. I., Matsushima, T., Fujii, H., Fukunaga, T., Majer, G., Zuttel, A., Schlapbach, L. Nanostructured graphite-hydrogen systems prepared by mechanical milling method {Molecular Crystals and Liquid Crystals}, 386, pages: 173-178, 2002 (article)
Micromagnetic analysis of pinning-hardened nanostructured, nanocrystalline Sm2Co17 based alloys
Kronmüller, H., Goll, D.
{Scripta Materialia}, 47, pages: 545-550, 2002 (article)
mms Kronmüller, H., Goll, D. Micromagnetic analysis of pinning-hardened nanostructured, nanocrystalline Sm2Co17 based alloys {Scripta Materialia}, 47, pages: 545-550, 2002 (article)
A micromechanical flying insect thorax
Fearing, R., Avadhanula, S., Campolo, D., Sitti, M., Yan, J., Wood, R.
Neurotechnology for Biomimetic Robots, pages: 469-480, The MIT Press Cambridge, MA, 2002 (article)
pi Fearing, R., Avadhanula, S., Campolo, D., Sitti, M., Yan, J., Wood, R. A micromechanical flying insect thorax Neurotechnology for Biomimetic Robots, pages: 469-480, The MIT Press Cambridge, MA, 2002 (article)
Second-order magnetoelastic effects: From the Dirac equation to the magnetic properties of ultrathin epitaxial films for magnetic thin-film applications
Fähnle, M., Komelj, M.
{Zeitschrift f\"ur Metallkunde}, 93(10):970-973, 2002 (article)
mms Fähnle, M., Komelj, M. Second-order magnetoelastic effects: From the Dirac equation to the magnetic properties of ultrathin epitaxial films for magnetic thin-film applications {Zeitschrift f\"ur Metallkunde}, 93(10):970-973, 2002 (article)
Study of magnetic domains by magnetic soft x-ray transmission microscopy
Fischer, P., Denbeaux, G., Ono, T., Okuno, T., Eimüller, T., Goll, D., Schütz, G.
{Journal of Physics D-Applied Physics}, 35(19):2391-2397, 2002 (article)
mms Fischer, P., Denbeaux, G., Ono, T., Okuno, T., Eimüller, T., Goll, D., Schütz, G. Study of magnetic domains by magnetic soft x-ray transmission microscopy {Journal of Physics D-Applied Physics}, 35(19):2391-2397, 2002 (article)
Ion beam-induced sintering of near-surface layers
Föhl, A., Carstanjen, H. D.
{Surface \& Coatings Technology}, 158, pages: 69-74, 2002 (article)
mms Föhl, A., Carstanjen, H. D. Ion beam-induced sintering of near-surface layers {Surface \& Coatings Technology}, 158, pages: 69-74, 2002 (article)
High density hexagonal nickel nanowire array
Nielsch, K., Wehrspohn, R. B., Barthel, J., Kirschner, J., Fischer, S. F., Kronmüller, H., Schweinbock, T., Weiss, D., Gösele, U.
{Journal of Magnetism and Magnetic Materials}, 249(1-2):234-240, 2002 (article)
mms Nielsch, K., Wehrspohn, R. B., Barthel, J., Kirschner, J., Fischer, S. F., Kronmüller, H., Schweinbock, T., Weiss, D., Gösele, U. High density hexagonal nickel nanowire array {Journal of Magnetism and Magnetic Materials}, 249(1-2):234-240, 2002 (article)
Strong anisotropy of projected 3d moments in epitaxial CrO2 films
Goering, E., Bayer, A., Gold, S., Schütz, G., Rabe, M., Rüdiger, U., Güntherodt, G.
mms Goering, E., Bayer, A., Gold, S., Schütz, G., Rabe, M., Rüdiger, U., Güntherodt, G. Strong anisotropy of projected 3d moments in epitaxial CrO2 films {Physical Review Letters}, 88(20), 2002 (article)
Density-functional study of Fe3Al: LSDA versus GGA
Lechermann, F., Welsch, F., Elsässer, C., Ederer, C., Fähnle, M., Sanchez, J. M., Meyer, B.
mms Lechermann, F., Welsch, F., Elsässer, C., Ederer, C., Fähnle, M., Sanchez, J. M., Meyer, B. Density-functional study of Fe3Al: LSDA versus GGA {Physical Review B}, 65(13), 2002 (article)
Critical magnetic properties of disordered polycrystalline Cr75Fe25 and Cr70Fe30 alloys
Fischer, S. F., Kaul, S. N., Kronmüller, H.
{Physical Review B}, 65(6), 2002 (article)
mms Fischer, S. F., Kaul, S. N., Kronmüller, H. Critical magnetic properties of disordered polycrystalline Cr75Fe25 and Cr70Fe30 alloys {Physical Review B}, 65(6), 2002 (article)
Magnetic properties of ion-beam sputtered Cr3Fe-alloy films
mms Fischer, S. F., Kaul, S. N., Kronmüller, H. Magnetic properties of ion-beam sputtered Cr3Fe-alloy films {Journal of Magnetism and Magnetic Materials}, 240(1-3):374-376, 2002 (article)
Model-independent measurements of hydrogen diffusivity in the yttrium dihydrides
Majer, G., Gottwald, J., Peterson, D. T., Barnes, R. G.
{Journal of Alloys and Compounds}, 330-332, pages: 438-442, 2002 (article)
mms Majer, G., Gottwald, J., Peterson, D. T., Barnes, R. G. Model-independent measurements of hydrogen diffusivity in the yttrium dihydrides {Journal of Alloys and Compounds}, 330-332, pages: 438-442, 2002 (article)
Strong anisotropy of projected Cr3d moments of epitaxial grown CrO2-films
Goering, E., Bayer, A., Gold, S., Schütz, G.
{Bessy-Highlights}, pages: 26-27, 2002 (article)
mms Goering, E., Bayer, A., Gold, S., Schütz, G. Strong anisotropy of projected Cr3d moments of epitaxial grown CrO2-films {Bessy-Highlights}, pages: 26-27, 2002 (article)
Change from a bulk discontinuous phase transition in V2H to a contiuous transition in a defective near-surface skin layer
Trenkler, J., Abe, H., Wochner, P., Haeffner, D., Bail, J., Carstanjen, H. D., Moss, S.
{Modelling and Simulation in Materials Science and Engineering}, 8, pages: 269-275, 2002 (article)
mms Trenkler, J., Abe, H., Wochner, P., Haeffner, D., Bail, J., Carstanjen, H. D., Moss, S. Change from a bulk discontinuous phase transition in V2H to a contiuous transition in a defective near-surface skin layer {Modelling and Simulation in Materials Science and Engineering}, 8, pages: 269-275, 2002 (article)
Evidence for van der Waals adhesion in gecko setae
Autumn, K., Sitti, M., Liang, Y. A., Peattie, A. M., Hansen, W. R., Sponberg, S., Kenny, T. W., Fearing, R., Israelachvili, J. N., Full, R. J.
Proceedings of the National Academy of Sciences, 99(19):12252-12256, National Acad Sciences, 2002 (article)
pi Autumn, K., Sitti, M., Liang, Y. A., Peattie, A. M., Hansen, W. R., Sponberg, S., Kenny, T. W., Fearing, R., Israelachvili, J. N., Full, R. J. Evidence for van der Waals adhesion in gecko setae Proceedings of the National Academy of Sciences, 99(19):12252-12256, National Acad Sciences, 2002 (article)
Vacancies in thermal equilibrium and ferromagnetism near the Curie temperature
Seeger, A., Fähnle, M.
mms Seeger, A., Fähnle, M. Vacancies in thermal equilibrium and ferromagnetism near the Curie temperature {Zeitschrift f\"ur Metallkunde}, 93(10):1030-1042, 2002 (article)
Magnetic imaging with soft X-ray microscopy
Fischer, P., Denbeaux, G., Eimüller, T., Goll, D., Schütz, G.
mms Fischer, P., Denbeaux, G., Eimüller, T., Goll, D., Schütz, G. Magnetic imaging with soft X-ray microscopy {IEEE Transactions on Magnetics}, 38(5):2427-2431, 2002 (article)
Theory of induced magnetic moments and x-ray magnetic circular dichroism in Co-Pt multilayers
Ederer, C., Komelj, M., Fähnle, M., Schütz, G.
mms Ederer, C., Komelj, M., Fähnle, M., Schütz, G. Theory of induced magnetic moments and x-ray magnetic circular dichroism in Co-Pt multilayers {Physical Review B}, 66(9), 2002 (article)
From the electronic structure to the macroscopic magnetic behaviour of rare-earth intermetallics: a combination of ab initio electron theory with statistical mechanics and elasticity theory
Fähnle, M., Welsch, F.
{Physica B}, 321(1-4):198-203, 2002 (article)
mms Fähnle, M., Welsch, F. From the electronic structure to the macroscopic magnetic behaviour of rare-earth intermetallics: a combination of ab initio electron theory with statistical mechanics and elasticity theory {Physica B}, 321(1-4):198-203, 2002 (article)
Determination of the complete set of second-order magnetoelastic coupling constants on epitaxial films
mms Komelj, M., Fähnle, M. Determination of the complete set of second-order magnetoelastic coupling constants on epitaxial films {Physical Review B}, 65(21), 2002 (article)
Undulation instabilities in laterally structured magnetic multilayers
Eimüller, T., Scholz, M., Guttmann, P., Köhler, M., Bayreuther, G., Schmahl, G., Fischer, P., Schütz, G.
{Journal of Applied Physics}, 91(10):7334-7336, 2002 (article)
mms Eimüller, T., Scholz, M., Guttmann, P., Köhler, M., Bayreuther, G., Schmahl, G., Fischer, P., Schütz, G. Undulation instabilities in laterally structured magnetic multilayers {Journal of Applied Physics}, 91(10):7334-7336, 2002 (article)
Initial oxidation of AlPdMn quasicrystals - A study by high-resolution RBS and ERDA
Plachke, D., Khellaf, A., Carstanjen, H. D.
{Nuclear Instruments \& Methods in Physics Research B}, 190, pages: 646-651, 2002 (article)
mms Plachke, D., Khellaf, A., Carstanjen, H. D. Initial oxidation of AlPdMn quasicrystals - A study by high-resolution RBS and ERDA {Nuclear Instruments \& Methods in Physics Research B}, 190, pages: 646-651, 2002 (article)
The Verwey transition - a topical review
Walz, F.
{Journal of Physics-Condensed Matter}, 14(12):R285-R340, 2002 (article)
mms Walz, F. The Verwey transition - a topical review {Journal of Physics-Condensed Matter}, 14(12):R285-R340, 2002 (article)
Composition dependence of the Zener relaxation in high-purity FeCr single crystals
Hirscher, M., Ege, M.
{Materials \textquotesingleTransactions JIM}, 43(2):182-185, 2002 (article)
mms Hirscher, M., Ege, M. Composition dependence of the Zener relaxation in high-purity FeCr single crystals {Materials \textquotesingleTransactions JIM}, 43(2):182-185, 2002 (article)
Hydrogen storage in carbon nanostructures
Hirscher, M., Becher, M., Haluska, M., Quintel, A., Skakalova, V., Choi, Y. M., Dettlaff-Weglikowska, U., Roth, S., Stepanek, I., Bernier, P., Leonhardt, A., Fink, J.
{Journal of Alloys and Compounds}, 330, pages: 654-658, 2002 (article)
mms Hirscher, M., Becher, M., Haluska, M., Quintel, A., Skakalova, V., Choi, Y. M., Dettlaff-Weglikowska, U., Roth, S., Stepanek, I., Bernier, P., Leonhardt, A., Fink, J. Hydrogen storage in carbon nanostructures {Journal of Alloys and Compounds}, 330, pages: 654-658, 2002 (article)
Micromagnetic investigation of sub-100-nm magnetic domains in atomically stacked Fe(001)/Au(001) multilayers
Köhler, M., Zweck, J., Bayreuther, G., Fischer, P., Schütz, G., Denbeaux, G., Attwood, D.
{Journal of Magnetism and Magnetic Materials}, 240, pages: 79-82, 2002 (article)
mms Köhler, M., Zweck, J., Bayreuther, G., Fischer, P., Schütz, G., Denbeaux, G., Attwood, D. Micromagnetic investigation of sub-100-nm magnetic domains in atomically stacked Fe(001)/Au(001) multilayers {Journal of Magnetism and Magnetic Materials}, 240, pages: 79-82, 2002 (article)
Experimental Study of a Crystal Positron Source
Chehab, R., Cizeron, R., Sylvia, C., Baier, V., Beloborodov, K., Bukin, A., Burdin, S., Dimova, T., Drozdetsky, A., Druzhinin, V., Dubrovin, M., Golubev, V., Serednyakov, S., Shary, V., Strakhovenko, V., Artru, X., Chevallier, M., Dauvergne, D., Kirsch, R., Lautesse, P., Poizat, J. C., Remillieux, J., Jejcic, A., Keppler, P., Major, J., Gatignon, L., Bochek, G., Kulibaba, V., Maslov, N., Bogdanov, A., Potylitsin, A., Vnukov, I.
{Physics Letters B}, 525, pages: 41-48, 2002 (article)
mms Chehab, R., Cizeron, R., Sylvia, C., Baier, V., Beloborodov, K., Bukin, A., Burdin, S., Dimova, T., Drozdetsky, A., Druzhinin, V., Dubrovin, M., Golubev, V., Serednyakov, S., Shary, V., Strakhovenko, V., Artru, X., Chevallier, M., Dauvergne, D., Kirsch, R., Lautesse, P., Poizat, J. C., Remillieux, J., Jejcic, A., Keppler, P., Major, J., Gatignon, L., Bochek, G., Kulibaba, V., Maslov, N., Bogdanov, A., Potylitsin, A., Vnukov, I. Experimental Study of a Crystal Positron Source {Physics Letters B}, 525, pages: 41-48, 2002 (article)
Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten
Schölkopf, B., Müller, K., Smola, A.
Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)
We describe recent developments and results of statistical learning theory. In the framework of learning from examples, two factors control generalization ability: explaining the training data by a learning machine of a suitable complexity. We describe kernel algorithms in feature spaces as elegant and efficient methods of realizing such machines. Examples thereof are Support Vector Machines (SVM) and Kernel PCA (Principal Component Analysis). More important than any individual example of a kernel algorithm, however, is the insight that any algorithm that can be cast in terms of dot products can be generalized to a nonlinear setting using kernels. Finally, we illustrate the significance of kernel algorithms by briefly describing industrial and academic applications, including ones where we obtained benchmark record results.
ei Schölkopf, B., Müller, K., Smola, A. Lernen mit Kernen: Support-Vektor-Methoden zur Analyse hochdimensionaler Daten Informatik - Forschung und Entwicklung, 14(3):154-163, September 1999 (article)
Input space versus feature space in kernel-based methods
Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A.
IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)
This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data.
ei Schölkopf, B., Mika, S., Burges, C., Knirsch, P., Müller, K., Rätsch, G., Smola, A. Input space versus feature space in kernel-based methods IEEE Transactions On Neural Networks, 10(5):1000-1017, September 1999 (article)
p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53.
Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., CH, ..
Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)
Mutations in the p53 tumor suppressor gene are the most frequent genetic alterations found in human cancers. Recent identification of two human homologues of p53 has raised the prospect of functional interactions between family members via a conserved oligomerization domain. Here we report in vitro and in vivo analysis of homo- and hetero-oligomerization of p53 and its homologues, p63 and p73. The oligomerization domains of p63 and p73 can independently fold into stable homotetramers, as previously observed for p53. However, the oligomerization domain of p53 does not associate with that of either p73 or p63, even when p53 is in 15-fold excess. On the other hand, the oligomerization domains of p63 and p73 are able to weakly associate with one another in vitro. In vivo co-transfection assays of the ability of p53 and its homologues to activate reporter genes showed that a DNA-binding mutant of p53 was not able to act in a dominant negative manner over wild-type p73 or p63 but that a p73 mutant could inhibit the activity of wild-type p63. These data suggest that mutant p53 in cancer cells will not interact with endogenous or exogenous p63 or p73 via their respective oligomerization domains. It also establishes that the multiple isoforms of p63 as well as those of p73 are capable of interacting via their common oligomerization domain.
ei Davison, T., Vagner, C., Kaghad, M., Ayed, A., Caput, D., CH, .. p73 and p63 are homotetramers capable of weak heterotypic interactions with each other but not with p53. Journal of Biological Chemistry, 274(26):18709-18714, June 1999 (article)
Estimating the support of a high-dimensional distribution
Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R.
(MSR-TR-99-87), Microsoft Research, 1999 (techreport)
ei Schölkopf, B., Platt, J., Shawe-Taylor, J., Smola, A., Williamson, R. Estimating the support of a high-dimensional distribution (MSR-TR-99-87), Microsoft Research, 1999 (techreport)
Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots
Balakrishnan, K., Bousquet, O., Honavar, V.
Adaptive Behavior, 7(2):173-216, 1999 (article)
ei Balakrishnan, K., Bousquet, O., Honavar, V. Spatial Learning and Localization in Animals: A Computational Model and Its Implications for Mobile Robots Adaptive Behavior, 7(2):173-216, 1999 (article)
SVMs for Histogram Based Image Classification
Chapelle, O., Haffner, P., Vapnik, V.
IEEE Transactions on Neural Networks, (9), 1999 (article)
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that Support Vector Machines (SVM) can generalize well on difficult image classification problems where the only features are high dimensional histograms. Heavy-tailed RBF kernels of the form $K(mathbf{x},mathbf{y})=e^{-rhosum_i |x_i^a-y_i^a|^{b}}$ with $aleq 1$ and $b leq 2$ are evaluated on the classification of images extracted from the Corel Stock Photo Collection and shown to far outperform traditional polynomial or Gaussian RBF kernels. Moreover, we observed that a simple remapping of the input $x_i rightarrow x_i^a$ improves the performance of linear SVMs to such an extend that it makes them, for this problem, a valid alternative to RBF kernels.
GZIP [BibTex]
ei Chapelle, O., Haffner, P., Vapnik, V. SVMs for Histogram Based Image Classification IEEE Transactions on Neural Networks, (9), 1999 (article)
Generalization Bounds via Eigenvalues of the Gram matrix
Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R.
(99-035), NeuroCOLT, 1999 (techreport)
ei Schölkopf, B., Shawe-Taylor, J., Smola, A., Williamson, R. Generalization Bounds via Eigenvalues of the Gram matrix (99-035), NeuroCOLT, 1999 (techreport)
Sparse kernel feature analysis
Smola, A., Mangasarian, O., Schölkopf, B.
(99-04), Data Mining Institute, 1999, 24th Annual Conference of Gesellschaft f{\"u}r Klassifikation, University of Passau (techreport)
ei Smola, A., Mangasarian, O., Schölkopf, B. Sparse kernel feature analysis (99-04), Data Mining Institute, 1999, 24th Annual Conference of Gesellschaft f{\"u}r Klassifikation, University of Passau (techreport)
Parameterized modeling and recognition of activities
Yacoob, Y., Black, M. J.
Computer Vision and Image Understanding, 73(2):232-247, 1999 (article)
In this paper we consider a class of human activities—atomic activities—which can be represented as a set of measurements over a finite temporal window (e.g., the motion of human body parts during a walking cycle) and which has a relatively small space of variations in performance. A new approach for modeling and recognition of atomic activities that employs principal component analysis and analytical global transformations is proposed. The modeling of sets of exemplar instances of activities that are similar in duration and involve similar body part motions is achieved by parameterizing their representation using principal component analysis. The recognition of variants of modeled activities is achieved by searching the space of admissible parameterized transformations that these activities can undergo. This formulation iteratively refines the recognition of the class to which the observed activity belongs and the transformation parameters that relate it to the model in its class. We provide several experiments on recognition of articulated and deformable human motions from image motion parameters.
ps Yacoob, Y., Black, M. J. Parameterized modeling and recognition of activities Computer Vision and Image Understanding, 73(2):232-247, 1999 (article)
< 研究速報>(< 小特集> マイクロマシン)
Sitti, M., 橋本秀紀,
生産研究, 51(8):651-653, 東京大学, 1999 (article)
pi Sitti, M., 橋本秀紀, < 研究速報>(< 小特集> マイクロマシン) 生産研究, 51(8):651-653, 東京大学, 1999 (article)
Micro/Nano Manipulation Using Atomic Force Microscope.
Sitti, M., Hashimoto, H.
生産研究, 51(8):651-653, 東京大学生産技術研究所, 1999 (article)
pi Sitti, M., Hashimoto, H. Micro/Nano Manipulation Using Atomic Force Microscope. 生産研究, 51(8):651-653, 東京大学生産技術研究所, 1999 (article)
Is imitation learning the route to humanoid robots?
Trends in Cognitive Sciences, 3(6):233-242, 1999, clmc (article)
This review will focus on two recent developments in artificial intelligence and neural computation: learning from imitation and the development of humanoid robots. It will be postulated that the study of imitation learning offers a promising route to gain new insights into mechanisms of perceptual motor control that could ultimately lead to the creation of autonomous humanoid robots. This hope is justified because imitation learning channels research efforts towards three important issues: efficient motor learning, the connection between action and perception, and modular motor control in form of movement primitives. In order to make these points, first, a brief review of imitation learning will be given from the view of psychology and neuroscience. In these fields, representations and functional connections between action and perception have been explored that contribute to the understanding of motor acts of other beings. The recent discovery that some areas in the primate brain are active during both movement perception and execution provided a first idea of the possible neural basis of imitation. Secondly, computational approaches to imitation learning will be described, initially from the perspective of traditional AI and robotics, and then with a focus on neural network models and statistical learning research. Parallels and differences between biological and computational approaches to imitation will be highlighted. The review will end with an overview of current projects that actually employ imitation learning for humanoid robots.
am Schaal, S. Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences, 3(6):233-242, 1999, clmc (article)
Virtual Reality-Based Teleoperation in the Micro/Nano World.
pi Sitti, M., Hashimoto, H. Virtual Reality-Based Teleoperation in the Micro/Nano World. 生産研究, 51(8):654-656, 東京大学生産技術研究所, 1999 (article)
Segmentation of endpoint trajectories does not imply segmented control
Sternad, D., Schaal, D.
Experimental Brain Research, 124(1):118-136, 1999, clmc (article)
While it is generally assumed that complex movements consist of a sequence of simpler units, the quest to define these units of action, or movement primitives, still remains an open question. In this context, two hypotheses of movement segmentation of endpoint trajectories in 3D human drawing movements are re-examined: (1) the stroke-based segmentation hypothesis based on the results that the proportionality coefficient of the 2/3 power law changes discontinuously with each new â??strokeâ?, and (2) the segmentation hypothesis inferred from the observation of piecewise planar endpoint trajectories of 3D drawing movements. In two experiments human subjects performed a set of elliptical and figure-8 patterns of different sizes and orientations using their whole arm in 3D. The kinematic characteristics of the endpoint trajectories and the seven joint angles of the arm were analyzed. While the endpoint trajectories produced similar segmentation features as reported in the literature, analyses of the joint angles show no obvious segmentation but rather continuous oscillatory patterns. By approximating the joint angle data of human subjects with sinusoidal trajectories, and by implementing this model on a 7-degree-of-freedom anthropomorphic robot arm, it is shown that such a continuous movement strategy can produce exactly the same features as observed by the above segmentation hypotheses. The origin of this apparent segmentation of endpoint trajectories is traced back to the nonlinear transformations of the forward kinematics of human arms. The presented results demonstrate that principles of discrete movement generation may not be reconciled with those of rhythmic movement as easily as has been previously suggested, while the generalization of nonlinear pattern generators to arm movements can offer an interesting alternative to approach the question of units of action.
am Sternad, D., Schaal, D. Segmentation of endpoint trajectories does not imply segmented control Experimental Brain Research, 124(1):118-136, 1999, clmc (article)
Teleoperated nano scale object manipulation
Recent Advances on Mechatronics, pages: 322-335, Singapore: Springer-Verlag, 1999 (article)
pi Sitti, M., Hashimoto, H. Teleoperated nano scale object manipulation Recent Advances on Mechatronics, pages: 322-335, Singapore: Springer-Verlag, 1999 (article)
View-based cognitive mapping and path planning
Schölkopf, B., Mallot, H.
(7), Max Planck Institute for Biological Cybernetics Tübingen, November 1994, This technical report has also been published elsewhere (techreport)
We present a scheme for learning a cognitive map of a maze from a sequence of views and movement decisions. The scheme is based on an intermediate representation called the view graph. We show that this representation carries sufficient information to reconstruct the topological and directional structure of the maze. Moreover, we present a neural network that learns the view graph during a random exploration of the maze. We use a unsupervised competitive learning rule which translates temporal sequence (rather than similarity) of views into connectedness in the network. The network uses its knowledge of the topological and directional structure of the maze to generate expectations about which views are likely to be perceived next, improving the view recognition performance. We provide an additional mechanism which uses the map to find paths between arbitrary points of the previously explored environment. The results are compared to findings of behavioural neuroscience.
ei Schölkopf, B., Mallot, H. View-based cognitive mapping and path planning (7), Max Planck Institute for Biological Cybernetics Tübingen, November 1994, This technical report has also been published elsewhere (techreport)
|
CommonCrawl
|
PFA toolbox: a MATLAB tool for Metabolic Flux Analysis
Yeimy Morales1 (ORCID: 0000-0003-0245-3815),
Gabriel Bosque2,
Josep Vehí1,
Jesús Picó2 &
Francisco Llaneras1
BMC Systems Biology volume 10, Article number: 46 (2016)
Metabolic Flux Analysis (MFA) is a methodology that has been successfully applied to estimate metabolic fluxes in living cells. However, traditional frameworks based on this approach have some limitations, particularly when measurements are scarce and imprecise. This is very common in industrial environments. The PFA Toolbox can be used to address those scenarios.
Here we present the PFA (Possibilistic Flux Analysis) Toolbox for MATLAB, which simplifies the use of Interval and Possibilistic Metabolic Flux Analysis. The main features of the PFA Toolbox are the following: (a) It provides reliable MFA estimations in scenarios where only a few fluxes can be measured or those available are imprecise. (b) It provides tools to easily plot the results as interval estimates or flux distributions. (c) It is composed of simple functions that MATLAB users can apply in flexible ways. (d) It includes a Graphical User Interface (GUI), which provides a visual representation of the measurements and their uncertainty. (e) It can use stoichiometric models in COBRA format. In addition, the PFA Toolbox includes a User's Guide with a thorough description of its functions and several examples.
The PFA Toolbox for MATLAB is a freely available toolbox that is able to perform Interval and Possibilistic MFA estimations.
The problem of estimating unknown metabolic fluxes in living cells has been tackled using several methodologies. MFA is one of the most extensively and successfully applied approaches to estimating fluxes [1]. Usually, MFA refers to 13C-MFA, which uses stable isotopically labeled substrates (e.g., 13C-labeled glucose) combined with stoichiometric balancing to estimate the metabolic fluxes in steady-state systems [2, 3]. In this study, however, we refer to non-13C-MFA methods. These methods mainly rely on measurements of external fluxes (uptake and production rates) to estimate the flux state of cells. Traditional MFA methods present some limitations when accounting for irreversible reactions [4], underdetermined problems [5], and a lack of measurements [6]. To reduce these limitations we have developed Interval [7] and Possibilistic [8] MFA methods, which are well suited to scenarios with limited available data. Their main benefits are the following [6–10]: (a) They can consider the irreversibility of the reactions and other inequality constraints. (b) They are able to represent the measured fluxes as intervals, and even distributions, to describe the uncertainty of the system. (c) They provide interval estimates, which are more reliable and more informative than pointwise solutions, particularly when multiple flux values are possible. (d) They are able to perform estimations in scenarios of high uncertainty or lack of measurements, keeping those estimates as reliable as possible. In addition, (e) Possibilistic MFA allows the detection and handling of inconsistencies between a model and a set of measurements. The PFA Toolbox provides all these features while preserving computational efficiency.
In recent years, several published works have used these methodologies to perform interval estimations of metabolic fluxes [9, 11–18] and consistency analyses with Possibilistic MFA [9, 17, 18]. Interval MFA was also implemented in FASIMU [16]. Indeed, any intermediate user of MATLAB, Mathematica, R, etc. can implement Interval MFA with little effort. This ease of implementation has led Interval MFA to be used more often than Possibilistic MFA, which requires more mathematical development and additional linear optimizations. The PFA Toolbox presented here simplifies the use of both methods.
The PFA Toolbox provides a comprehensive set of MATLAB functions to easily and quickly apply Interval and Possibilistic MFA. The PFA Toolbox is completely free and open source; users are welcome to modify and adapt the toolbox code to build their own particular functions to fulfill specific requirements under the mild conditions described in the accompanying license. In the following subsections, we briefly describe the methods implemented in the toolbox: Interval MFA and Possibilistic MFA. A detailed description of both methods can be found in [6].
Interval MFA
Interval MFA is a simple yet powerful extension of traditional MFA methods. It starts with a stoichiometric model providing a set of model-based constraints, denoted in the sequel as MOC, defined by a stoichiometric matrix N and a set of irreversibility constraints. Together, these define the space of feasible steady-state flux distributions [19, 20] (matrices and vectors are denoted in bold):
$$ MOC=\left\{\begin{array}{l} \mathbf{N}\cdot \mathbf{v}=\mathbf{0} \\ \mathbf{D}\cdot \mathbf{v}\ge \mathbf{0} \end{array}\right. \tag{1} $$
where, for a system with n metabolites and r reactions, N ∈ R^{n×r} is the stoichiometric matrix, D ∈ R^{r×r} is a diagonal matrix with D_ii = 1 if flux i is irreversible (0 otherwise), and v ∈ R^{r} is the vector of metabolic fluxes. The values of v that are solutions of (1) define the feasible flux distributions.
Consider now a subset v_m ∈ R^{m} of measured fluxes in v, with m typically much smaller than r. Following the interval approach, we represent each measured flux as an interval by means of inequalities:
$$ \mathbf{v}_{m}^{m}\le \mathbf{v}_{m}\le \mathbf{v}_{m}^{M} \tag{2} $$
where v_m^m and v_m^M are vectors with the minimum and maximum possible values that the measured fluxes v_m can take due to measurement uncertainty.
Equations (1–2) describe a constraint-based model (CB) that defines the space of feasible fluxes. From this CB, the interval of feasible (possible) values for any flux v_i in the flux distribution v can be obtained by solving two Linear Programming (LP) problems, as follows:
$$ \begin{array}{l} v_{i}^{m}= \min\ v_{i} \quad s.t.\ \mathbf{v}\in \left\{\begin{array}{l} MOC \\ \mathbf{v}_{m}^{m}\le \mathbf{v}_{m}\le \mathbf{v}_{m}^{M} \end{array}\right. \\ v_{i}^{M}= \max\ v_{i} \quad s.t.\ \mathbf{v}\in \left\{\begin{array}{l} MOC \\ \mathbf{v}_{m}^{m}\le \mathbf{v}_{m}\le \mathbf{v}_{m}^{M} \end{array}\right. \end{array} \tag{3} $$
This procedure provides an interval estimate for any flux of interest. These interval estimates are particularly useful when the measurements are imprecise and/or only a few of them are available. Extra details about Interval MFA can be found in [6, 7, 10].
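To make the two LPs in (3) concrete, the following is a minimal MATLAB/YALMIP sketch on a small hypothetical network; the matrix, the measured interval and the chosen solver are illustrative assumptions, not the toolbox's own API:

```matlab
% Interval MFA sketch with YALMIP on a hypothetical 2-metabolite, 4-flux network.
% Balances: metabolite A: v1 - v2 - v3 = 0;  metabolite B: v2 - v4 = 0.
N = [1 -1 -1  0;
     0  1  0 -1];
v = sdpvar(size(N,2),1);               % vector of metabolic fluxes
MOC = [N*v == 0, v >= 0];              % eq. (1); all fluxes irreversible here
MEC = [9.5 <= v(1) <= 10.5];           % measured flux v1 as an interval, eq. (2)
ops = sdpsettings('solver','glpk','verbose',0);

% Interval estimate for flux v3: the two LPs of eq. (3)
optimize([MOC, MEC],  v(3), ops);  v3_min = value(v(3));   % minimize v3
optimize([MOC, MEC], -v(3), ops);  v3_max = value(v(3));   % maximize v3
fprintf('v3 lies in [%.2f, %.2f]\n', v3_min, v3_max);
```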
Possibilistic MFA
Possibilistic MFA may be seen as a more flexible and powerful extension of Interval MFA. The methodology is based on two ideas: (a) representing knowledge with constraints satisfied to a certain degree, thus transforming the feasibility of a potential solution into a gradual notion of "possibility" that accounts for uncertainty, and (b) using computationally efficient optimization-based methods, such as Linear Programming, to query for the "most possible" solutions. This methodology is able to address two different problems: (a) evaluating the consistency between a model and a set of measurements, and (b) obtaining rich estimates of metabolic fluxes. Instead of pointwise estimates, it computes interval estimates for a desired degree of possibility, and even entire possibility distributions.
Possibilistic MFA starts with a set of model-based constraints (MOC) defined in (1).
In this case, however, instead of using the simple inequalities (2), the measurements are incorporated in possibilistic terms by means of a set of constraints and two pairs of non-negative slack variables that represent the measurements' uncertainty. These constraints, which we call measurement constraints (MEC), can be expressed as:
$$ MEC=\left\{\begin{array}{l} \mathbf{w}_{m}=\mathbf{v}_{m}+\boldsymbol{\varepsilon}_{1}-\boldsymbol{\mu}_{1}+\boldsymbol{\varepsilon}_{2}-\boldsymbol{\mu}_{2} \\ \boldsymbol{\varepsilon}_{1},\ \boldsymbol{\mu}_{1}\ge 0 \\ 0\le \boldsymbol{\varepsilon}_{2}\le \boldsymbol{\varepsilon}_{2}^{max} \\ 0\le \boldsymbol{\mu}_{2}\le \boldsymbol{\mu}_{2}^{max} \end{array}\right. \tag{4} $$
where v_m is the vector of the actual values of the measured fluxes and w_m is the vector of their measured values; both differ due to errors and imprecision. This uncertainty is represented by the slack variables ε_1, μ_1, ε_2 and μ_2. The bounds ε_2^max and μ_2^max define a band of fully possible values for v_m around the measured values w_m. The components ε_1 and μ_1 are penalized in a cost index (5) to assign a decreasing possibility to larger errors. Each candidate solution of (1) and (4) can be denoted as δ = {v, w_m, ε_1, μ_1, ε_2, μ_2}.
Now we define a function π(δ): Δ → [0, 1] that assigns a possibility in [0, 1] to each candidate solution, ranging from impossible to fully possible. A simple way to build this function is to use a linear cost index J that penalizes large deviations between the actual values of the fluxes and the measured ones:
$$ J=\boldsymbol{\alpha}\cdot \boldsymbol{\varepsilon}_{1}+\boldsymbol{\beta}\cdot \boldsymbol{\mu}_{1} \tag{5} $$
The possibility of each solution is defined as:
$$ \pi \left(\delta \right)= \exp \left(-J\left(\delta \right)\right),\quad \delta \in MEC\cap MOC \tag{6} $$
where α and β are row vectors of accuracy coefficients, or weights, that define each measurement's a priori accuracy. These weights need to be defined by the user; e.g., if the sensor error is symmetric, α and β should be defined to be equal.
From this point, Possibilistic MFA calculates different estimates by solving LP problems. One can compute the set of flux values with maximum possibility (a pointwise estimate) or more informative estimates in the form of intervals or flux distributions.
Pointwise estimations
The simplest outcome of a Possibilistic MFA problem is a pointwise estimate. It corresponds to the flux values with the maximum possibility (minimum cost), which are obtained by minimizing J in the following LP problem:
$$ J_{\min }=\underset{\boldsymbol{\varepsilon},\ \boldsymbol{\mu},\ \mathbf{v}}{\min}\ J=\boldsymbol{\alpha}\cdot \boldsymbol{\varepsilon}_{1}+\boldsymbol{\beta}\cdot \boldsymbol{\mu}_{1} \quad s.t.\ \left\{MOC\cap MEC\right\} \tag{7} $$
The solution flux vector v, which we call v_mp, contains the most possible values that are consistent with both the model and the measurements.
This pointwise estimation may be unreliable when multiple solutions are reasonably possible. In these instances, distributions and interval estimates can be computed instead.
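A minimal MATLAB/YALMIP sketch of this pointwise estimation follows; the network, the measured value, the band ε_2^max = μ_2^max = 0.5 and the weights are illustrative assumptions, not values taken from the toolbox:

```matlab
% Possibilistic MFA sketch with YALMIP: most possible solution, eqs. (4)-(7).
N = [1 -1 -1  0;                       % hypothetical network, as before
     0  1  0 -1];
v  = sdpvar(size(N,2),1);
MOC = [N*v == 0, v >= 0];              % model-based constraints, eq. (1)
w1 = 10.0;                             % measured value for flux v1
e1 = sdpvar(1); m1 = sdpvar(1);        % penalized slacks (large errors)
e2 = sdpvar(1); m2 = sdpvar(1);        % slacks spanning the fully possible band
MEC = [w1 == v(1) + e1 - m1 + e2 - m2, ...
       e1 >= 0, m1 >= 0, 0 <= e2 <= 0.5, 0 <= m2 <= 0.5];   % eq. (4)
alpha = 2; beta = 2;                   % symmetric accuracy weights
J = alpha*e1 + beta*m1;                % cost index, eq. (5)
ops = sdpsettings('solver','glpk','verbose',0);
optimize([MOC, MEC], J, ops);          % LP of eq. (7)
Jmin = value(J);                       % minimum cost
v_mp = value(v);                       % most possible flux vector
poss = exp(-Jmin);                     % its possibility, eq. (6)
```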
Interval estimates
The interval estimate [v_γ^m, v_γ^M] for a flux v, with a conditional possibility higher than γ, can be computed by solving two extra LPs:
$$ v_{\gamma}^{m}=\underset{\boldsymbol{\varepsilon},\ \boldsymbol{\mu},\ \mathbf{v}}{\min}\ v \quad s.t.\ \left\{\begin{array}{l} MOC\cap MEC \\ J-J_{\min }<-\ln \gamma \end{array}\right. \tag{8} $$
The upper bound v_γ^M is obtained by replacing the minimization with a maximization.
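Continuing the pointwise sketch above (and reusing its variables MOC, MEC, J, Jmin, v and ops), the γ-interval of eq. (8) for a flux takes only two more LPs; the flux index and γ are again illustrative:

```matlab
% Conditional interval estimate for flux v3 with possibility >= gamma, eq. (8).
% The strict inequality of eq. (8) is implemented as a non-strict LP constraint.
gamma = 0.8;
Cg = [MOC, MEC, J - Jmin <= -log(gamma)];
optimize(Cg,  v(3), ops);  v3_lo = value(v(3));   % lower bound
optimize(Cg, -v(3), ops);  v3_hi = value(v(3));   % upper bound
```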
Distributions as estimates
The complete possibility distribution of a flux can also be obtained, either as a marginal or as a conditional possibility. Marginal possibilities provide the degree of possibility of each value for a given flux. Conditional distributions are equivalent to normalizing the marginal possibility distribution to a maximum equal to one.
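A marginal distribution can be traced by fixing the flux of interest on a grid of values and recording the best attainable possibility at each point; a sketch, reusing the variables of the pointwise example above (grid range and flux index are illustrative):

```matlab
% Marginal possibility distribution for flux v3: pi(x) = exp(-Jmin | v3 = x).
grid_v3 = linspace(0, 12, 61);
poss_v3 = zeros(size(grid_v3));
for k = 1:numel(grid_v3)
    diagn = optimize([MOC, MEC, v(3) == grid_v3(k)], J, ops);
    if diagn.problem == 0                    % LP solved successfully
        poss_v3(k) = exp(-value(J));         % possibility at this flux value
    end                                      % infeasible points keep pi = 0
end
plot(grid_v3, poss_v3); xlabel('v_3'); ylabel('possibility');
```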
Possibilistic MFA has been cast as a linear optimization problem, for which well-known and efficient tools exist. This computational efficiency makes the methodology suitable, in principle, for large-scale metabolic networks.
More information about the methods and a deeper discussion about the strengths and limitations of each approach can be found in our previous works [6–8, 10] and in the toolbox User's Guide (http://kikollan.github.io/PFA-Toolbox/).
The PFA Toolbox has been developed to run in MATLAB. Its core is a set of MATLAB functions that solve each step in a typical MFA problem. The code for all functions is provided with the toolbox. The PFA Toolbox also includes a Graphical User Interface (GUI) to represent the measurements in possibilistic terms. The GUI runs within MATLAB.
The toolbox requires solving LP problems, and those are solved through a flexible and efficient external modelling tool, YALMIP [21]. We provide a copy of YALMIP within the PFA Toolbox, but further information about it can be found at the YALMIP website [22]. YALMIP can use different LP solvers, and so does the PFA Toolbox. Three LP solvers were tested: IBM ILOG CPLEX [23], GLPK [24], and Linprog, the LP solver included in MATLAB. However, we do not recommend the use of Linprog, which proved unreliable, especially for larger MFA problems. Instead, CPLEX and GLPK showed excellent performance. CPLEX has a 90-day free evaluation version and can be used for free for research and academic purposes. GLPK is freely available.
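The underlying LP solver can be chosen through YALMIP's standard options mechanism; a one-line sketch (the named solver must be installed and on the MATLAB path):

```matlab
% Select the LP solver that YALMIP hands the problems to.
ops = sdpsettings('solver','cplex');   % or 'glpk'; 'linprog' is not recommended
```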
In this section, we show how to use the PFA Toolbox for MATLAB. A list of the functions provided by the toolbox is shown in Table 1. These functions simplify the process of (1) defining the MFA problem, (2) computing different types of estimates (pointwise, interval or distributions) and (3) plotting the results. There is also a function to plot the measurements defined in possibilistic terms, and a GUI to define those measurements. Advanced users can modify and extend each function.
Table 1 List of functions in the PFA Toolbox
The main features of the PFA Toolbox are the following:
» It gives reliable MFA estimations even in uncertain or underdetermined scenarios (those where only a few fluxes can be measured).
» It provides MFA estimations accounting for measurement imprecision.
» It provides functions to plot interval estimates and distributions.
» It is composed of simple, free and open functions.
A step-by-step protocol to apply Interval or Possibilistic MFA is presented in Fig. 1.
Protocol to use the PFA Toolbox. A step-by-step guide to using the PFA Toolbox. The protocol is the same for solving MFA problems with Interval and Possibilistic MFA. Possibilistic MFA has two additional, optional steps: a Graphical User Interface (GUI) to represent the measurements graphically in possibilistic terms, and a function to check whether the measurements and their uncertainties are well defined
In addition, the toolbox can use stoichiometric models in the format of the COBRA Toolbox (Constraint-Based Reconstruction and Analysis), a format widely used due to the popularity of COBRA. As an alternative, the user can simply define a model by providing a stoichiometric matrix.
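As a sketch of the COBRA route, the stoichiometry needed for the MOC constraints can be pulled out of a COBRA-format model as follows; readCbModel is the standard COBRA Toolbox loader, while the file name and the reversibility test are assumptions on our part:

```matlab
% Load a COBRA-format model and extract what the MOC constraints need.
model = readCbModel('my_network.mat');   % hypothetical model file
N   = full(model.S);                     % stoichiometric matrix
rev = model.lb < 0;                      % one common way to flag reversible fluxes
```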
The main features of the toolbox are shown in the next three examples. Additional examples and a thorough description of all functionalities of the toolbox are provided in the User's Guide. The details about the mathematical methods implemented in the toolbox can be found in [7, 8, 10], and in [6].
Example of flux estimation under data scarcity
We use a toy metabolic network to illustrate how to use the PFA Toolbox in scenarios of data scarcity. The first step is to formulate the problem. Consider the metabolic network shown in Fig. 2a. The network has six fluxes and three balanced metabolites. One of the fluxes is reversible. Additionally, the fluxes v4 and v6 have been measured, with values w4 = 9.5 mmol/h, and w6 = 10.5 mmol/h.
PFA Toolbox methodology to solve the example of flux estimation under data scarcity. a The upper panel presents a simple metabolic network: metabolites are in capital letters, each vj represents a flux, and double arrows indicate a reversible reaction. b The step-by-step procedure followed to solve the MFA problem when only two measurements are known. c The right panel shows the MATLAB code used to perform the computations
The MFA problem consists in the estimation of all six fluxes. Notice, however, that traditional MFA cannot be performed because the problem is underdetermined: any pointwise estimate will be only one particular solution out of a set of possible ones [5]. The methods in the PFA Toolbox tackle this situation and provide reliable and informative estimates.
In this case, we choose to apply Possibilistic MFA to estimate the fluxes. The first step to solve the problem is to define the model-based constraints (MOC). The stoichiometric model can be defined directly in the code or be provided in COBRA format.
The next step is the addition of the measurements and their uncertainties; in this example, we assume that the measurement w4 is very accurate, but w6 is not. In agreement with the problem formulation, we assign values to the bounds of the slack variables ε2 and μ2, and to the weights α and β (details about this process can be found in the User's Guide).
Once the MOC and MEC constraints have been defined, the third step is to obtain the estimates. The Possibilistic MFA methodology calculates three types of estimates. In this case, we compute three interval estimates for each flux, for conditional possibilities of 0.5, 0.8 and 1.
Finally, we plot the interval estimates using the function plot_intervals. The metabolic network and the main features of the procedure to solve the problem with the PFA Toolbox are shown in Fig. 2. Figure 3a shows the interval estimates for each flux. Notice that even though only two measurements are available, the estimation is reliable.
Flux estimation. Estimates for every flux were obtained with the PFA Toolbox. a Three interval estimates are given, for maximum conditional possibility (box), possibility of 0.8 (black line), and 0.5 (gray line). b Possibility distributions are depicted with solid lines; dashed lines represent the measured values
This same procedure can be applied to obtain other types of estimates, such as the complete possibility distribution of a flux. Those computations can be performed using the function solve_PossInterval. The obtained distributions are for conditional possibilities (see [8] for a detailed explanation of the notion of conditional possibility). These possibilistic distributions can be plotted with the function plot_distribution. As an example, Fig. 3b shows the estimated distributions for all six fluxes. The results show, for instance, that the most possible value for v1 is 2.75 mmol/h (π = 1), that v1 being equal to 6.1 mmol/h is a less possible situation (π = 0.6), and that v1 being larger than 18 mmol/h is very unlikely (π < 0.1).
The model and the code for all the computations are provided in Additional file 1a.
Note: to apply Interval MFA, a similar protocol can be followed. The main difference is that the measurements are represented as intervals instead of in possibilistic terms.
Example of flux estimation: biomass growth of Pichia pastoris
In this example, we estimate the growth of several chemostat cultures of P. pastoris. For each chemostat only a few extracellular fluxes are measured (mainly substrate uptake and secretion rates), and the aim is to estimate the cellular growth.
The constraint-based model of P. pastoris used here is presented in [18] (see Additional file 2). It is a relatively small representation that includes only the main catabolic pathways and considers the uptake of the usual carbon sources: methanol, glucose and glycerol. The stoichiometric model contains 37 metabolites and 48 reactions, with reversibility accounted for. The stoichiometric matrix and all the measurements can be found in Additional file 3 [31–35].
We choose to apply Possibilistic MFA to perform the estimation. As before, we start by defining the MOC and MEC constraints. In this example, we assign the same uncertainty to all the measurements: a deviation of 5 % around the measured value is assumed to be fully possible, while a deviation larger than 20 % is assumed to be an event of low possibility (π = 0.1). The next step is to estimate the growth for each experiment. We compute three interval estimates for conditional possibilities of 0.99, 0.5 and 0.1. Finally, we plot the interval estimates; the results are shown in Fig. 4a.
Growth estimations with possibilistic MFA for P. pastoris and E. coli. a Example with six P. pastoris experiments. b Example with E. coli experiments. In both cases, three interval estimates are represented, for conditional possibilities equal to 0.99 (box), 0.5 (bar) and 0.1 (line). The crosses represent the actual experimental values
The estimations show good agreement with the experimental growth rates (as expected, since this model and the data have been tested previously). Notice that the interval estimates not only predict the growth rates but also provide an indication of the estimation's reliability. The complete code for all computations can be found in Additional file 1b.
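One way to encode the uncertainty specification used above (5 % fully possible, π = 0.1 at a 20 % deviation) in the terms of (4)–(6), assuming symmetric weights, is the following; the toolbox may parameterize this differently, so this is only a sketch of the arithmetic:

$$ \varepsilon_{2}^{max}=\mu_{2}^{max}=0.05\,w_{m}, \qquad \exp \left(-\alpha \left(0.20-0.05\right)w_{m}\right)=0.1\ \Rightarrow\ \alpha =\beta =\frac{\ln 10}{0.15\,w_{m}} $$

With this choice, a measurement deviating by 20 % consumes the fully possible band plus a penalized slack of 0.15 w_m, giving J = ln 10 and hence π = exp(-J) = 0.1, as required.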
Example of flux estimation: growth of Escherichia coli
Here we use a well-known model of E. coli, taken from [25] and illustrated in Additional file 4. It is a relatively compact model containing 72 metabolites and 95 reactions. We consider six chemostat experiments of E. coli growing on glucose [26]. The datasets contain information only for a handful of extracellular measurements (growth rate, substrate uptake, oxygen uptake, CO2 production, and acetate and pyruvate secretion). The model and the measurements can be found in Additional file 5.
Possibilistic MFA is applied again to estimate the growth rate for all six scenarios. The problem is similar to the previous one, and we assume the same uncertainty for each measurement. However, we now consider a larger model for a different and widely used organism. The computation procedure is analogous to the one previously described. The complete code for all computations can be found in Additional file 1c.
The flux estimates computed with the toolbox are compatible with the actual growth rate in all scenarios (Fig. 4b). Notice, however, that the estimates are wider than in the first example (no growth is possible in all of them, but the maximum possible growth is near the actual one). The model is larger, and the available measurements are not enough to completely determine the flux state of the cells. This illustrates one limitation of Interval and Possibilistic MFA: the estimates are only as precise as the uncertainty and the available measurements allow.
Example of consistency analysis with P. pastoris
The last example illustrates how the PFA Toolbox can be used for another purpose: to evaluate the degree of consistency between a given model and a set of experimental measurements. Consider the data of six chemostat experiments with P. pastoris taken from the literature (Table 2). We test how consistent the data of each experiment are with the model of P. pastoris described previously. We assume that the model is reliable and can therefore be used to evaluate the validity of each dataset. Notice that this is a strong assumption, valid here for the purpose of this example. It is indeed possible to perform the exact opposite analysis: to obtain several experimental datasets and use them to assess the quality of a metabolic model, as Possibilistic MFA was used to validate this model of P. pastoris [9, 18]. The objective of the analysis performed here is to detect whether there are larger-than-expected errors in the measurements.
Table 2 Experimental data for six chemostat experiments with Pichia pastoris and an analysis of its consistency against a model
We start, as in previous examples, by defining the MOC and MEC constraints. The next step is to compute the estimation. In this example, we compute the most possible solution for each experiment with the solve_maxPoss function. This provides the maximum-possibility flux vector and the associated degree of possibility (π_mp) in [0, 1] of the most possible solution. This value provides an indication of the agreement between the model-based constraints (MOC) and the measurement constraints (MEC).
A possibility equal to one is interpreted as complete consistency; a lower value implies that there are errors in one (or more) of the measurements or in the model. The complete MATLAB code for this computation can be found in Additional file 1b.
The results presented in Table 2 show that all datasets except one are highly consistent with the model. Dataset 1 has a low degree of possibility (below 0.2). This suggests that one or more of the measured fluxes in that experiment are unreliable and may contain errors.
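Given (6)–(7), this consistency score has a direct reading in terms of the optimal cost of the underlying LP, so a threshold on π_mp is simply a bound on J_min:

$$ \pi_{mp}= \exp \left(-J_{\min }\right)<0.2 \iff J_{\min }>-\ln 0.2\approx 1.61 $$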
All the computations of these four examples were performed with the PFA Toolbox. The computations take approximately 13 s on a 64-bit Windows PC (Intel Core™ i5 2.5 GHz processor), using MATLAB R2012a with IBM ILOG CPLEX Optimizer as the solver for the Linear Programming problems.
Notes on computational efficiency and large networks
The methods used by the PFA Toolbox, Possibilistic MFA and Interval MFA, have been cast as linear optimization problems, and thus they can be solved with computational efficiency. This makes these methodologies suitable for large-scale metabolic networks. For instance, when tested on a genome-scale E. coli model (iJO1366) that contains 2583 reactions [27], the PFA Toolbox is able to get estimates for all 2507 fluxes with three degrees of possibility (i.e., solving 3×2507 LP problems). Computing those estimates required 120 min on a PC with an AMD A10-5800K APU with Radeon HD graphics (3.80 GHz) and 8 GB of RAM, using the GLPK optimizer. This suggests that the PFA Toolbox may be able to solve MFA flux estimations of large models with good results and at a reasonable computational cost.
There is, however, a limitation regarding MFA-wise methods when estimating fluxes in large networks: there may be too many flux vectors compatible with the (few) available measurements [28]. Unlike traditional methods, those proposed here may still be of use in this situation. Possibilistic MFA and Interval MFA capture all the equally possible flux states (or "similarly" possible) by means of possibilistic distributions or intervals. If there is a wide range of candidates, however, the estimation may be only slightly informative. If this is the case, one could decide to incorporate a rational assumption, as done in FBA methods [29, 30].
We have presented the PFA Toolbox for MATLAB. This toolbox provides a set of MATLAB functions to apply Interval MFA and Possibilistic MFA in a simple and flexible way. The PFA Toolbox is completely free and open source, and can be modified by its users. The toolbox implements MFA-wise methods to perform metabolic flux estimations that are particularly well suited to deal with scenarios of high uncertainty and scarce measurements, which are common in industry.
Availability and requirements
Project name: PFA Toolbox version 1.0.0.
Project home page: http://kikollan.github.io/PFA-Toolbox/
Operating systems: platform independent.
Programming language: MATLAB
Other requirements: −
License: Own license.
Any restriction to use by non-academics: none.
CB, constraint-based model; COBRA, Constraint-Based Reconstruction and Analysis; FASIMU, Flux-balance Analysis based Simulations; GLPK, GNU Linear Programming kit; GUI, Graphical User Interface; IBM ILOG CPLEX, High-performance mathematical programming solver for linear programming; LP, Linear Programming; MEC, Measurement constraints; MFA, Metabolic Flux Analysis; MOC, model-based constraints; PFA, Possibilistic Flux Analysis; YALMIP, Modelling language for advanced modeling and solution of optimization problems
Sauer U, Hatzimanikatis V, Bailey J, Hochuli M, Szyperski T, Wuethrich K. Metabolic fluxes in riboflavin-producing Bacillus subtilis. Nature biotechnology. 1997;15(5):448–52.
Wittmann C. Metabolic flux analysis using mass spectrometry. In: Tools and Applications of Biochemical Engineering Science. Berlin: Springer; 2002. p. 39–64.
Antoniewicz M. Methods and advances in metabolic flux analysis: a mini-review. J Ind Microbiol Biot. 2015;42(3):317–25.
Araúzo-Bravo MR, Shimizu K. An improved method for statistical analysis of metabolic flux analysis using isotopomer-mapping matrices with analytical expressions. J Biotechnol. 2003;105:117–33.
Klamt S, Schuster S, Gilles D. Calculability analysis in underdetermined metabolic networks illustrated by a model of the central metabolism in purple nonsulfur bacteria. Biotechnol Bioeng. 2002;77(7):734–51.
Llaneras F. Interval and possibilistic methods for constraint-based metabolic models, PhD Thesis. Universidad Politécnica de Valencia: Departamento de Ingeniería de Sistemas y Automática; 2011.
Llaneras F, Picó J. An interval approach for dealing with flux distributions and elementary modes activity patterns. J Theor Biol. 2007;246(2):290–308.
Llaneras F, Sala A, Picó J. A possibilistic framework for constraint-based metabolic flux analysis. BMC Syst Biol. 2009;3(1):79.
Tortajada M, Llaneras F, Picó J. Validation of a constraint-based model of Pichia pastoris metabolism under data scarcity. BMC Syst Biol. 2010;4(1):115.
Llaneras F, Picó J. A procedure for the estimation over time of metabolic fluxes in scenarios where measurements are uncertain and/or insufficient. BMC Bioinformatics. 2007;8(1):421.
Iyer VV, Ovacik MA, Androulakis IP, Roth CM, Ierapetritou MG. Transcriptional and metabolic flux profiling of triadimefon effects on cultured hepatocytes. Toxicology and applied pharmacology. 2010;248(3):165–77.
Zamorano F, Wouwer A, Bastin G. Detailed metabolic flux analysis of an underdetermined network of CHO cells. J Biotechnol. 2010;150(4):497–508.
Iyer V, Yang H, Ierapetritou M, Roth C. Effects of glucose and insulin on HepG2‐C3A cell metabolism. Biotechnol Bioeng. 2010;107(2):347–56.
Iyer V, Androulakis I, Roth C, Ierapetritou M. Effects of Triadimefon on the Metabolism of Cultured Hepatocytes. In: BioInformatics and BioEngineering (BIBE), IEEE International Conference on. 2010. p. 118–23.
Orman MA, Arai K, Yarmush ML, Androulakis IP, Berthiaume F, Ierapetritou MG. Metabolic flux determination in perfused livers by mass balance analysis: effect of fasting. Biotechnology and bioengineering. 2010;107(5):825–35.
Hoppe A, Hoffmann S, Gerasch A, Gille C, Holzhütter H. FASIMU: flexible software for flux-balance computation series in large metabolic networks. BMC bioinformatics. 2011;12(1):28.
González J, Folch-Fortuny A, Llaneras F, Tortajada M, Picó J, Ferrer A. Metabolic flux understanding of Pichia pastoris grown on heterogenous culture media. Chemometr Intell Lab. 2014;134:89–99.
Morales Y, Tortajada M, Picó J, Vehí J, Llaneras F. Validation of an FBA model for Pichia pastoris in chemostat cultures. BMC System Biol. 2014;8(1):142.
Stephanopoulos GN, Aristidou AA, Nielsen J. Metabolic Engineering: Principles and Methodologies. San Diego, USA: Academic; 1998.
Heijden R, Romein B, Heijnen J, Hellinga C, Luyben K. Linear constraint relations in biochemical reaction systems: I & II. Biotech Bioeng. 1994;43(1):3–10.
Lofberg J. YALMIP: A toolbox for modeling and optimization in MATLAB. In: IEEE International Symposium on Computer Aided Control Systems Design. 2004. p. 284–9.
YALMIP Home Page [http://users.isy.liu.se/johanl/yalmip/]. Accessed 11 May 2016.
IBM ILOG CPLEX- High-performance mathematical programming engine. [http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/]. Accessed 11 May 2016.
GLPK (GNU Linear programming kit) [http://www.gnu.org/software/glpk/]. Accessed 11 May 2016.
Orth D, Fleming M, Palsson B. Reconstruction and use of microbial metabolic networks: the core Escherichia coli metabolic model as an educational guide. EcoSal Plus. 2010;4:1.
Emmerling M, Dauner M, Ponti A, Fiaux J, Hochuli M, Szyperski T, Wüthrich K, Bailey J, Sauer U. Metabolic flux responses to pyruvate kinase knockout in Escherichia coli. Journal of bacteriology. 2002;184(1):152–64.
Orth J, Conrad T, Na J, Lerman J, Nam H, Feist A, Palsson B. A comprehensive genome‐scale reconstruction of Escherichia coli metabolism—2011. Molecular systems biology. 2011;7(1):535.
Bonarius H, Schmid G, Tramper J. Flux analysis of underdetermined metabolic networks: the quest for the missing constraints. Trends in Biotechnology. 1997;15(8):308–14.
Palsson BØ. Systems biology: properties of reconstructed networks. New York: Cambridge University Press; 2006.
Schilling C, Covert M, Famili I, Church G, Edwards J, Palsson B. Genome-scale metabolic model of Helicobacter pylori 26695. Journal of Bacteriology. 2002;184(16):4582–93.
Solà A, Jouhten P, Maaheimo H, Sánchez-Ferrando F, Szyperski T, Ferrer P. Metabolic flux profiling of Pichia pastoris grown on glycerol/methanol mixtures in chemostat cultures at low and high dilution rates. Microbiol. 2007;153:281–90.
Solà A. Estudi del metabolisme central del carboni de Pichia pastoris, PhD Thesis. Universitat Autònoma de Barcelona: Escola Tècnica Superior d'Enginyeria; 2004.
Jungo C, Rerat C, Marison IW, von Stockar U. Quantitative characterization of the regulation of the synthesis of alcohol oxidase and of the expression of recombinant avidin in a Pichia pastoris Mut + strain. Enzyme Microb Technol. 2006;39:936–44.
Tortajada M. Process development for the obtention and use of recombinant glycosidases: expression, modelling and immobilization, PhD Thesis. Universidad Politécnica de Valencia: Departamento de Ingeniería de Sistemas y Automática; 2012.
Jordà J, de Jesus SS, Peltier S, Ferrer P, Albiol J. Metabolic flux analysis of recombinant Pichia pastoris growing on different glycerol/methanol mixtures by iterative fitting of NMR-derived 13C-labelling data from proteinogenic amino acids. New Biotechnol. 2014;31(1):120–32.
We acknowledge Ignacio Ribelles for contributing to programming the MATLAB functions and for writing the GUI.
This research has been partially supported by the Spanish Government (FEDER-CICYT: DPI 2014–55276-C5–1-R). Yeimy Morales is grateful for the BR Grants of the University of Girona (BR2012/26). Gabriel Bosque Chacón is recipient of a doctoral fellowship from the Spanish Government (BES-2012–053772).
All data are included in the manuscript, the associated supplementary material, and the links provided.
FLL and JP developed the idea for the toolbox, and with JV, they designed the research and coordinated the project. FLL designed the toolbox implementation and wrote the first version of the code. YM contributed to the code, documented it and wrote the user's documentation. YM and GB developed the examples and debugged the toolbox. YM drafted the first manuscript. All authors read and approved the final manuscript.
MICElab, IIIA, Universitat de Girona, Campus Montilivi, P4, Girona, 17071, Spain
Yeimy Morales, Josep Vehí & Francisco Llaneras
Institut Universitari d'Automàtica i Informàtica Industrial, Universitat Politècnica de València, Camino de Vera s/n, Edificio 5C, 46022, Valencia, Spain
Gabriel Bosque & Jesús Picó
Yeimy Morales
Gabriel Bosque
Josep Vehí
Jesús Picó
Francisco Llaneras
Correspondence to Yeimy Morales.
Code for the examples. A .rar file with the MATLAB code to perform the examples described above: flux estimation under data scarcity (a), P. pastoris (b), and E. coli (c). (RAR 5 kb)
Metabolic network of P. pastoris. Metabolic network for the Pichia pastoris model. For the sake of clarity, the reactions representing biomass growth and ATP balance have not been included in the scheme. (PDF 1082 kb)
Stoichiometric matrix and experimental data for Pichia pastoris. A Microsoft Excel spreadsheet file with i) the list of reactions and metabolites, ii) the stoichiometric matrix of P. pastoris and iii) the experimental datasets taken from the literature. This includes measurements of biomass, substrates uptakes (glycerol, glucose, and methanol), Oxygen Uptake Rate (OUR), CO2 production (CPR), and formation of byproducts (ethanol, citrate, and pyruvate). (XLSX 48 kb)
Metabolic network of Escherichia Coli. Metabolic network for the Escherichia coli model. (PDF 86 kb)
Stoichiometric matrix and experimental data for Escherichia coli. A Microsoft Excel spreadsheet file with i) the stoichiometric matrix of E. coli and ii) the experimental datasets taken from the literature. This includes measurements of biomass, glycerol, OUR, CPR and pyruvate. (XLSX 62 kb)
Morales, Y., Bosque, G., Vehí, J. et al. PFA toolbox: a MATLAB tool for Metabolic Flux Analysis. BMC Syst Biol 10, 46 (2016). https://doi.org/10.1186/s12918-016-0284-1
Metabolic Flux Analysis
Constraint-based modelling
Methods, software and technology
Dark Matter Constraints from a Unified Analysis of Strong Gravitational Lenses and Milky Way Satellite Galaxies
https://doi.org/10.3847/1538-4357/abf9a3
Nadler, Ethan O. ; Birrer, Simon ; Gilman, Daniel ; Wechsler, Risa H. ; Du, Xiaolong ; Benson, Andrew ; Nierenberg, Anna M. ; Treu, Tommaso ( August 2021 , The Astrophysical Journal)
The luminosity functions and redshift evolution of satellites of low-mass galaxies in the COSMOS survey
https://doi.org/10.1093/mnras/stab069
Roberts, Daniella M ; Nierenberg, Anna M ; Peter, Annika H ( February 2021 , Monthly Notices of the Royal Astronomical Society)
ABSTRACT The satellite populations of the Milky Way, and Milky Way mass galaxies in the local Universe, have been extensively studied to constrain dark matter and galaxy evolution physics. Recently, there has been a shift to studying satellites of hosts with stellar masses between that of the Large Magellanic Cloud and the Milky Way, since they can provide further insight on hierarchical structure formation, environmental effects on satellites, and the nature of dark matter. Most work is focused on the Local Volume, and little is still known about low-mass host galaxies at higher redshift. To improve our understanding of the evolution of satellite populations of low-mass hosts, we study satellite galaxy populations as a function of host stellar mass 9.5 < log (M*/M⊙) < 10.5 and redshifts 0.1 < $z$ < 0.8 in the COSMOS survey, making this the first study of satellite systems of low-mass hosts across half the age of the universe. We find that the satellite populations of low-mass host galaxies, which we measure down to satellite masses equivalent to the Fornax dwarf spheroidal satellite of the Milky Way, remain mostly unchanged through time. We observe a weak dependence between host stellar mass and number of satellites per host, which suggests that the stellar masses of the hosts are in the power-law regime of the stellar mass to halo mass relation (M*–Mhalo) for low-mass galaxies. Finally, we test the constraining power of our measured cumulative luminosity function to calculate the low-mass end slope of the M*–Mhalo relation. These new satellite luminosity function measurements are consistent with Lambda cold dark matter predictions.
The primordial matter power spectrum on sub-galactic scales
https://doi.org/10.1093/mnras/stac670
Gilman, Daniel ; Benson, Andrew ; Bovy, Jo ; Birrer, Simon ; Treu, Tommaso ; Nierenberg, Anna ( March 2022 , Monthly Notices of the Royal Astronomical Society)
The primordial matter power spectrum quantifies fluctuations in the distribution of dark matter immediately following inflation. Over cosmic time, overdense regions of the primordial density field grow and collapse into dark matter haloes, whose abundance and density profiles retain memory of the initial conditions. By analysing the image magnifications in 11 strongly lensed and quadruply imaged quasars, we infer the abundance and concentrations of low-mass haloes, and cast the measurement in terms of the amplitude of the primordial matter power spectrum. We anchor the power spectrum on large scales, isolating the effect of small-scale deviations from the Lambda cold dark matter (ΛCDM) prediction. Assuming an analytic model for the power spectrum and accounting for several sources of potential systematic uncertainty, including three different models for the halo mass function, we obtain correlated inferences of $\log_{10}\left(P / P_{\Lambda \rm{CDM}}\right)$, the power spectrum amplitude relative to the predictions of the concordance cosmological model, of $0.0_{-0.4}^{+0.5}$, $0.1_{-0.6}^{+0.7}$, and $0.2_{-0.9}^{+1.0}$ at k = 10, 25, and 50 $\rm{Mpc^{-1}}$ at 68 per cent confidence, consistent with CDM and single-field slow-roll inflation.
The LBT satellites of Nearby Galaxies Survey (LBT-SONG): the satellite population of NGC 628
https://doi.org/10.1093/mnras/staa3246
Davis, A Bianca ; Nierenberg, Anna M ; Peter, Annika H ; Garling, Christopher T ; Greco, Johnny P ; Kochanek, Christopher S ; Utomo, Dyas ; Casey, Kirsten J ; Pogge, Richard W ; Roberts, Daniella M ; et al ( December 2020 , Monthly Notices of the Royal Astronomical Society)
ABSTRACT We present the first satellite system of the Large Binocular Telescope Satellites Of Nearby Galaxies Survey (LBT-SONG), a survey to characterize the close satellite populations of Large Magellanic Cloud to Milky-Way-mass, star-forming galaxies in the Local Volume. In this paper, we describe our unresolved diffuse satellite finding and completeness measurement methodology and apply this framework to NGC 628, an isolated galaxy with ∼1/4 the stellar mass of the Milky Way. We present two new dwarf satellite galaxy candidates: NGC 628 dwA and dwB, with $M_V$ = −12.2 and −7.7, respectively. NGC 628 dwA is a classical dwarf while NGC 628 dwB is a low-luminosity galaxy that appears to have been quenched after reionization. Completeness corrections indicate that the presence of these two satellites is consistent with CDM predictions. The satellite colours indicate that the galaxies are neither actively star forming nor do they have the purely ancient stellar populations characteristic of ultrafaint dwarfs. Instead, and consistent with our previous work on the NGC 4214 system, they show signs of recent quenching, further indicating that environmental quenching can play a role in modifying satellite populations even for hosts smaller than the Milky Way.
Constraints on the mass-concentration relation of cold dark matter halos with 11 strong gravitational lenses
https://doi.org/10.1093/mnrasl/slz173
Gilman, Daniel ; Du, Xiaolong ; Benson, Andrew ; Birrer, Simon ; Nierenberg, Anna ; Treu, Tommaso ( February 2020 , Monthly Notices of the Royal Astronomical Society: Letters)
Abstract The mass-concentration relation of dark matter halos reflects the assembly history of objects in hierarchical structure formation scenarios, and depends on fundamental quantities in cosmology such as the slope of the primordial matter power-spectrum. This relation is unconstrained by observations on sub-galactic scales. We derive the first measurement of the mass-concentration relation using the image positions and flux ratios from eleven quadruple-image strong gravitational lenses (quads) in the mass range $10^6$–$10^{10}\,M_\odot$, assuming cold dark matter. We model both subhalos and line of sight halos, finite-size background sources, and marginalize over nuisance parameters describing the lens macromodel. We also marginalize over the logarithmic slope and redshift evolution of the mass-concentration relation, using flat priors that encompass the range of theoretical uncertainty in the literature. At z = 0, we constrain the concentration of $10^8\,M_\odot$ halos $c=12_{-5}^{+6}$ at $68 \%$ CI, and $c=12_{-9}^{+15}$ at $95 \%$ CI. For a $10^7\,M_\odot$ halo, we obtain $68 \%$ ($95 \%$) constraints $c=15_{-8}^{+9}$ ($c=15_{-11}^{+18}$), while for $10^9\,M_\odot$ halos $c=10_{-4}^{+7}$ ($c=10_{-7}^{+14}$). These results are consistent with the theoretical predictions from mass-concentration relations in the literature, and establish strong lensing by galaxies as a powerful probe of halo concentrations on sub-galactic scales across cosmological distance.
Warm dark matter chills out: constraints on the halo mass function and the free-streaming length of dark matter with eight quadruple-image strong gravitational lenses
https://doi.org/10.1093/mnras/stz3480
Gilman, Daniel ; Birrer, Simon ; Nierenberg, Anna ; Treu, Tommaso ; Du, Xiaolong ; Benson, Andrew ( February 2020 , Monthly Notices of the Royal Astronomical Society)
ABSTRACT The free-streaming length of dark matter depends on fundamental dark matter physics, and determines the abundance and concentration of dark matter haloes on sub-galactic scales. Using the image positions and flux ratios from eight quadruply imaged quasars, we constrain the free-streaming length of dark matter and the amplitude of the subhalo mass function (SHMF). We model both main deflector subhaloes and haloes along the line of sight, and account for warm dark matter free-streaming effects on the mass function and mass–concentration relation. By calibrating the scaling of the SHMF with host halo mass and redshift using a suite of simulated haloes, we infer a global normalization for the SHMF. We account for finite-size background sources, and marginalize over the mass profile of the main deflector. Parametrizing dark matter free-streaming through the half-mode mass $m_{\rm hm}$, we constrain the thermal relic particle mass $m_{\rm DM}$ corresponding to $m_{\rm hm}$. At 95 per cent CI: $m_{\rm hm} < 10^{7.8}\,M_\odot$ ($m_{\rm DM} > 5.2\,\rm{keV}$). We disfavour $m_{\rm DM} = 4.0\,\rm{keV}$ and $m_{\rm DM} = 3.0\,\rm{keV}$ with likelihood ratios of 7:1 and 30:1, respectively, relative to the peak of the posterior distribution. Assuming cold dark matter, we constrain the projected mass in substructure between $10^6$ and $10^9\,M_\odot$ near lensed images. At 68 per cent CI, we infer $2.0{-}6.1 \times 10^{7}\,M_\odot\,\rm{kpc^{-2}}$, corresponding to mean projected mass fraction $\bar{f}_{\rm sub} = 0.035_{-0.017}^{+0.021}$. At 95 per cent CI, we obtain a lower bound on the projected mass of $0.6 \times 10^{7}\,M_\odot\,\rm{kpc^{-2}}$, corresponding to $\bar{f}_{\rm sub} > 0.005$. These results agree with the predictions of cold dark matter.
Probing dark matter structure down to $10^7$ solar masses: flux ratio statistics in gravitational lenses with line of sight halos
Gilman, Daniel ; Birrer, Simon ; Treu, Tommaso ; Nierenberg, Anna ; Benson, Andrew ( June 2019 , Monthly Notices of the Royal Astronomical Society)
Stars made in outflows may populate the stellar halo of the Milky Way
https://doi.org/10.1093/mnras/staa522
Yu, Sijie ; Bullock, James S ; Wetzel, Andrew ; Sanderson, Robyn E ; Graus, Andrew S ; Boylan-Kolchin, Michael ; Nierenberg, Anna M ; Grudić, Michael Y ; Hopkins, Philip F ; Kereš, Dušan ; et al ( May 2020 , Monthly Notices of the Royal Astronomical Society)
ABSTRACT We study stellar-halo formation using six Milky-Way-mass galaxies in FIRE-2 cosmological zoom simulations. We find that 5–40 per cent of the outer (50–300 kpc) stellar halo in each system consists of in-situ stars that were born in outflows from the main galaxy. Outflow stars originate from gas accelerated by superbubble winds, which can be compressed, cool, and form co-moving stars. The majority of these stars remain bound to the halo and fall back with orbital properties similar to the rest of the stellar halo at z = 0. In the outer halo, outflow stars are more spatially homogeneous, metal-rich, and alpha-element-enhanced than the accreted stellar halo. At the solar location, up to ~10 per cent of our kinematically identified halo stars were born in outflows; the fraction rises to as high as ~40 per cent for the most metal-rich local halo stars ([Fe/H] > −0.5). Such stars can be retrograde and create features similar to the recently discovered Milky Way 'Splash' in phase space. We conclude that the Milky Way stellar halo could contain local counterparts to stars that are observed to form in molecular outflows in distant galaxies. Searches for such a population may provide a new, near-field approach to constraining feedback and outflow physics. A stellar halo contribution from outflows is a phase-reversal of the classic halo formation scenario of Eggen, Lynden-Bell & Sandage, who suggested that halo stars formed in rapidly infalling gas clouds. Stellar outflows may be observable in direct imaging of external galaxies and could provide a source for metal-rich, extreme-velocity stars in the Milky Way.
Strong lensing signatures of self-interacting dark matter in low-mass haloes
https://doi.org/10.1093/mnras/stab2335
Gilman, Daniel ; Bovy, Jo ; Treu, Tommaso ; Nierenberg, Anna ; Birrer, Simon ; Benson, Andrew ; Sameie, Omid ( August 2021 , Monthly Notices of the Royal Astronomical Society)
ABSTRACT Core formation and runaway core collapse in models with self-interacting dark matter (SIDM) significantly alter the central density profiles of collapsed haloes. Using a forward modelling inference framework with simulated data-sets, we demonstrate that flux ratios in quadruple image strong gravitational lenses can detect the unique structural properties of SIDM haloes, and statistically constrain the amplitude and velocity dependence of the interaction cross-section in haloes with masses between $10^6$ and $10^{10}\,M_\odot$. Measurements on these scales probe self-interactions at velocities below $30\,\rm{km\,s^{-1}}$, a relatively unexplored regime of parameter space, complementing constraints at higher velocities from galaxies and clusters. We cast constraints on the amplitude and velocity dependence of the interaction cross-section in terms of $\sigma_{20}$, the cross-section amplitude at $20\,\rm{km\,s^{-1}}$. With 50 lenses, a sample size available in the near future, and flux ratios measured from spatially compact mid-IR emission around the background quasar, we forecast $\sigma_{20} < 11{-}23\,\rm{cm^2\,g^{-1}}$ at 95 per cent CI, depending on the amplitude of the subhalo mass function, and assuming cold dark matter (CDM). Alternatively, if $\sigma_{20} = 19.2\,\rm{cm^2\,g^{-1}}$ we can rule out CDM with a likelihood ratio of 20:1, assuming an amplitude of the subhalo mass function that results from doubly efficient tidal disruption in the Milky Way relative to massive elliptical galaxies. These results demonstrate that strong lensing of compact, unresolved sources can constrain SIDM structure on sub-galactic scales across cosmological distances, and the evolution of SIDM density profiles over several Gyr of cosmic time.
Probing the nature of dark matter by forward modelling flux ratios in strong gravitational lenses
https://doi.org/10.1093/mnras/sty2261
Gilman, Daniel ; Birrer, Simon ; Treu, Tommaso ; Keeton, Charles R ; Nierenberg, Anna ( August 2018 , Monthly Notices of the Royal Astronomical Society)
Small-area spatio-temporal analyses of participation rates in the mammography screening program in the city of Dortmund (NW Germany)
Dorothea Lemke1,2,
Shoma Berkemeyer3,
Volkmar Mattauch4,
Oliver Heidinger4,
Edzer Pebesma2 &
Hans-Werner Hense1,4
BMC Public Health volume 15, Article number: 1190 (2015) Cite this article
The population-based mammography screening program (MSP) was implemented by the end of 2005 in Germany, and all women between 50 and 69 years are actively invited to a free biennial screening examination. However, despite the expected benefits, the overall participation rates range only between 50 and 55 %. There is also increasing evidence that belonging to a vulnerable population, such as ethnic minorities or low income groups, is associated with a decreased likelihood of participating in screening programs. This study aimed to analyze in more detail the intra-urban variation of MSP uptake at the neighborhood level (i.e. statistical districts) for the city of Dortmund in northwest Germany and to identify demographic and socioeconomic risk factors that contribute to non-response to screening invitations.
The numbers of participants by statistical district were aggregated over the three periods 2007/2008, 2009/2010, and 2011/2012. Participation rates were calculated as numbers of participants per female resident population averaged over each 2-year period. Bayesian hierarchical spatial models extended with a temporal and spatio-temporal interaction effect were used to analyze the participation rates applying integrated nested Laplace approximations (INLA). The model included explanatory covariates taken from the atlas of social structure of Dortmund.
Generally, participation rates rose in all districts over the time periods. However, participation was persistently lowest in the inner city of Dortmund. Multivariable regression analysis showed that migrant status and long-term unemployment were associated with significant increases in non-attendance in the MSP.
Low income groups and immigrant populations are clustered in the inner city of Dortmund, and the observed spatial pattern of persistently low participation in the city center is likely linked to the underlying socioeconomic gradient. This corresponds with the findings of the ecological regression analysis, which identified socioeconomically deprived neighborhoods as a risk factor for low attendance in the MSP. Spatio-temporal surveillance of participation in cancer screening programs may be used to identify spatial inequalities in screening uptake and to plan spatially focused interventions.
The implementation of a nation-wide, population-based mammography screening program (MSP) started in Germany by the end of the year 2005. The stepped implementation process was completed in the state of North Rhine-Westphalia in 2009. All resident women aged between 50 and 69 years are actively invited to a mammography screening examination every two years. Participation is voluntary and free of cost. Mammography screening is a procedure of secondary cancer prevention with the aim of detecting breast cancer in early stages, where therapy is less invasive (e.g., breast-conserving therapies instead of mastectomy), remaining lifespans are extended and, ideally, breast cancer mortality is reduced. Despite these expected benefits and the free provision by all statutory health insurances, the overall participation rates range only between 50 and 55 % [1]. Population-based surveys demonstrate that significant gaps exist in screening mammography uptake across population subgroups [2]. These differences are believed to contribute substantially to a higher prevalence of late-stage breast cancer at diagnosis among vulnerable populations, including racial and ethnic minorities [3] and low-income groups [4, 5]. More specifically, living in an economically deprived neighborhood is associated with a decreased likelihood of participating in cancer screening programs and an increased risk of a late-stage breast cancer diagnosis, with a correspondingly unfavorable prognosis [2, 6–9]. To date, few studies have analyzed the intra-urban variation of participation rates in mammography screening programs [2, 10] and, to our knowledge, none has investigated the situation in Germany. We suggest that small-area analyses may provide important insights into the processes and factors associated with low participation rates, and that this may help to develop spatially focused approaches to improve participation in disadvantaged neighborhoods.
Therefore, this study aimed to investigate the spatio-temporal distribution of the participation rates in the mammography screening program at the neighborhood-level (e.g. statistical districts) of a large city in Germany and to identify important demographic and socio-economic factors that influence the non-attendance to screening invitations.
Study region
Dortmund is a city in the federal state of North Rhine-Westphalia in northwestern Germany with a total population of 575,944 inhabitants in 2013. It is the largest city by area and population in the Ruhr district, a metropolitan area with some 5.1 million inhabitants that forms the largest urban and industrial agglomeration in Germany. Dortmund is divided into 62 statistical districts with a median female population of 4086 inhabitants per district. The city's population is characterized by a high proportion of immigrants from southeast Europe and Turkey. The coal crisis at the end of the 1970s led to a massive reduction of jobs in the coal and steel industry, which has resulted in high unemployment rates to this day. Immigrants and their descendants grew up in more socially deprived neighborhoods than much of the autochthonous population. This has resulted in a strong spatial segregation of populations with a migration background and with low economic status within the city [11].
Participation rates and geo-referencing
The statistical districts were used as the geographical reference system in which the MSP participation rates were assessed. The residence addresses of all MSP participants for the years 2005 to 2013 were stored at KV.it Dortmund, the institution which administrates the MSP documentation software MaSc [12]. KV.it assigned MSP participants to one of the 62 statistical districts by linking their home addresses to a comprehensive list of street addresses for each district. A list of individual, anonymized participants who were geo-referenced to one of the statistical districts was then transferred to the Institute of Epidemiology and Social Medicine at the University Münster [13] where all subsequent analyses were carried out.
The years 2005 and 2006 were excluded from the present analyses to avoid contamination with the various organizational aspects of the stepped-up implementation of the MSP. All eligible women receive a biennial invitation to the screening program; hence, we chose to analyze three 2-year periods: 2007/2008, 2009/2010, and 2011/2012. The participation rates were calculated using the aggregated numbers of participants and the averaged female background population (age group 50–69) for each two-year period.
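For concreteness, this aggregation can be sketched in R as follows; all object and column names (`screenings`, `pop5069`, `pop_f5069`) are hypothetical placeholders for the actual KV.it and population data.

```r
# Sketch: biennial participation rates per statistical district.
# `screenings` holds one row per geo-referenced participant (hypothetical data).
library(dplyr)

rates <- screenings %>%
  filter(year >= 2007, year <= 2012) %>%
  mutate(period = cut(year, breaks = c(2006, 2008, 2010, 2012),
                      labels = c("2007/08", "2009/10", "2011/12"))) %>%
  count(district, period, name = "participants") %>%
  # averaged female background population aged 50-69 per district and period
  left_join(pop5069, by = c("district", "period")) %>%
  mutate(rate = participants / pop_f5069)
```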
Spatio-temporal mapping and regression
The spatio-temporal distribution of participation rates was analyzed within a hierarchical Bayesian framework using a multivariate binomial regression model (spatio-temporal odds model): let $n_{it}$ denote the number of eligible women resident in district $i$ and period $t$, and $Y_{it}$ the number of participants in breast cancer screening, with $i = 1, \ldots, 62$ and $t = 1, 2, 3$. We assumed that the observed number of participants ($Y_{it}$) followed a binomial distribution with parameters $n_{it}$ and $\theta_{it}$ (probability of participation). At a second level, the probability of participation $\theta_{it}$ was decomposed on the logit scale into an overall participation rate ($\alpha$), main spatial effects ($u_i$ and $v_i$, constant in time), main temporal effects (unstructured ($\Phi_t$) and structured ($\gamma_t$)), and a space-time interaction term ($\psi_{it}$):
$$ Y_{it} \sim \mathrm{Binomial}\left(n_{it}, \theta_{it}\right) $$
$$ \mathrm{logit}\,\theta_{it} = \alpha + u_i + v_i + \Phi_t + \gamma_t + \psi_{it} $$
The proposed space-time models, assuming a nonparametric time trend and a spatio-temporal interaction term, were introduced by Knorr-Held [14] and extend the spatial model introduced by Besag et al. [15]. All model terms were treated as random variables: the spatially unstructured random effect ($u_i$) was considered independent and identically distributed (iid) with zero mean and unknown precision ($\tau_u$). To account for the assumption of correlated participation rates in nearby statistical districts, the spatially structured effect ($v_i$) was modelled for each of the 62 districts as an intrinsic Gaussian Markov random field with unknown precision ($\tau_v$). This specification is also called a conditionally autoregressive (CAR) prior and was introduced by Besag et al. [15]. In order to ensure the identifiability of the intercept $\alpha$ (overall participation rate), a sum-to-zero constraint was imposed on the $v_i$'s [16]. The unstructured temporal effect ($\Phi_t$) was also modelled iid with zero mean and unknown precision. For the structured time effect ($\gamma_t$), a random walk of first order was considered [17, 18]. The interaction term ($\psi_{it}$) can be specified in several ways [14]; here it was assumed that the two unstructured effects ($u_i$ and $\Phi_t$) interact. Therefore, the interaction effect was also specified as zero-mean normal with unknown precision (iid, i.e. $\psi_{it} \sim N(0, \tau_\psi)$). The hyperpriors were specified as follows: minimally informative priors were placed on the log of the unstructured effect precision ($\log \tau_u \sim \mathrm{logGamma}(1, 0.001)$) and on the log of the structured effect precision ($\log \tau_v \sim \mathrm{logGamma}(1, 0.001)$). For the unstructured time effect, a $\log \tau_\Phi \sim \mathrm{logGamma}(1, 0.01)$ hyperprior was chosen. For the structured temporal effect and the interaction term, minimally informative (default) priors $\log \tau_\gamma, \log \tau_\psi \sim \mathrm{logGamma}(1, 0.00005)$ were used. Altogether, the distribution of the hyperpriors resembles the ones used by Ugarte et al. [18].
$$ \mathrm{logit}\,\theta_{it} = \alpha + u_i + v_i + \Phi_t + \gamma_t + \psi_{it} + \beta x_{it}^{T} $$
The specified model (Equation 3) was extended with the term $\beta x_{it}^{T}$, where $x_{it}$ contains the covariates with a space-time index, in order to investigate potential risk factors associated with spatio-temporal variations in the participation rates. The covariates were taken from the atlas of social structure (Sozialstrukturatlas) of Dortmund, a collection of administratively collected data reflecting social inequalities and differences in the population [19]. These are grouped into the dimensions employment status, demography, income, welfare, and housing. A full description of the explanatory variables is given in Table 1. In order to account for multicollinearity, an initial correlation matrix was examined for high correlations among the variables; variables with a high correlation (>0.8) were excluded from further regression analyses. For the three 2-year time periods, the data of 2008, 2010, and 2012, respectively, were included in the model, and all variables were dichotomized according to their median value. Following the suggestions of Rothman et al. [20, 21], each covariate was fitted separately, and model fit was assessed using the change in the deviance information criterion (DIC), with smaller values of DIC indicating more explained variance and better fit. A multivariable model was fitted by selectively including variables, starting with those that showed the lowest DIC in univariable analyses, until the DIC could not be reduced further. For the Bayesian inference, the integrated nested Laplace approximation (INLA) approach was used, as introduced by Rue et al. [22] and implemented in the R package R-INLA [21, 23, 24]. The resulting odds ratios (OR) are reported as point estimates (posterior means) with 95 % credibility intervals (CI) as a quantification of parameter uncertainty. All computations and visualizations were done in R v. 3.0.2 [25].
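A minimal R-INLA sketch of the model specified in Equations 1–3 is given below. The data frame, adjacency file, and covariate names are hypothetical, the hyperprior settings follow the logGamma values stated above, and the "besag" model in R-INLA applies the sum-to-zero constraint by default; this is an illustration, not the authors' actual code.

```r
# Sketch of the spatio-temporal binomial model (Eqs. 1-3) in R-INLA.
# `d` holds one row per district-period: y (participants), n (eligible women),
# id (district 1..62), t (period 1..3), and dichotomized covariates (names assumed).
library(INLA)

g <- inla.read.graph("dortmund.adj")   # district adjacency structure (hypothetical file)

d$id.u  <- d$id                        # unstructured spatial effect u_i
d$id.v  <- d$id                        # structured CAR effect v_i
d$t.phi <- d$t                         # unstructured temporal effect Phi_t
d$t.gam <- d$t                         # structured temporal effect gamma_t (RW1)
d$it    <- seq_len(nrow(d))            # iid space-time interaction psi_it

form <- y ~ 1 + unemp_migrant + longterm_unemp +
  f(id.u,  model = "iid",   hyper = list(prec = list(prior = "loggamma", param = c(1, 0.001)))) +
  f(id.v,  model = "besag", graph = g,
           hyper = list(prec = list(prior = "loggamma", param = c(1, 0.001)))) +
  f(t.phi, model = "iid",   hyper = list(prec = list(prior = "loggamma", param = c(1, 0.01)))) +
  f(t.gam, model = "rw1",   hyper = list(prec = list(prior = "loggamma", param = c(1, 0.00005)))) +
  f(it,    model = "iid",   hyper = list(prec = list(prior = "loggamma", param = c(1, 0.00005))))

fit <- inla(form, family = "binomial", Ntrials = d$n, data = d,
            control.compute = list(dic = TRUE))   # DIC used for covariate selection
```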
Table 1 Summary statistics of included variables in the 62 statistical districts and the three time periods
KV.it [12] administrates the MSP documentation, including the storage and management of the MSP participant data, consistent with the existing data protection legislation. For this study, KV.it aggregated participants into statistical districts so that individual women could not be identified. Data were transferred to the investigators in an anonymized form. Use of anonymized data for research purposes does not require a vote by an ethics committee or an institutional review board.
Mapping participation rates
The observed annual participation rates showed the overall biennial pattern of participation, i.e., one year with a high number of participants followed by a year with a lower number (Fig. 1). Despite rising overall MSP participation rates over the three periods, from 48 % (2007/08) over 50 % (2009/10) to 54 % (2011/12) (Referenzzentrum MS), a concentration of statistical districts with low participation rates persisted in the inner city of Dortmund, while the outer districts had consistently higher participation rates (Fig. 2a–c). The modeled time trends in Fig. 2d demonstrate these increasing participation rates over the three periods, where the structured time effect ($\gamma_t$) was more pronounced than the unstructured time effect ($\Phi_t$). The spatial trends combine structured and unstructured effects (Fig. 2e) and confirm, after accounting for covariates, that lower participation rates cluster in the inner city. Finally, the interaction analysis reveals a clear space-time pattern (Fig. 2f–h), which indicates that in 2007/08, on a generally low participation level, the participation rates were particularly low in the eastern districts. This changed in 2009/10, when lower participation persisted in the western and central parts of the city, while in 2011/12 a low participation rate was found in only one inner-city district.
Yearly distribution of the participation rates over the period from 2007 to 2012
Participation rates and random effects in the final spatio-temporal regression model. Spatial pattern of the biennial participation rates for the periods 2007–08 (a), 2009–10 (b), and 2011–12 (c). Black dots mark the locations of the screening units in the study region. Odds ratios compared to the intercept ($\alpha$) of the unstructured ($\Phi_t$) and structured ($\gamma_t$) temporal effects (d), combined unstructured and structured spatial heterogeneity ($u_i + v_i$) (e), and the spatio-temporal interaction effect ($\psi_{it}$) for 2007–08 (f), 2009–10 (g), and 2011–12 (h). All random effects were classified according to their quantiles
Regression analyses
Due to high correlations and close content relations, the variables unemployment rate (total), persons with a migration background, and basic social welfare rate were excluded from further analyses. The results of the uni- and multivariable spatio-temporal regression analyses are summarized in Table 2. The odds ratios of the univariable spatio-temporal regression analyses clearly demonstrate that districts with high proportions of unemployed migrants or long-term unemployed residents had significantly lower participation rates. In contrast, a higher proportion of elderly residents showed a positive association with the participation rates. The credibility intervals of the other ecological variables clearly contained the null value, and these were therefore considered factors without relevant influence. In the multivariable analyses that simultaneously adjusted for spatio-temporal variation, the negative associations of the proportions of unemployed migrants and long-term unemployed remained statistically significant: proportions above the median were associated with significant increases in the risk of non-attendance in the mammography screening program of 6 % and 3 %, respectively.
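Odds ratios and credibility intervals of this kind can be obtained from the posterior marginals of the fixed effects by transforming from the log-odds to the OR scale. Continuing the hypothetical R-INLA sketch above (the covariate name is again an assumption):

```r
# Posterior mean OR and 95% credibility interval for one covariate,
# transformed from the log-odds scale via the posterior marginal.
marg    <- inla.tmarginal(exp, fit$marginals.fixed[["unemp_migrant"]])
or_mean <- inla.emarginal(function(x) x, marg)
or_ci   <- inla.qmarginal(c(0.025, 0.975), marg)
```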
Table 2 Estimation results of univariable and multivariable analysis using the spatio-temporal model
The present study analyzed small-area, intra-urban variations in the participation rates of the MSP in the city of Dortmund over three 2-year periods. An overall increase in the participation rates was observed over the study period, but the increase was unevenly distributed across the study area: there was a spatial concentration of statistical districts, mainly in the city center, with persistently low participation rates. Dortmund is known to have a strong gradient of socio-economic segregation [26]. Its population is characterized by a high proportion of residents who are likely to have a lower socio-economic status because of the massive loss of jobs in the mining and steel industries in this area after 1970 (also known as the structural crisis), which was compounded in population segments with a migration background [11]. Muller and Berger [26] reported, in their investigation of neighborhood deprivation and the prevalence of type 2 diabetes in Dortmund, that the inner city and parts of the western city are characterized by the highest level of socio-economic constraints, including high proportions of immigrants, unemployed residents, and residents on basic social welfare, as well as a high population density and low incomes.
Therefore, it seems plausible that the observed spatial pattern in the participation rates is linked to the underlying socio-economic gradient. Despite the high availability of screening facilities in the inner city (Fig. 2a–c), these statistical districts had persistently low participation rates. In contrast, reduced participation rates in the southernmost districts may be attributable to a higher proportion of women with private health insurance, who tend to abstain from public health offers. The highest rates of participation were found in the eastern parts of the city, which have more affluent statistical districts and screening facilities close by. The ecological regression analyses confirmed the spatio-temporal results by revealing that characteristics of disadvantage in statistical districts were related to an increased probability of non-participation in the MSP.
The identification of area-level socio-economic risk factors as explanatory variables of non-attendance in mammography screening has been examined in previous studies [27–30], which also found that neighborhood income was an important determinant of participation [31–33]. Additionally, Peek and Han [34] reported that vulnerable groups such as the poor, the elderly, and minorities were often unaware of mammography screening programs and had reduced awareness and a lack of information about disease prevention, diagnosis, and treatment [35]. Awareness of the program is unlikely to play a major role in Germany, as all resident women are personally invited as part of a structured, systematic program of early breast cancer detection; attitude towards disease prevention seems a more likely reason for the lower MSP attendance.
Regarding the overall increase in the participation rates over time, it should be kept in mind that the MSP is a rather recent program compared to those of other European countries [36]. It became operational as part of routine care only by the end of the year 2005, and comprehensive implementation in North Rhine-Westphalia was not completed before the end of 2009 [37]. Therefore, the increasing participation rates over the study period may be mainly attributed to an increased efficiency of operational routines within the screening units, which allowed for growing numbers of screened women, and it is probable that the observed spatial effects, especially within the first two study periods, were influenced by these technical and structural developments [38]. The spatio-temporal interaction effect adds to the spatial and temporal findings in that it identifies districts where the observed participation rates were reduced compared to the entire city and throughout the study period [39]. Holding the spatial component constant confirmed the increasing overall trend of the participation rates, while holding the temporal trend constant confirmed that statistical districts in the city center had consistently reduced attendance rates in mammography screening.
This study has several strengths and limitations. An obvious strength is the use of the Bayesian hierarchical framework, which borrows strength from spatial and temporal neighbors to reduce the high variability inherent in the estimators, in particular when numbers (disease counts and/or background population) are unstable [18, 39, 40]. The inclusion of a space-time interaction effect is a further strength of this study, because the participation rates of mammography screening may plausibly be assumed to be dependent in space and time. Adjustment for the spatial, temporal, and space-time interaction effects depicts more clearly how the spatial pattern of the participation rates evolved over time, while the intersection of space and time is seldom considered when disentangling the complex determinants of health-related behavior and diseases [39]. Furthermore, the use of a non-parametric time trend was a more plausible assumption than a linear time trend, because not all districts showed a linear increase or decrease in their participation rates. However, the analysis of effects in our study was confined to only three time periods and hence requires caution in interpretation. The use of integrated nested Laplace approximations (INLA) reduced computing time substantially while attaining a high degree of accuracy when fitting large, complex data sets at the detailed geographic levels used in spatio-temporal disease mapping. Given the inherently ecological nature of this study, the parameter estimates may not be used for making inferences at the individual level and therefore must not be interpreted causally. However, the results provide important hints as to how social, cultural, and contextual factors may influence attendance in mammography screening. Thus, despite the ecological nature of our study, the results may be used to design spatially focused interventions to improve participation in disadvantaged city districts. A further limitation results from the fact that the precise number of invited women was not available due to data privacy regulations in Germany: the denominator used for calculating the participation rates therefore comprised the whole female population in that age group, which also included women not eligible for screening (e.g., because of prevalent breast cancer). Thus, the participation rates may be slightly underestimated, but this is not perceived as biasing the spatial associations. Another limitation results from the aggregation of the numbers of participants over 24-month periods: because the invitations to biennial screening were mailed continuously throughout the 22–26-month interval, a certain amount of misclassification is to be expected. However, as the general trend of the participation rates showed a clear biennial pattern, it seems safe to assume that the main spatio-temporal process of the participation rates was captured by the temporal aggregation employed in this study.
This study analyzed the intra-urban participation rates of a mammography screening program within a hierarchical Bayesian framework, using spatio-temporal disease-mapping models to identify regions and risk factors of low attendance. Despite a general temporal trend of increasing participation rates, spatial clustering of persistently lower participation rates was observed in the inner-city districts, which are known to be the socio-economically most deprived neighborhoods of Dortmund. This corresponds with the findings of the ecological regression analysis, which identified indicators of socio-economic constraint in a neighborhood as risk factors for low attendance in the MSP. The spatio-temporal interaction effect showed that the participation rates developed spatially unequally over time and that certain districts had low participation rates throughout the study period. Spatio-temporal surveillance of the participation rates and focused interventions could help to identify and reduce spatial inequalities in the uptake of mammography screening.
Kooperationsgemeinschaft Mammographie. Evaluationsbericht 2011: Zusammenfassung der Ergebnisse des Mammographie-Screening-Programms in Deutschland. Berlin; 2014.
Zenk SN, Tarlov E, Sun J. Spatial equity in facilities providing low- or no-fee screening mammography in Chicago neighborhoods. J Urban Health. 2006;83(2):195–210. doi:10.1007/s11524-005-9023-4.
Lantz PM, Mujahid M, Schwartz K, Janz NK, Fagerlin A, Salem B, et al. The influence of race, ethnicity, and individual socioeconomic factors on breast cancer stage at diagnosis. Am J Public Health. 2006;96(12):2173–8. doi:10.2105/AJPH.2005.072132.
Clegg LX, Hankey BF, Tiwari R, Feuer EJ, Edwards BK. Estimating average annual per cent change in trend analysis. Stat Med. 2009;28(29):3670–82. doi:10.1002/sim.3733.
Merkin SS, Stevenson L, Powe N. Geographic socioeconomic status, race, and advanced-stage breast cancer in New York City. Am J Public Health. 2002;92(1):64–70.
Henry KA, Boscoe FP, Johnson CJ, Goldberg DW, Sherman R, Cockburn M. Breast Cancer Stage at Diagnosis: Is Travel Time Important? J Community Health. 2011;36(6):933–42. doi:10.1007/s10900-011-9392-4.
Lian M, Struthers J, Schootman M. Comparing GIS-based measures in access to mammography and their validity in predicting neighborhood risk of late-stage breast cancer. PLoS One. 2012;7(8):e43000. doi:10.1371/journal.pone.0043000.
Meersman SC, Breen N, Pickle LW, Meissner HI, Simon P. Access to mammography screening in a large urban population: a multi-level analysis. Cancer Causes Control. 2009;20(8):1469–82. doi:10.1007/s10552-009-9373-4.
Peipins LA, Graham S, Young R, Lewis B, Foster S, Flanagan B, et al. Time and distance barriers to mammography facilities in the Atlanta metropolitan area. J Community Health. 2011;36(4):675–83. doi:10.1007/s10900-011-9359-5.
Dai D. Black residential segregation, disparities in spatial access to health care facilities, and late-stage breast cancer diagnosis in metropolitan Detroit. Health Place. 2010;16(5):1038–52. doi:10.1016/j.healthplace.2010.06.012.
Stadt Dortmund. Sozialstrukturatlas 2005 - Demographische und soziale Struktur der Stadt Dortmund, ihrer Stadtbezirke und Sozialräume. In: Dezernat für Arbeit GuS, editor. Dortmund; 2005.
KV.IT Dortmund. IT-Gesellschaft für integrierte Services im Gesundheitswesen. 2015. http://www.kv-it-gmbh.de/. Accessed 04/21/2015.
University of Münster. Institute for Epidemiology and Social Medicine. 2015. http://campus.uni-muenster.de/index.php?id=5943&L=1. Accessed 04/21/2015.
Knorr-Held L. Bayesian modelling of inseparable space-time variation in disease risk. Stat Med. 2000;19(17–18):2555–67.
Besag J, York J, Mollie A. Bayesian Image-Restoration, with 2 Applications in Spatial Statistics. Ann I Stat Math. 1991;43(1):1–20.
Schrodle B, Held L. Spatio-temporal disease mapping using INLA. Environmetrics. 2010;22(6):725–34.
Held L, Natario I, Fenton SE, Rue H, Becker N. Towards joint disease mapping. Stat Methods Med Res. 2005;14(1):61–82.
Ugarte MD, Adin A, Goicoa T, Militino AF. On fitting spatio-temporal disease mapping models using approximate Bayesian inference. Stat Methods Med Res. 2014;23(6):507–30. doi:10.1177/0962280214527528.
Stadt Dortmund. Dortmunder statistisches Informationssystem. 2015. https://www.domap.de/wps/portal/dortmund/produktanzeige?p_id=statistischedaten0. Accessed 04/21/2015.
Rothman KJ, Greenland S, Lash TL. Modern epidemiology. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins; 2008.
Wilking H, Hohle M, Velasco E, Suckau M, Eckmanns T. Ecological analysis of social risk factors for Rotavirus infections in Berlin, Germany, 2007–2009. Int J Health Geogr. 2012;11:37. doi:10.1186/1476-072X-11-37.
Rue H, Martino S, Chopin N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J R Stat Soc Series B. 2009;71(2):319–92. doi:10.1111/j.1467-9868.2008.00700.x.
Bivand RS, Gomez-Rubio V, Rue H. Spatial Data Analysis with R-INLA with Some Extensions. J Stat Softw. 2015;63(20):1–31.
Lindgren F, Rue H. Bayesian Spatial Modelling with R-INLA. J Stat Softw. 2015;63(19):1–25.
R Development Core Team. A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015.
Muller G, Berger K. Neighbourhood deprivation and type 2 diabetes: results from the Dortmund Health Study (DHS). Gesundheitswesen. 2013;75(12):797–802. doi:10.1055/s-0033-1333737.
Dailey AB, Brumback BA, Livingston MD, Jones BA, Curbow BA, Xu X. Area-level socioeconomic position and repeat mammography screening use: results from the 2005 National Health Interview Survey. Cancer Epidemiol Biomarkers Prev. 2011;20(11):2331–44. doi:10.1158/1055-9965.EPI-11-0528.
Dailey AB, Kasl SV, Holford TR, Calvocoressi L, Jones BA. Neighborhood-level socioeconomic predictors of nonadherence to mammography screening guidelines. Cancer Epidemiol Biomarkers Prev. 2007;16(11):2293–303. doi:10.1158/1055-9965.EPI-06-1076.
Pornet C, Dejardin O, Morlais F, Bouvier V, Launoy G. Socioeconomic and healthcare supply statistical determinants of compliance to mammography screening programs: a multilevel analysis in Calvados, France. Cancer Epidemiol. 2010;34(3):309–15. doi:10.1016/j.canep.2010.03.010.
von Euler-Chelpin M, Olsen AH, Njor S, Vejborg I, Schwartz W, Lynge E. Socio-demographic determinants of participation in mammography screening. Int J Cancer. 2008;122(2):418–23. doi:10.1002/ijc.23089.
Kothari AR, Birch S. Individual and regional determinants of mammography uptake. Can J Public Health. 2004;95(4):290–4.
Maheswaran R, Pearson T, Jordan H, Black D. Socioeconomic deprivation, travel distance, location of service, and uptake of breast cancer screening in North Derbyshire, UK. J Epidemiol Community Health. 2006;60(3):208–12. doi:10.1136/jech.200X.038398.
Ouédraogo S, Dabakuyo-Yonli TS, Amiel P, Dancourt V, Dumas A, Arveux P. Breast cancer screening programmes: Challenging the coexistence with opportunistic mammography. Patient Educ Couns. 2014;97(3):410–7. doi:10.1016/j.pec.2014.08.016.
Peek ME, Han JH. Disparities in screening mammography. Current status, interventions and implications. J Gen Intern Med. 2004;19(2):184–94.
Chamot E, Charvet AI, Perneger TV. Who gets screened, and where: a comparison of organised and opportunistic mammography screening in Geneva, Switzerland. Eur J Cancer. 2007;43(3):576–84. doi:10.1016/j.ejca.2006.10.017.
Eurostat. Breast cancer screening statistics. In: Statistics explained. 2012. http://ec.europa.eu/eurostat/statistics-explained/index.php/Breast_cancer_screening_statistics. Accessed 03/30/2015 2015.
Heidinger O, Batzler WU, Krieg V, Weigel S, Biesheuvel C, Heindel W, et al. The incidence of interval cancers in the German mammography screening program: results from the population-based cancer registry in North Rhine-Westphalia. Dtsch Arztebl Int. 2012;109(46):781–7. doi:10.3238/arztebl.2012.0781.
Bluekens AM, Karssemeijer N, Beijerinck D, Deurenberg JJ, van Engen RE, Broeders MJ, et al. Consequences of digital mammography in population-based breast cancer screening: initial changes and long-term impact on referral rates. Eur Radiol. 2010;20(9):2067–73. doi:10.1007/s00330-010-1786-7.
DiMaggio C. Small-area spatiotemporal analysis of pedestrian and bicyclist injuries in new york city. Epidemiology. 2015;26(2):247–54. doi:10.1097/EDE.0000000000000222.
Papoila AL, Riebler A, Amaral-Turkman A, Sao-Joao R, Ribeiro C, Geraldes C, et al. Stomach cancer incidence in Southern Portugal 1998–2006: a spatio-temporal analysis. Biom J. 2014;56(3):403–15. doi:10.1002/bimj.201200264.
We would like to thank KV.it (Dortmund) for georeferencing the screening participants. Additionally, we thank the statistical department of Dortmund for supplying the geometrical boundary information, the background population data, and the covariate data.
Institute of Epidemiology and Social Medicine, Medical Faculty, Westfälische Wilhelms-Universität Münster, Albert-Schweitzer-Campus 1 D3, D 48149, Münster, Germany
Dorothea Lemke & Hans-Werner Hense
Institute for Geoinformatics, Geosciences Faculty, Westfälische Wilhelms-Universität Münster, Münster, Germany
Dorothea Lemke & Edzer Pebesma
Reference Center for the Mammography Screening Program, University Hospital, Westfälische Wilhelms-Universität Münster, Münster, Germany
Shoma Berkemeyer
Epidemiological Cancer Registry North Rhine-Westphalia, Münster, Germany
Volkmar Mattauch, Oliver Heidinger & Hans-Werner Hense
Dorothea Lemke
Volkmar Mattauch
Oliver Heidinger
Edzer Pebesma
Hans-Werner Hense
Correspondence to Dorothea Lemke.
The authors declare that they have no competing interest.
DL and HWH designed the study. VM and OH provided the data. DL carried out all analyses and drafted the first version of the manuscript. SB carried out confirmatory checks. DL, SB, EP and HWH interpreted the results. All authors were involved in critical review of the manuscript and assented to final version.
Lemke, D., Berkemeyer, S., Mattauch, V. et al. Small-area spatio-temporal analyses of participation rates in the mammography screening program in the city of Dortmund (NW Germany). BMC Public Health 15, 1190 (2015). https://doi.org/10.1186/s12889-015-2520-9
Mammography screening
Spatio-temporal modelling
Faster alternative to travel to any location in the west
Suppose one needs to travel to Somalia from Indonesia (the two are roughly $4000$ miles apart along the equator). To accomplish this task, I suggest the following method:
Design an airplane that can climb radially out of the earth's atmosphere (~$300$ miles thick) and stand still until the landing point in Somalia is just about to arrive radially beneath the plane's stationary position (the earth rotates from west to east), at which instant the landing process is initiated. In this way, the total time taken would be:
\begin{equation} T=\frac{4000\ (mi)}{1000\ (mi/h)} + \epsilon \end{equation} Here, $1000\ (mi/h)$ is the speed of any point on the earth's equator and $\epsilon$ is the small time the aircraft takes to travel to and fro through the earth's atmosphere.
The total journey time with this method should not exceed $6$ hours, which is considerably less than the actual flight time of about $12$ hours. My question is: why is this technique not used to speed up travel between two points, given that the absolute speed of the aircraft (w.r.t. space) can supposedly be reduced to zero once it leaves the earth's atmosphere?
atmosphere earth rotation
Akshay Bansal
$\begingroup$ Note that the atmosphere is only about 60 miles high, for practical purposes. (Above that, drag affects satellites at orbital speed on a scale of hours, days, or weeks, rather than the minutes or seconds at lower altitudes.) But I'm not clear how you're "standing still", exactly. $\endgroup$ – Nathan Tuggy Feb 3 '16 at 3:26
Summary: It's not a great idea. It will take you over 8 hours, and you'll end up 5400+ miles above your destination. Details follow.
Since you don't specify the speed of the plane, let's assume for the moment that you can move 300 miles up from your current location instantaneously.
To avoid the air/wind problem, let's also assume we are doing this on an airless planet that is otherwise similar to Earth.
Finally, since you can "stand still in the air", we'll ignore the effect of gravity as well.
Since your starting point is rotating, its (x, y) position, in an Earth-centered frame with the y-axis through your starting point, is modeled as:
$ \left\{\text{eer} \sin \left(\frac{2 \pi t}{\text{sidday}}\right),\text{eer} \cos \left(\frac{2 \pi t}{\text{sidday}}\right)\right\} $
where t is the time in seconds since the "launch", eer = 6378.137 is the Earth's equatorial radius in kilometers and sidday = 86164.1 is the length of the sidereal day in seconds.
And your destination is:
$ \left\{-\text{eer} \sin \left(\frac{\text{dist}}{\text{eer}}-\frac{2 \pi t}{\text{sidday}}\right),\text{eer} \cos \left(\frac{\text{dist}}{\text{eer}}-\frac{2 \pi t}{\text{sidday}}\right)\right\} $
where dist = 4000*1.609344 is the distance to your destination in kilometers.
Since your launch point has an initial velocity of about 1000 mph, so do you. This means your position at time t is:
$\left\{\frac{2 \pi \text{eer} t}{\text{sidday}},\text{eer}+\text{hi}\right\}$
where hi= 300*1.609344 is your initial height in kilometers.
With these conditions, you will be over your destination about 28911 seconds after you start (8 hours, 1 minute and 51 seconds), at a height of about 8718 kilometers (about 5417 miles).
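For anyone who wants to check these figures, here is a minimal sketch that solves for the moment the craft is radially above the destination, under the same simplifications as above (instantaneous ascent to 300 miles, an airless planet, no gravity, and straight-line coasting at the initial tangential speed):

```python
# A minimal sketch reproducing the numbers above (about 28911 s and 8718 km),
# using the same simplified model; all constants are as defined in the text.
import math
from scipy.optimize import brentq

eer = 6378.137               # Earth's equatorial radius, km
sidday = 86164.1             # sidereal day, s
dist = 4000 * 1.609344       # surface distance to the destination, km
hi = 300 * 1.609344          # initial height, km

def angle_mismatch(t):
    x = 2 * math.pi * eer * t / sidday        # craft coasts tangentially
    y = eer + hi
    craft_angle = math.atan2(x, y)            # angle from the +y axis
    dest_angle = 2 * math.pi * t / sidday - dist / eer
    return craft_angle - dest_angle

t_over = brentq(angle_mismatch, 1.0, sidday / 2)   # craft radially above destination
x = 2 * math.pi * eer * t_over / sidday
height = math.hypot(x, eer + hi) - eer
print(f"over destination after {t_over:.0f} s ({t_over/3600:.2f} h), height {height:.0f} km")
```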
Your average surface velocity would be right around 500 miles per hour, slower than a supersonic airplane, and even slower if you include the time to ascend 300 miles at the start and descend 5417 miles at the end.
Of course, if your magic plane can move 300 miles in time $\epsilon$, you could just as easily apply a westward thrust of ~1000 mph to cancel out your initial eastward velocity, in which case your calculations would be correct.
A plane as powerful as this, however, could probably travel anywhere in the world rapidly, without help from the Earth's rotation, so you'd be best off aiming it west, just as you would a normal airplane.
I toyed with the idea that you could launch at a high velocity and land back on Earth at your destination purely due to gravity (albeit at a very high speed) as per:
Trajectory of projectile launched from planet's surface
but haven't been able to get the numbers to work. My work on the high-velocity launch idea is at:
https://github.com/barrycarter/bcapps/blob/master/MATHEMATICA/bc-solve-physics-232844.m
barrycarter
The first problem is just that there is no free lunch in decelerating/accelerating; in particular, there's nothing in space to slow you down, nothing that is "unmoving" in any real sense to push against, so if you go straight up, you're still moving along with the earth's rotation at the same speed. (Not the same angular velocity, but the same mph.) So just going up and then coming down doesn't buy you all that much: you cannot "reduce absolute speed to zero" that way without actually reducing speed in exactly the same way you normally do (by burning fuel).
By going outward, you can reduce your angular velocity somewhat, meaning that you'd no longer quite keep up with the number of (micro)radians per second that the earth is traversing. Unfortunately, the earth's radius is a good few thousand miles, and going a measly few hundred out will do very little that way.
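To put a number on "very little", here is a rough back-of-envelope sketch (my own illustration, not part of the original answer): hover at a constant 300-mile altitude while keeping the tangential speed you launched with, and see how fast the ground actually slips beneath you.

```python
# Illustrative only: constant-altitude hover at the launch tangential speed.
import math

r_earth = 3959.0                        # Earth's equatorial radius, miles
h = 300.0                               # assumed hover altitude, miles
sid_day = 23.934                        # sidereal day, hours

v = 2 * math.pi * r_earth / sid_day     # equatorial surface speed, ~1040 mph
# Your angular rate is v/(r+h), the ground's is v/r, so the ground-track
# slip speed is v*h/(r+h):
slip = v * h / (r_earth + h)            # ~73 mph of westward drift
print(f"slip {slip:.0f} mph -> {4000/slip:.0f} h to drift 4000 miles")
```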
A second problem is the challenge of hovering for that long; you're not going nearly fast enough to use orbital mechanics to fall around the earth continuously for free, so instead you have to burn fuel, and a lot of it, to keep up a continuous 1g vertical acceleration just to keep from falling. You can save a bit by simply accelerating in a higher arc and then letting yourself drop down. That way, gravity is reduced a little at the top. But it takes quite a distance before that saves you much, and if you go too far, you'll take too long to fall back down and you'll land in the wrong place.
The third problem, although far smaller, is that to the comparatively simple challenge of keeping airliners pressurized you have now added the thornier problem of maintaining atmosphere in space for hours. Worse, you also have to manage temperature. That's harder than it sounds in space, where there's glaring sunlight and no air to carry away heat. (With hundreds of warm bodies in a small enclosure, freezing would not be a problem.)
Finally, you have to use rockets for a lot of this. Rockets are less efficient than jet engines by a large margin, because they have to carry their own oxidizer with them, and that requires extra fuel and extra oxygen, which requires more fuel and oxygen, and so forth. When you're also doing far more acceleration than a normal airliner, the result is burning many times as much fuel, which also requires a much larger (and more expensive) vehicle to carry it all.
Oh yeah, and you're probably going to be breaking the speed of sound at various stages during your flight. This is already a practical problem for airliners; Concorde was retired largely because of problems getting clearances where people wouldn't be bothered too much by sonic booms, and the fuel flow at supersonic speeds is ... you guessed it, much higher than subsonic, where airliners normally are.
In sum, fuel costs are going to be perhaps hundreds of times as much, the vehicle will cost far more to design and build, you'll have to train pilots in new and exciting ways to handle problems... and you won't actually save all that much time, if only because you'll have to fly complicated paths around anywhere that's populated.
Probing multiphoton light-induced molecular potentials
M. Kübel1,2,3, M. Spanner1, Z. Dube1, A. Yu. Naumov1, S. Chelkowski4, A. D. Bandrauk4, M. J. J. Vrakking5, P. B. Corkum1, D. M. Villeneuve1 & A. Staudte1
Nature Communications 11, 2596 (2020)
Atomic and molecular interactions with photons
Attosecond science
The strong coupling between intense laser fields and valence electrons in molecules causes distortions of the potential energy hypersurfaces which determine the motion of the nuclei and influence possible reaction pathways. The coupling strength varies with the angle between the light electric field and valence orbital, and thereby adds another dimension to the effective molecular potential energy surface, leading to the emergence of light-induced conical intersections. Here, we demonstrate that multiphoton couplings can give rise to complex light-induced potential energy surfaces that govern molecular behavior. In the laser-induced dissociation of H2+, the simplest of molecules, we measure a strongly modulated angular distribution of protons which has escaped prior observation. Using two-color Floquet theory, we show that the modulations result from ultrafast dynamics on light-induced molecular potentials. These potentials are shaped by the amplitude, duration and phase of the dressing fields, allowing for manipulating the dissociation dynamics of small molecules.
Potential energy surfaces describe the forces acting on the nuclei of a molecule. Within the Born–Oppenheimer approximation the motion of the nuclei along these potentials is treated independently of the electronic motion. This picture breaks down when the electronic level separation becomes comparable to the kinetic energy of the nuclei. This occurs at specific points in the molecular geometry, which are known as conical intersections and that are a hallmark of polyatomic molecules1. Conical intersections play an eminent role in visible and ultraviolet photochemistry2,3, for example, in isomerization4,5, and electron transfer processes6. Moreover, they are strongly implicated in the photostability of DNA by way of allowing radiation-less de-excitation7.
The single-photon transition between two dipole-coupled electronic states can also create a conical, albeit transient, intersection. Hence, these localized features of the laser-dressed potential energy surface have been dubbed light-induced conical intersections (LICI)8,9. Their precise position, and the underlying dipole coupling strength are determined by the frequency and intensity of the incident light. LICIs can also be found in diatomic molecules, since the angle between the light polarization and the molecular axis adds another degree of freedom to the nuclear motion10,11. The angle-dependent distortion of the molecular potential energy surfaces in a linearly polarized laser field directly affects molecular dissociation12,13,14,15,16 and has been predicted to cause rotational excitation17,18,19,20,21,22. Recently, experimental indications of LICI in H2+ have been found in angle-resolved ion spectra23.
In ultrashort infrared laser fields, the light intensity can easily exceed the threshold for multiphoton transitions. While LICIs are a consequence of single-photon couplings and therefore the potential energy scales linearly with respect to variations of the laser field strength, multiphoton couplings lead to unique structures of their own. In the case of diatomic molecules, these structures become nonlinear point intersections of the potential energy surfaces. The one-dimensional (1D) treatment of single and multiphoton resonances has led to the prediction of light-induced potentials (LIPs)23,24,25,26,27,28,29, and anomalous fragment angular distributions have been predicted in the non-perturbative regime30. Experimentally, however, the consequences of the angle-dependent coupling strength around nonlinear point intersections for the dissociation dynamics have so far been largely unexplored.
Here, we show in theory and experiment that LIPs featuring nonlinear light-induced point intersections can result in strong modulations of the angular ion yield. Using a two-color pump–probe scheme allows us to probe and control the nuclear dynamics, as the underlying LIPs evolve in the laser field. The experimental results are interpreted with the help of numerical solutions of the time-dependent Schrödinger equation and two-color Floquet theory. We find that, owing to both linear and nonlinear transitions, as well as rotational dynamics, the two-color laser field gives rise to remarkably complex dissociation dynamics that produce the strong modulations in the angular ion yield.
For our studies, we choose the simplest molecule, H2+, which is widely regarded as a prototype system for the interaction of molecules with light31. Due to the sparsity of electronic states, H2+ can often be described as a two-level system consisting of the two lowest electronic states, 1sσg and 2pσu. When coupled to intense laser fields, these states give rise to intensity-dependent dissociation mechanisms known as bond softening32 and above-threshold dissociation33. The possibility of controlling these mechanisms using the laser amplitude, frequency, phase, and pulse duration has been demonstrated34,35,36,37,38. In particular, the opposite parity of the σg and σu states can lead to electron localization39, giving rise to charge resonance-enhanced ionization40,41 and symmetry breaking in dissociation42,43,44 under the influence of two-color laser fields45,46 or carrier-envelope phase (CEP)-stable few-cycle pulses47,48,49,50. Notably, the dissociation dynamics are sensitive to the rather complex shape of orthogonally polarized two-color fields51. Descriptive treatments for most of these phenomena are provided by dressed-state pictures, such as LIPs.
Nonlinear point intersections
Figure 1a shows an example of some LIP energy surfaces for H2+ in a moderately intense (3 × 10¹³ W/cm²), visible (685 nm) laser field, calculated using the procedure outlined in the "Methods" section. Shown are the laser-dressed states σg and σu − 1ħωVIS, i.e., σu shifted down due to the absorption of one photon from the visible field. Along the laser polarization, i.e., θ = 0, π, the state crossing indicated in Fig. 1b opens up and turns into an avoided crossing. This necessarily lowers the potential barrier at the avoided crossing and permits dissociation of formerly bound molecules (bond softening). Importantly, no such avoided crossing occurs when the laser polarization is perpendicular to the internuclear axis. Therefore, a LICI is formed at the internuclear distance where the laser-dressed σg and σu − 1ħωVIS states cross.
Fig. 1: Diabatic single- and multiphoton light-induced molecular potentials.
a Angle-dependent potential energy surfaces of H2+ dressed by a linearly polarized, moderately intense, visible (685 nm, 3 × 10¹³ W/cm²) laser field. The nuclear potential energy E is plotted as a function of the internuclear distance R, and the angle θ between the molecular axis and the laser polarization. The LICI at the one-photon crossing is marked by a 1. The potential energy curve shown in b corresponds to a lineout along the internuclear distance R through the LICI. The dotted line marks the position of the one-photon resonance between the σg and σu states. The potential energy diagram in c shows several multiphoton crossings between the σg (dissociation limits of (…, 0, −2, −4, …) ωIR) and σu (dissociation limits of (…, 1, −1, −3, …) ωIR) states in a 2300 nm dressing field. Two of these states are labeled, and the locations of multiphoton transitions of order 7, 5, 3, and 1 (from left to right) are indicated by dashed lines. d Same as a for a mid-IR dressing field (2300 nm, 3 × 10¹³ W/cm²). Structures attributed to 1, 3, and 5 photon couplings are marked. e Calculated proton momentum distributions produced by a 2300 nm, 3 × 10¹³ W/cm² laser field polarized along the z-axis. The color scale indicates the proton yield. Contributions from dissociation on the σg and σu surfaces are shown separately, and structures attributed to the 1, 3, and 5 photon couplings are indicated.
While single-photon transitions dominate in moderately intense visible laser fields, multiphoton transitions become relevant when the wavelength is shifted into the mid-infrared52. For example, the three-photon transition by 2300 nm light becomes significant already at an intensity as low as 5 × 10¹² W/cm², see Supplementary Fig. 1 for details. Several crossings of potential curves that correspond to multiphoton transitions between the σg and σu states of H2+ at a wavelength of 2300 nm are shown in Fig. 1c. The corresponding LIP energy landscape calculated for an intensity of 3 × 10¹³ W/cm² is presented in Fig. 1d. The potential energy surfaces exhibit complex structures that result from multiphoton couplings. Indicated in the figure are the curve crossings due to n-photon (n = 1, 3, and 5) transitions. Notably, the intersections for n = 3 and 5 are not conical (see Supplementary Fig. 2), as would be the case for all higher intersections. In order to see how these light-induced structures affect the nuclear dynamics, we first solve the two-dimensional (2D) time-dependent Schrödinger equation (2D TDSE, see "Methods" section for details) for H2+ under the influence of a moderately intense mid-IR laser pulse. The calculated proton momentum distributions presented in Fig. 1e show distinct features in the proton angular distribution that can be associated with the n-photon couplings. While these features are absent in the single-photon coupling regime at low intensity, significantly higher intensities produce very convoluted dissociation patterns that involve high-order couplings, but would likely defy experimental resolution, see Supplementary Fig. 3.
Structured proton angular distribution
In order to experimentally probe the light-induced molecular potentials depicted in Fig. 1d, e, describing the situation where the mid-IR field induces multiphoton dynamics in the dissociation process, but does not cause ionization, we implement the two-pulse scheme depicted in Fig. 2a. First, an intense, few-cycle visible laser pulse ionizes neutral H2, producing a bound coherent wave packet in H2+ with a nearly isotropic alignment distribution, with respect to the laser polarization53. Second, a moderately intense mid-IR pulse creates the LIPs on which dissociation occurs. The LIPs are probed by recording the momentum distribution of protons resulting from the dissociating part of the molecular wave packet. The molecular ions dissociate along their initial alignment direction, unless rotational dynamics occur, as predicted, e.g., in refs. 17,18,21,22.
Fig. 2: Experimental signatures of LIPs.
a A few-cycle visible pulse (yellow) ionizes H2 and prepares a wave packet on the σg state of the molecular cation. An additional mid-infrared control pulse (red) couples the σg and σu states by 1, 3, or 5 photons, creating LIPs, on which H2+ dissociates into an H atom and a proton. b The measured proton momentum distribution in the recoil frame for perpendicularly polarized two-color fields (730 nm, 5 fs, 2 × 10¹⁴ W/cm², and 2300 nm, 45 fs, 3 × 10¹³ W/cm²) probes the LIPs. It strongly contrasts results obtained for c only mid-IR pulses (2000 nm, 65 fs, 1 × 10¹⁴ W/cm²) or d only visible (730 nm, 5 fs, 2 × 10¹⁴ W/cm²) pulses. The angular structure is moreover absent in two-color experiments carried out with parallel polarization (e). The data has been integrated over the direction perpendicular to the polarization plane. The arrows indicate the polarization axes of the visible (orange) and mid-IR (red) laser pulses. The color scale represents the measured proton yield normalized to the maximum value in each plot.
The intent of the two-pulse scheme is to decouple the production of the molecular wave packet from the field that generates the LIPs. This allows for probing the LIPs at selected times within the mid-IR pulse by scanning the time delay between the laser pulses. Moreover, the use of a shorter wavelength pulse for ionization allows us to reduce the focal volume averaging in the long-wavelength field, which often washes out subtle features in strong-field experiments (e.g., ref. 54). Finally, choosing a perpendicular relative polarization of the visible and mid-IR pulses is expected to avoid overlap between the signal of interest produced by the mid-IR pulse, and any protons produced by the visible pulse alone. The experimental setup is described in the "Methods" section "Experiment".
Figure 2b shows experimental results obtained with the cross-polarized few-cycle visible and mid-IR pulses. The three-dimensional (3D) momentum distributions of protons and electrons were measured in coincidence, using Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS). The coincidence measurement allows us to present results in the recoil frame, where the impact of the electron recoil has been largely removed from the measured ion momentum distribution. The results exhibit a striking angular structure that is blurred if the recoil momentum is not accounted for (see Supplementary Fig. 6). The angular structure consists of on-axis features along either of the polarization axes, and additional spots at intermediate angles. Drawing a comparison to results obtained with only mid-IR (Fig. 2c) or visible (Fig. 2d) pulses suggests that the on-axis features arise from bond softening by either pulse alone. Note that the signal along the mid-IR polarization in the two-color experiment does not arise from dissociative ionization of neutral H2 by the mid-IR pulse on its own, as no notable ionization of neutral H2 is obtained at the intensity of 3 × 10¹³ W/cm². Hence, the comparative mid-IR only data (Fig. 2c) is presented for a higher intensity of 1 × 10¹⁴ W/cm². The additional spots in the two-color data are tentatively attributed to dynamics caused by the light-induced structures in the molecular potential energy landscape; cf. Fig. 1d.
Surprisingly, the experimental results presented in Fig. 2b exhibit a much more pronounced angular structure than the TDSE results for the mid-IR field alone, presented in Fig. 1e. Moreover, the angular structure survives averaging over kinetic energy, in contrast to the weaker modulations in previous work on LICI (ref. 23). A hint on the origin of the additional angular structure in the present experiment comes from measurements carried out with a parallel polarization of the visible and mid-IR pulses. The proton momentum distribution obtained with parallel polarization is shown in Fig. 2e. It resembles the results obtained with mid-IR pulses only and does not exhibit significant angular structure. Although the intention of our scheme was to decouple the effects of the mid-IR and visible pulses, the striking dependence of the angular structure on the relative polarization of the two laser fields implies that the visible field contributes to the formation of the additional spots. It will thus be considered in the following analysis of our results.
Numerical results
In a first step to understand the dynamics producing the structured proton angular distribution observed in cross-polarized fields, we solve the 2D TDSE for H2+, taking both laser pulses into account. Due to the observed importance of the visible field in shaping the experimental results, we also consider a weak pulse pedestal at 5% of the peak intensity and 45 fs duration (full-width at half-maximum of the intensity envelope) for the visible few-cycle pulse. These values are consistent with field-resolved measurements of few-cycle pulses55. The initial alignment of the molecular axis with respect to the laser polarization is assumed to be isotropic.
It has been recognized in the literature that angular modulations in the proton spectra can arise from rotational dynamics in the vicinity of the LICI (refs. 18,21,22,23); more specifically, simulations that include rotational motion in the dissociation dynamics show angular modulations that are absent when the rotational degrees of freedom are frozen. These modulations can be connected to rotational scattering of the dissociating wave packet from the LICI (refs. 18,21,22,23). A first candidate for the physical mechanism underlying the appearance of a structured proton angular distribution is therefore the formation of a high-order rotational wave packet in the dissociating molecular cation. In order to test the role of rotational dynamics, we perform a first set of calculations where rotational transitions are artificially switched off and present the results in Fig. 3a. Evidently, pronounced modulations in the angular distribution are obtained, even without the inclusion of wave packet rotation. This suggests that rotational dynamics are not the primary physical mechanism underlying the angular modulation in the proton momentum distribution. Therefore, it will be important to identify how angular modulations arise already within a 1D treatment.
Fig. 3: Numerical results for two-color bond softening of H2+.
Proton momentum distributions obtained from solving the time-dependent Schrödinger equation a without and b with rotational dynamics taken into account. The colorbar represents the proton yield per momentum bin. The black arrows indicate population transfer through rotational dynamics. The dotted black lines at various angles are drawn as a guide to the eye. c The calculated proton angular distributions are compared to the measured data presented in Fig. 2b. Each angular distribution is integrated over kinetic energy and normalized to its integral. The error bars (s.d.) for the experimental data are of the same size as the symbols. The calculations have been averaged over a delay range corresponding to one mid-IR cycle around the temporal overlap of the few-cycle visible pulse with the center of the mid-IR pulse. d The LIP energy landscape obtained from Floquet theory for a two-color field with relative phase Δφ = 0. The nuclear potential energy is plotted as a function of the internuclear distance R, and the angle θ between the molecular axis and the polarization of the visible field.
The second set of calculations (Fig. 3b) takes rotational dynamics into account. The differences between Fig. 3a, b show the impact of rotations in certain parts of the momentum distributions. The black arrows highlight pronounced differences between the results of the two calculations at angles θ = 0° and θ = 20°. These differences illustrate the role of rotational alignment in shaping the final momentum distribution. On the basis of the comparison between Fig. 3a, b, we conclude that rotational dynamics play a significant but secondary role in defining the final momentum distribution; in contrast with the previously considered pure LICI case, the addition of rotations is not the sole cause of the angular structures in our experiment. A direct comparison of the calculated and measured angular distributions is given in Fig. 3c. Indeed, the strong modulations observed in the experimental data are only obtained in the simulations that take rotations into account. However, the modulation depth in the experimental data is smaller than in the simulations with rotations, which is ascribed to the reduced dimensionality of the simulations.
In order to identify the essential mechanism creating the angular structure in the absence of rotations, we employ two-color Floquet theory (see "Methods" section). We calculate the angle-dependent field-dressed states of H2+ using a two-color laser field, \({\mathbf{F}}\left( t \right) = \sqrt {I_{{\mathrm{VIS}}}} \cos (\omega _{{\mathrm{VIS}}}t + \varphi _{{\mathrm{VIS}}}){\hat{\mathbf{x}}} + \sqrt {I_{{\mathrm{IR}}}} \cos \left( {\omega _{{\mathrm{IR}}}t + \varphi _{{\mathrm{IR}}}} \right){\hat{\boldsymbol{z}}}\) (\({\hat{\mathbf{x}}}\) and \({\hat{\mathbf{z}}}\) being the unit vectors along x- and z-directions, respectively).
The field consists of a moderately intense mid-infrared field (λIR = 2280 nm, IIR = 3 × 10¹³ W/cm²) and a weak visible field (IVIS = 1 × 10¹³ W/cm²), corresponding to the pulse pedestal used in the TDSE calculations. We take λVIS = λIR/3 to ensure the periodicity of the laser fields required by Floquet theory. The resulting LIP energy landscape depends on the relative optical phase, Δφ = φVIS − φIR. As an example, we present the LIP energy landscape for Δφ = 0 in Fig. 3d. Both experimental and numerical results, presented in Figs. 2 and 3, respectively, are integrated over Δφ.
A detailed analysis and discussion of the results from Floquet theory is presented in Supplementary Note 2. In brief, we find that the Floquet states represent a conclusive basis for understanding the emergence of angular structure in the proton momentum distribution, even without rotational dynamics taken into account. In the absence of rotations, the angular structure arises through a process we shall call angle-dependent channel switching, as different orders of multiphoton couplings dominate at different alignment angles of the molecular axis with respect to the laser polarization. The following picture can be invoked.
As the alignment angle of the molecular axis in the polarization plane (see Fig. 3), θ, is increased, the field components parallel to the molecular axis vary as FVIS ∝ cos(θ) and FIR ∝ sin(θ). This leads to a pronounced angle dependence of the field-dressed potential energy curves (see Fig. 3d), where several Floquet state crossings open up and close again as θ is varied. Specifically, at θ = 0°, i.e., for alignment of the molecular axis perpendicular to the mid-IR polarization, the effect of the mid-IR field is insignificant, and dissociation proceeds as in the single-color case (seen in Fig. 2b); that is, the wave packet dissociates on the purple surface in Fig. 3d. As θ is increased, a new dissociation channel due to one-photon coupling by the mid-IR field opens up. The new channel competes with the original one, which moves population to the pronounced feature at θ = 10°, making the on-axis feature much narrower than in the single-color case. This new dissociation channel corresponds to dissociation on the red surface in Fig. 3d. As θ is further increased, the width of the avoided crossing reaches 2ωIR, which closes the dissociation channel and gives rise to a LICI at θ ≈ 30°, clearly visible in Fig. 3d.
Notably, the computational results obtained without (Fig. 3a) and with (Fig. 3b) rotations strongly differ at θ ≈ 20°. We attribute this to the presence of the aforementioned LICI that promotes strong rotational dynamics, as the nuclear wave packet propagates around the cone in the LIP landscape. In a similar manner, the splitting of the narrow feature at 0° in Fig. 3a into the double peak structure in Fig. 3b is attributed to the point intersections at θ = 0°.
Delay dependence
Scanning the time delay between the visible and mid-IR pulses in our experiment allows us to probe the variations in the LIPs throughout the mid-IR pulse. The time delay controls the time of ionization with respect to the mid-IR pulse, and thereby determines the (i) strength, (ii) duration, and (iii) phase of the mid-IR field at the time it interacts with the molecular ion. In Fig. 4, we analyze the fragment momentum distribution for overlap of the ionizing visible pulse with the rising edge, the maximum, and the falling edge of the mid-IR laser pulse. Each of the presented spectra is integrated over two mid-IR cycles, and is therefore not expected to be sensitive to the mid-IR phase.
Fig. 4: Tracking the evolution of light-induced molecular potentials in H2+ throughout a mid-IR laser pulse.
a Vector potential of the mid-IR dressing field measured using the STIER technique. b–d Measured H+ momentum distributions in the polarization plane for different ionization times within the mid-IR pulse. The signal has been integrated over the delay ranges indicated by the brackets. The colorbar indicates the proton yield normalized to the maximum in each plot.
Figure 4a shows the vector potential of the mid-IR pulse used in our experiment, as measured with the STIER technique56 (see Supplementary Fig. 5). Selected recoil-frame proton momentum distributions are presented in Fig. 4b–d. The delay-dependent results probe the evolution of the LIP energy landscape throughout the mid-IR pulse. This is evidenced by the changes in the recorded dissociation patterns, as the delay between visible and mid-IR pulses is varied. For example, the feature at θ = 90°, i.e., along the mid-IR polarization axis, peaks around the center of the pulse, Fig. 4c, where it represents the strongest contribution to the proton momentum distribution. When the ionization occurs on the falling edge of the mid-IR pulse (Fig. 4d), the 90° feature is absent. On the basis of the computational results presented in Fig. 1 and the two-color Floquet states shown in Fig. 3d, we attribute this peak to a five-photon coupling induced by the mid-IR pulse. The nonlinearity of this process explains why this feature is particularly visible near the maximum of the mid-IR pulse and decays rapidly on the falling edge of the pulse. Notably, the maximum yield of protons emitted at 90° is obtained when the visible pulse precedes the peak of the mid-IR pulse by (8.3 ± 0.5) fs, in reasonable agreement with the 7.3 fs vibrational half-period of H2+ (ref. 57). For earlier delays, when ionization occurs on the rising edge of the mid-IR pulse (Fig. 4b), the weaker signal at 90° indicates that dissociation occurs before the molecular ion interacts with the center of the mid-IR pulse. Similar observations are made for the feature at intermediate angles (around θ ≈ 40° in Fig. 3a). Its angular position also varies from θ ≈ 30° in Fig. 4b toward θ ≈ 40° in Fig. 4c.
Contrary to the nonlinear features, the feature at θ = 10°, i.e., close to the visible polarization axis, exhibits little delay dependence. As discussed above, this feature can be understood as a consequence of the single-photon couplings by both the visible and the mid-IR fields. The absence of nonlinearity in this process explains the insensitivity of the 10° feature to the mid-IR intensity. Calculated proton momentum distributions for different delay values, which are consistent with these conclusions, are presented in Supplementary Fig. 7.
In summary, we have demonstrated a powerful approach for probing light-induced molecular potentials. We observed strongly modulated proton angular distributions in experiments where H2+ ions, produced by a linearly polarized, few-cycle, visible laser pulse, are dissociated by a cross-polarized mid-IR laser field. We have shown that the modulations can be understood as signatures of complex LIP energy landscapes that are shaped by both single-photon and multiphoton transitions in a cross-polarized two-color laser field. Specifically, the modulations arise from a combination of two effects: First, angle-dependent channel switching, i.e., different dissociation pathways open and close as a function of alignment angle; second, rotational motion around light-induced point intersections, such as LICIs, shapes the modulated angular ion yield. The LIP picture predicts where angle-dependent channel switching takes place, and where prominent light-induced point intersections are present.
Probing the LIPs resulting from the mid-IR dressing field on its own may be improved by using a shorter pulse for preparation of the bound wave packet, such as a few-cycle UV or attosecond pulse. Previous experiments along these lines (e.g., refs. 44,58) were conducted in the single-photon dressing regime and did not study the influence of the LIP surfaces on the angular dependence of dissociation.
Our approach allows us to follow the variation of the LIPs throughout the dressing laser field. On the timescale of the mid-IR pulse envelope, we observe the opening and closing of dissociation pathways as the dressing field strength changes. On shorter time scales, the propagation of the dissociating wave packet will become accessible with sub-femtosecond time resolution by monitoring the electron localization on either fragment. More generally, we have shown how complex LIP energy landscapes determine the outcome of molecular dissociation, using H2 as an example. Our approach will allow for elucidating the reaction dynamics of more complex molecules in the presence of LICIs and higher-order point intersections.
The employed experimental technique is a variant of ref. 56. The output of a commercial Ti:Sa chirped pulse amplification (CPA) laser (Coherent Elite, 10 kHz, 2 mJ) is split into two parts. The stronger part (85%) is used to pump an optical parametric amplifier, in order to obtain CEP-stable idler pulses at 2.3 µm. The second part of the CPA output is focused into an argon-filled hollow-core fiber to obtain broadband laser pulses, which are subsequently compressed to a pulse duration of ≈5 fs. The laser pulses are recombined using a polished Si mirror (thickness 2.2 mm) at 60° angle of incidence.
After recombination, the pulses are focused in the center of a COLTRIMS59, where they intersect a supersonic jet of pre-cooled (T = 60 K) neutral H2. The intensity of the mid-IR pulse is weak enough to not cause any notable ionization by itself. Because ions are only produced in the small focal volume of the visible pulse (1/e² width (7 ± 2) µm), focal volume averaging within the larger focal volume ((30 ± 10) µm) of the mid-IR pulse is essentially avoided. In the COLTRIMS, the 3D momenta of ions and electrons generated in the laser focus are measured in coincidence, which provides access to the recoil-frame ion momentum that arises solely from the nuclear dynamics on the LIPs. See Supplementary Fig. 6 for a comparison of laboratory-frame and recoil-frame measurements. The measurement of the delay dependence of the electron momentum distribution yields the instantaneous mid-IR vector potential at each delay value, as shown in Supplementary Fig. 5.
Time-dependent Schrödinger equation
For the dynamics in the H2+ cation, we solve a 2D (one angle and one bond length) Schrödinger equation that includes dipole coupling between the two relevant electronic states 2Σg+ (also referred to as σg) and 2Σu+ (σu)
$$i\frac{\partial }{\partial t}\begin{bmatrix} \Psi _g(\mathbf{R}) \\ \Psi _u(\mathbf{R}) \end{bmatrix} = -\frac{1}{2\mu }\left( \frac{\partial ^2}{\partial R^2} + \frac{1}{R^2}\frac{\partial ^2}{\partial \theta ^2} \right)\begin{bmatrix} \Psi _g(\mathbf{R}) \\ \Psi _u(\mathbf{R}) \end{bmatrix} + \begin{bmatrix} V_g(R) & -\mathbf{F}(t) \cdot \mathbf{d}(R) \\ -\mathbf{F}(t) \cdot \mathbf{d}(R) & V_u(R) \end{bmatrix}\begin{bmatrix} \Psi _g(\mathbf{R}) \\ \Psi _u(\mathbf{R}) \end{bmatrix}, \tag{1}$$
where R = (R, θ) comprises the bond length R and the angle θ between the laser field and the molecular axis, F(t) is the electric field of the laser that couples the two electronic states, and µ is the reduced mass of the nuclei. The forms of the electronic potential energy curves Vg and Vu, as well as of the transition dipole d, are taken from Bunkin and Tugov60. Equation (1) was solved numerically using the Fourier split-operator method.
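As an illustration of the last step, here is a minimal sketch (not the authors' code) of one Fourier split-operator time step for a coupled two-state problem, reduced to the bond length R only; the potentials, the coupling, and all grid parameters are placeholder values rather than the curves of Bunkin and Tugov:

```python
# A minimal split-operator sketch for Eq. (1) in 1D (bond length only); the
# angular kinetic term would be handled analogously. Values are illustrative.
import numpy as np

N, dR, dt, mu = 1024, 0.02, 0.05, 918.0        # grid size, spacing, step, reduced mass (a.u.)
R = dR * np.arange(1, N + 1)                   # radial grid for Vg(R), Vu(R), d(R)
k = 2 * np.pi * np.fft.fftfreq(N, d=dR)        # conjugate momentum grid

def split_operator_step(psi_g, psi_u, Vg, Vu, W):
    """Advance (psi_g, psi_u) by dt; W = -F(t)*d(R) is the dipole coupling."""
    tau = dt / 2
    mean, diff = 0.5 * (Vg + Vu), 0.5 * (Vg - Vu)
    Om = np.sqrt(diff**2 + W**2)
    c = np.cos(Om * tau)
    s = np.where(Om > 1e-12, np.sin(Om * tau) / np.maximum(Om, 1e-12), tau)  # sinc limit
    ph = np.exp(-1j * mean * tau)

    def half_potential(a, b):
        # exact exponential of the 2x2 potential/coupling matrix over tau
        return (ph * ((c - 1j * s * diff) * a - 1j * s * W * b),
                ph * (-1j * s * W * a + (c + 1j * s * diff) * b))

    g, u = half_potential(psi_g, psi_u)
    Tk = np.exp(-1j * k**2 * dt / (2 * mu))     # full kinetic step in momentum space
    g, u = np.fft.ifft(Tk * np.fft.fft(g)), np.fft.ifft(Tk * np.fft.fft(u))
    return half_potential(g, u)
```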
In our experiment, the H2+ system is created starting from the H2 neutral through strong-field ionization. The initial state of the wave function in the ionic simulations assumes a vertical transition from the ground electronic (1Σg) and ground vibrational state of the H2 neutral to the ground electronic state of the ion. The ground vibrational state on the 1Σg surface of the neutral is modeled as a Morse oscillator state, using Morse parameters derived from Herzberg61. The rotational degree of freedom was initialized to a thermal rotational distribution, with the temperature chosen low enough that only the rotational ground state is populated. The initial distribution of the molecular axis with respect to the laser polarization is isotropic, closely reflecting the experimental conditions.
The laser field used in the calculations presented in Fig. 3a, b can be expressed as
$$\mathbf{F}(t) = F_{\mathrm{IR}}(t + \Delta t)\,\hat{\mathbf{z}} + \left( F_{\mathrm{VIS}}(t) + F_{\mathrm{ped}}(t) \right)\hat{\mathbf{x}}, \tag{2}$$
where Δt is the time delay between the visible and mid-IR pulses, and each field FA(t) is given by an expression of the form (in atomic units)
$$F_A(t) = \sqrt{I_A}\,\exp\!\left( -2\ln 2\left( \frac{t}{\tau _A} \right)^{\!2} \right)\cos\left( \omega _A t + \varphi _A \right), \tag{3}$$
where \(\varphi _A\) is the CEP of each pulse.
The laser field consists of a mid-IR pulse (λIR = 2300 nm, τIR = 45 fs, IIR = 30 TW/cm²), an ionizing few-cycle visible pulse (λVIS = 730 nm, τVIS = 5 fs, IVIS = 300 TW/cm²), and a visible pulse pedestal (λped = 730 nm, τped = 45 fs, Iped = 10 TW/cm²). The calculations are started at t = 0, i.e., in the center of the visible pulse, and performed for various values of \({\mathrm{\Delta }}t\) and, for each \({\mathrm{\Delta }}t\), \(\varphi _{{\mathrm{VIS}}} = \varphi _{{\mathrm{ped}}} = n\pi\) (n = 0, 1) and \(\varphi _{{\mathrm{IR}}} \equiv 0\).
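For concreteness, here is a minimal sketch of the composite field of Eqs. (2) and (3) with the pulse parameters quoted above. The units here are illustrative (time in fs, field in arbitrary units proportional to the square root of intensity); the actual calculation uses atomic units:

```python
# Illustrative construction of the two-color field of Eqs. (2)-(3).
import numpy as np

def pulse(t, I, tau_fwhm, lam_nm, phi):
    """Gaussian-envelope field F_A(t); tau_fwhm is the intensity FWHM in fs."""
    omega = 2 * np.pi * 299.792458 / lam_nm          # angular frequency, rad/fs
    envelope = np.exp(-2 * np.log(2) * (t / tau_fwhm) ** 2)
    return np.sqrt(I) * envelope * np.cos(omega * t + phi)

def two_color_field(t, delay, phi_vis=0.0, phi_ir=0.0):
    """Return (F_x, F_z): few-cycle visible pulse plus pedestal along x, mid-IR along z."""
    Fz = pulse(t + delay, 30.0, 45.0, 2300.0, phi_ir)
    Fx = pulse(t, 300.0, 5.0, 730.0, phi_vis) + pulse(t, 10.0, 45.0, 730.0, phi_vis)
    return Fx, Fz

t = np.linspace(-100.0, 100.0, 4001)                 # fs
Fx, Fz = two_color_field(t, delay=0.0)
```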
Floquet states
For each molecular alignment angle θ, the Floquet states62,63,64 are calculated for a field
$$F(t,\theta ) = \sqrt{I_{\mathrm{IR}}}\cos(\omega t)\sin\theta + \sqrt{I_{\mathrm{ped}}}\cos(3\omega t + \phi )\cos\theta , \tag{4}$$
where \(\phi\) is the relative phase of the two fields. The potential energy landscape presented in Fig. 3d is for the relative phase \(\phi = 0\). Here, the frequency of the visible pulse is approximated as ωVIS = 3ωIR to obtain the required periodicity.
At each point along R, the Floquet states were constructed as follows. First, the one-period propagator U(t,t + T; R), where T = 2π/ω is the period of the 2280 nm laser field, was constructed numerically using
$$U(t,t+T;R) = e^{-iH_e(R,t_{N-1})\Delta _t}\,e^{-iH_e(R,t_{N-2})\Delta _t}\cdots e^{-iH_e(R,t_1)\Delta _t}\,e^{-iH_e(R,t_0)\Delta _t}, \tag{5}$$
where the period \(T\) has been split into N = 1024 time steps of duration \(\Delta _t = T/N\), with the intermediate times given by \(t_n = t + n\Delta _t\), and the purely electronic Hamiltonian \(H_e(t)\) is given by
$$H_e = \begin{bmatrix} V_g(R) & -\mathbf{F}(t) \cdot \mathbf{d}(R) \\ -\mathbf{F}(t) \cdot \mathbf{d}(R) & V_u(R) \end{bmatrix}. \tag{6}$$
The Floquet states \(|S_\alpha (R,t)\rangle\) are the eigenstates of \(U(t,t+T;R)\),
$$U(t,t+T;R)\,|S_\alpha (R,t)\rangle = e^{-i\varepsilon _\alpha (R)T}\,|S_\alpha (R,t)\rangle , \tag{7}$$
where the \(\varepsilon _\alpha (R)\) are the quasi-energies of the Floquet states \(|S_\alpha (R,t)\rangle\). The Floquet states and quasi-energies are found directly by diagonalizing the 2 × 2 \(U(t,t+T;R)\) matrix for each R. The Floquet states \(|S_\alpha (R,t)\rangle\) are periodic with the period of the laser field, and exhibit a sub-cycle time dependence whenever multiphoton couplings are active. Consequently, the associated potential energy surfaces will also, in general, exhibit a sub-cycle time dependence. The sub-cycle time dependence can be expanded as a Fourier series to yield a set of time-independent potentials that characterize the system
$$e^{-i\varepsilon _\alpha (R)t}\,|S_\alpha (R,t)\rangle = e^{-i\varepsilon _\alpha (R)t}\sum_{n=-\infty }^{\infty } |s_\alpha ^n(R)\rangle \,e^{-in\omega t} = \sum_{n=-\infty }^{\infty } |s_\alpha ^n(R)\rangle \,e^{-i\left( \varepsilon _\alpha (R) + n\omega \right)t}. \tag{8}$$
The ladder of Floquet states is formed by the energies of the Fourier expansion, where the \(\varepsilon _\alpha \left( R \right)\) get repeated and shifted by nω, forming an infinite ladder of time-independent potentials. These quasi-energies \((\varepsilon _\alpha \left( R \right) + n\omega )\) are what is referred to as LIPs in the main text.
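To make the recipe concrete, here is a minimal sketch (not the authors' code) of the Floquet construction at a single fixed internuclear distance; the potential values, transition dipole, field amplitudes, and frequency are illustrative placeholders:

```python
# Build the one-period propagator U(0, T) of the 2x2 Hamiltonian H_e(t) of
# Eq. (6) by time slicing (Eq. (5)), then diagonalize it for quasi-energies.
import numpy as np
from scipy.linalg import expm

Vg, Vu, d = -0.6, -0.1, 1.2             # illustrative potentials and dipole at one R (a.u.)
w = 0.02                                 # mid-IR angular frequency (a.u.)
F_ir, F_ped, phi = 0.03, 0.017, 0.0      # field amplitudes and relative phase
theta = np.deg2rad(30.0)                 # alignment angle

T = 2 * np.pi / w
N = 1024
Dt = T / N
U = np.eye(2, dtype=complex)
for n in range(N):                       # U = exp(-iH(t_{N-1})Dt) ... exp(-iH(t_0)Dt)
    t = n * Dt
    F = F_ir * np.cos(w * t) * np.sin(theta) + F_ped * np.cos(3 * w * t + phi) * np.cos(theta)
    He = np.array([[Vg, -F * d], [-F * d, Vu]])
    U = expm(-1j * He * Dt) @ U          # later slices act from the left

lam = np.linalg.eigvals(U)
quasi = np.sort((1j * np.log(lam) / T).real)   # epsilon_alpha, defined modulo w
print("quasi-energies (a.u.):", quasi)
```

Sweeping theta and R in this construction traces out angle-dependent quasi-energy surfaces of the kind shown in Fig. 3d.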
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code availability
The computer codes used for TDSE simulations and Floquet calculations are available from the corresponding author upon reasonable request.
Longuet-Higgins, H. C. & Herzberg, G. Intersection of potential energy surfaces in polyatomic molecules. Discuss. Faraday Soc. 35, 77–82 (1963).
Domcke, W., Yarkony, D. R. & Köppel, H. Conical Intersections: Theory, Computation and Experiment, Vol. 17 (World Scientific, 2011).
Domcke, W. & Yarkony, D. R. Role of conical intersections in molecular spectroscopy and photoinduced chemical dynamics. Annu. Rev. Phys. Chem. 63, 325–352 (2012).
Levine, B. G. & Martínez, T. J. Isomerization through conical intersections. Annu. Rev. Phys. Chem. 58, 613–634 (2007).
Polli, D. et al. Conical intersection dynamics of the primary photoisomerization event in vision. Nature 467, 440 (2010).
Schultz, T. et al. Efficient deactivation of a model base pair via excited-state hydrogen transfer. Science 306, 1765–1768 (2004).
Kang, H., Lee, K. T., Jung, B., Ko, Y. J. & Kim, S. K. Intrinsic lifetimes of the excited state of DNA and RNA bases. J. Am. Chem. Soc. 124, 12958–12959 (2002).
Demekhin, P. V. & Cederbaum, L. S. Light-induced conical intersections in polyatomic molecules: general theory, strategies of exploitation, and application. J. Chem. Phys. 139, 154314 (2013).
Moiseyev, N., Šindelka, M. & Cederbaum, L. S. Laser-induced conical intersections in molecular optical lattices. J. Phys. B At. Mol. Opt. Phys. 41, 221001 (2008).
Šindelka, M., Moiseyev, N. & Cederbaum, L. S. Strong impact of light-induced conical intersections on the spectrum of diatomic molecules. J. Phys. B At. Mol. Opt. Phys. 44, 045603 (2011).
Halász, G. J., Vibók, Á., Moiseyev, N. & Cederbaum, L. S. Light-induced conical intersections for short and long laser pulses: Floquet and rotating wave approximations versus numerical exact results. J. Phys. B At. Mol. Opt. Phys. 45, 135101 (2012).
Sändig, K., Figger, H. & Hänsch, T. W. Dissociation dynamics of H2 + in intense laser fields: investigation of photofragments from single vibrational levels. Phys. Rev. Lett. 85, 4876–4879 (2000).
Pavičić, D., Kiess, A., Hänsch, T. W. & Figger, H. Intense-laser-field ionization of the hydrogen molecular ions H2 + and D2 + at critical internuclear distances. Phys. Rev. Lett. 94, 163002 (2005).
Ben-Itzhak, I. et al. Dissociation and ionization of H2 + by ultrashort intense laser pulses probed by coincidence 3D momentum imaging. Phys. Rev. Lett. 95, 73002 (2005).
Wang, P. Q. et al. Highlighting the angular dependence of bond softening and bond hardening of H2 + in an ultrashort intense laser pulse. J. Phys. B At. Mol. Opt. Phys. 38, L251–L257 (2005).
Wang, P. Q. et al. Dissociation of H2 + in intense femtosecond laser fields studied by coincidence three-dimensional momentum imaging. Phys. Rev. A 74, 43411 (2006).
Aubanel, E. E., Gauthier, J.-M. & Bandrauk, A. D. Molecular stabilization and angular distribution in photodissociation of H2 + in intense laser fields. Phys. Rev. A 48, 2145–2152 (1993).
Numico, R., Keller, A. & Atabek, O. Intense-laser-induced alignment in angularly resolved photofragment distributions of H2 +. Phys. Rev. A 60, 406–413 (1999).
Halász, G. J., Vibók, Á., Meyer, H.-D. & Cederbaum, L. S. Effect of light-induced conical intersection on the photodissociation dynamics of the D2 + molecule. J. Phys. Chem. A 117, 8528–8535 (2013).
Halász, G. J., Vibók, Á., Moiseyev, N. & Cederbaum, L. S. Nuclear-wave-packet quantum interference in the intense laser dissociation of the D2 + molecule. Phys. Rev. A 88, 43413 (2013).
Halász, G. J., Vibók, Á. & Cederbaum, L. S. Direct signature of light-induced conical intersections in diatomics. J. Phys. Chem. Lett. 6, 348–354 (2015).
Bouakline, F. Unambiguous signature of the berry phase in intense laser dissociation of diatomic molecules. J. Phys. Chem. Lett. 9, 2271–2277 (2018).
Natan, A. et al. Observation of quantum interferences via light-induced conical intersections in diatomic molecules. Phys. Rev. Lett. 116, 143004 (2016).
Bandrauk, A. D. & Sink, M. L. Photodissociation in intense laser fields: predissociation analogy. J. Chem. Phys. 74, 1110–1117 (1981).
Bandrauk, A. D. in Frontiers of Chemical Dynamics 131–150 (Springer, 1995).
Wunderlich, C., Kobler, E., Figger, H. & Hänsch, T. W. Light-induced molecular potentials. Phys. Rev. Lett. 78, 2333–2336 (1997).
Niikura, H. et al. Probing molecular dynamics with attosecond resolution using correlated wave packet pairs. Nature 421, 826–829 (2003).
Sussman, B. J., Townsend, D., Ivanov, M. Y. & Stolow, A. Dynamic stark control of photochemical processes. Science 314, 278–281 (2006).
Corrales, M. E. et al. Control of ultrafast molecular photodissociation by laser-field-induced potentials. Nat. Chem. 6, 785–790 (2014).
McCann, J. F. & Bandrauk, A. D. Two-color photodissociation of the lithium molecule: anomalous angular distributions of fragments at high laser intensities. Phys. Rev. A 42, 2806–2816 (1990).
Ibrahim, H., Lefebvre, C., Bandrauk, A. D., Staudte, A. & Légaré, F. H2: the benchmark molecule for ultrafast science and technologies. J. Phys. B At. Mol. Opt. Phys. 51, 42002 (2018).
Bucksbaum, P. H., Zavriyev, A., Muller, H. G. & Schumacher, D. W. Softening of the H2+ molecular bond in intense laser fields. Phys. Rev. Lett. 64, 1931–1934 (1990).
Giusti-Suzor, A., He, X., Atabek, O. & Mies, F. H. Above-threshold dissociation of H2 + in intense laser fields. Phys. Rev. Lett. 64, 515–518 (1990).
Zavriyev, A., Bucksbaum, P. H., Muller, H. G. & Schumacher, D. W. Ionization and dissociation of H2 in intense laser fields at 1.064 µm, 532 nm, and 355 nm. Phys. Rev. A 42, 5500–5513 (1990).
Niikura, H., Corkum, P. B. & Villeneuve, D. M. Controlling vibrational wave packet motion with intense modulated laser fields. Phys. Rev. Lett. 90, 203601 (2003).
Rudenko, A. et al. Fragmentation dynamics of molecular hydrogen in strong ultrashort laser pulses. J. Phys. B At. Mol. Opt. Phys. 38, 487–501 (2005).
Xu, H. et al. Coherent control of the dissociation probability of H2 + in ω-3ω two-color fields. Phys. Rev. A 93, 63416 (2016).
Li, H. et al. Intensity dependence of the attosecond control of the dissociative ionization of D2. J. Phys. B At. Mol. Opt. Phys. 47, 124020 (2014).
Bandrauk, A. D., Chelkowski, S. & Nguyen, H. S. Attosecond localization of electrons in molecules. Int. J. Quant. Chem. 100, 834–844 (2004).
Zuo, T. & Bandrauk, A. D. Charge-resonance-enhanced ionization of diatomic molecular ions by intense lasers. Phys. Rev. A 52, 2511–2514 (1995).
Xu, H., He, F., Kielpinski, D., Sang, R. T. & Litvinyuk, I. V. Experimental observation of the elusive double-peak structure in R-dependent strong-field ionization rate of H2 +. Sci. Rep. 5, 13527 (2015).
Martín, F. et al. Single photon-induced symmetry breaking of H2 dissociation. Science 315, 629–633 (2007).
Li, H. et al. Sub-cycle directional control of the dissociative ionization of H2 in tailored femtosecond laser fields. J. Phys. B At. Mol. Opt. Phys. 50, 172001 (2017).
Sansone, G. et al. Electron localization following attosecond molecular photoionization. Nature 465, 763–766 (2010).
Charron, E., Giusti-Suzor, A. & Mies, F. H. Coherent control of photodissociation in intense laser fields. J. Chem. Phys. 103, 7359–7373 (1995).
Gong, X. et al. Two-dimensional directional proton emission in dissociative ionization of H2. Phys. Rev. Lett. 113, 203001 (2014).
Roudnev, V., Esry, B. D. & Ben-Itzhak, I. Controlling HD+ and H2 + dissociation with the carrier-envelope phase difference of an intense ultrashort laser pulse. Phys. Rev. Lett. 93, 163601 (2004).
Kling, M. F. et al. Control of electron localization in molecular dissociation. Science 312, 246–248 (2006).
Kling, N. G. et al. Carrier-envelope phase control over pathway interference in strong-field dissociation of H2 +. Phys. Rev. Lett. 111, 163004 (2013).
Rathje, T. et al. Coherent control at its most fundamental: carrier-envelope-phase-dependent electron localization in photodissociation of a H2 + molecular ion beam target. Phys. Rev. Lett. 111, 093002 (2013).
Znakovskaya, I. et al. Subcycle controlled charge-directed reactivity with few-cycle midinfrared pulses. Phys. Rev. Lett. 108, 063002 (2012).
Staudte, A. et al. Angular tunneling ionization probability of fixed-in-space H2 molecules in intense laser pulses. Phys. Rev. Lett. 102, 033004 (2009).
Roudnev, V. & Esry, B. D. HD+ in a short strong laser pulse: practical consideration of the observability of carrier-envelope phase effects. Phys. Rev. A 76, 23403 (2007).
Kim, K. T. et al. Petahertz optical oscilloscope. Nat. Photonics 7, 958 (2013).
Kübel, M. et al. Streak camera for strong-field ionization. Phys. Rev. Lett. 119, 183201 (2017).
Légaré, F. et al. Imaging the time-dependent structure of a molecule as it undergoes dynamics. Phys. Rev. A 72, 52717 (2005).
Kelkensberg, F. et al. Molecular dissociative ionization and wave-packet dynamics studied using two-color XUV and IR pump-probe spectroscopy. Phys. Rev. Lett. 103, 123005 (2009).
Ullrich, J. et al. Recoil-ion and electron momentum spectroscopy: reaction-microscopes. Rep. Prog. Phys. 66, 1463–1545 (2003).
Bunkin, F. V. & Tugov, I. I. Multiphoton processes in homopolar diatomic molecules. Phys. Rev. A 8, 601–612 (1973).
Herzberg, G. Molecular Spectra and Molecular Structure: Volume I - Spectra of Diatomic Molecules (Krieger Publishing, 1989).
Hänggi, P. in Quantum Transport and Dissipation, 249–286 (Wiley-VCH, 1998).
Chu, S. I. & Telnov, D. A. Beyond the Floquet theorem: generalized Floquet formalisms and quasienergy methods for atomic and molecular multiphoton processes in intense laser fields. Phys. Rep. 390, 1–131 (2004).
ADS MathSciNet CAS Article Google Scholar
Bayfield, J. E. Quantum Evolution: An Introduction to Time-Dependent Quantum Mechanics (Wiley-VCH, 1999).
The authors thank D. Crane, R. Kroeker, and B. Avery for technical assistance. We acknowledge fruitful discussions with F. Bouakline, M. Richter, A. M. Sayler, G. G. Paulus, M. F. Kling, and B. Bergues. This project has received funding from the EU's Horizon2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 657544. Financial support from the National Science and Engineering Research Council Discovery Grant No. 419092-2013-RGPIN, and from the U.S. Air Force Office of Scientific Research (Grant No. FA9550-16-1-0109) is gratefully acknowledged.
Joint Attosecond Science Laboratory, National Research Council and University of Ottawa, 100 Sussex Drive, Ottawa, ON, K1A 0R6, Canada
M. Kübel, M. Spanner, Z. Dube, A. Yu. Naumov, P. B. Corkum, D. M. Villeneuve & A. Staudte
Department of Physics, Ludwig-Maximilians-Universität Munich, Am Coulombwall 1, D-85748, Garching, Germany
M. Kübel
Institute for Optics and Quantum Electronics, University of Jena, Max-Wien-Platz 1, D-07743, Jena, Germany
Laboratoire de Chimie Théoretique, Faculté des Sciences, Université de Sherbrooke, Sherbrooke, QC, J1K 2R1, Canada
S. Chelkowski & A. D. Bandrauk
Max-Born-Institute, Max-Born-Straße 2A, D-12489, Berlin, Germany
M. J. J. Vrakking
M. Spanner
Z. Dube
A. Yu. Naumov
S. Chelkowski
A. D. Bandrauk
P. B. Corkum
D. M. Villeneuve
A. Staudte
M.K., Z.D., and A.S. conceived and conducted the experiment, and analyzed the results. M.S., M.K., S.C., M.J.J.V., and D.M.V. performed simulations, and interpreted the data with P.B.C. and A.S. All authors discussed the results and contributed to the final manuscript.
Correspondence to M. Kübel or A. Staudte.
Peer review information Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Kübel, M., Spanner, M., Dube, Z. et al. Probing multiphoton light-induced molecular potentials. Nat Commun 11, 2596 (2020). https://doi.org/10.1038/s41467-020-16422-2
Quantum light-induced nonadiabatic phenomena in the absorption spectrum of formaldehyde: Full- and reduced-dimensionality studies
Csaba Fábri
, Benjamin Lasorne
, Gábor J. Halász
, Lorenz S. Cederbaum
& Ágnes Vibók
The Journal of Chemical Physics (2020)
Clocking Enhanced Ionization of Hydrogen Molecules with Rotational Wave Packets
Yonghao Mi
, Peng Peng
, Nicolas Camus
, Xufei Sun
, Patrick Fross
, Denhi Martinez
, Zack Dube
, P. B. Corkum
, D. M. Villeneuve
, André Staudte
, Robert Moshammer
& Thomas Pfeifer
Physical Review Letters (2020)
Editors' Highlights
Nature Communications ISSN 2041-1723 (online)
|
CommonCrawl
|
Skip to main content Skip to sections
September 2017, Volume 20, Issue 3, pp 1977–1994
OnlineElastMan: self-trained proactive elasticity manager for cloud-based storage services
Ying Liu
Daharewa Gureya
Ahmad Al-Shishtawy
Vladimir Vlassov
The pay-as-you-go pricing model and the illusion of unlimited resources in the Cloud motivate the elastic provisioning of services. Elastic provisioning allocates/de-allocates resources dynamically in response to changes in the workload, minimizing the service provisioning cost while maintaining the desired service level objectives (SLOs). Model-predictive control is often used to build such elasticity controllers, but the controllers need to be trained, either online or offline, before they can make accurate scaling decisions. The training process involves a significant amount of tedious work as well as some expertise, especially when the model has many dimensions and the training granularity is fine, both of which have proven essential for building an accurate elasticity controller. In this paper, we present OnlineElastMan, a self-trained proactive elasticity manager for cloud-based storage services. It automatically evolves itself while serving the workload. Experiments using OnlineElastMan with Cassandra indicate that OnlineElastMan continuously improves its provisioning accuracy, i.e., it minimizes provisioning cost and SLO violations under various workload patterns.
Keywords: Elasticity controller · Cloud storage · Workload prediction · SLO · Online training · Time series analysis
1 Introduction
Hosting services in the Cloud is becoming more and more popular due to a set of desirable properties provided by the platform, such as low application setup cost, professional platform maintenance, and elastic resource provisioning. Elastically provisioned services are able to use platform resources on demand. Specifically, VMs are spawned when they are needed to handle an increasing workload and removed when the workload drops. Since users only pay for the resources that are used to serve their demand, elastic provisioning saves the cost of hosting services in the Cloud.
On the other hand, services are usually provisioned to match a certain level of quality of service (QoS), which is usually defined as a set of service level objectives (SLOs) in the Cloud context. Thus, there are two contradictory goals to be achieved while services are elastically provisioned: saving the provisioning cost and meeting the SLO.
Elastic provisioning is usually conducted automatically by an elasticity controller, which monitors the system status and makes corresponding decisions to add or remove resources. An elasticity controller needs to be trained, either online or offline, to make it smart enough to make such decisions. Generally, the training process allows the controller to build up a model that correlates monitored parameters, such as CPU utilization or incoming workload, to controlled parameters, i.e., the SLO, which could be, for example, percentile request latency. The accuracy of the model directly affects the accuracy of the elasticity controller, which in turn determines the service provisioning cost and the commitment to the SLO.
It is non-trivial to build an accurate and efficient elasticity controller. Recent works have focused on improving the accuracy of elasticity controllers by building different control models with various monitored/controlled metrics [1, 2, 3, 4, 5, 6, 7, 8]. However, none of these works considers the practical usefulness of an elasticity controller, which involves the following challenges. First, an elasticity controller usually needs to be tailored to a specific application. Sometimes this requires complicated instrumentation of the provisioned application, or the metrics used to build the control model cannot be obtained at all. Furthermore, even with all the metrics available, training the control model requires a tremendous amount of tedious work. A general training procedure involves redeploying and reconfiguring the application, and collecting and analyzing data by running various workloads against various configurations of the application. Second, the hosting environment of the provisioned application may change due to unmonitored factors, for example, platform interference or background maintenance tasks. Even a well-trained control model may fail to adjust to these factors, leading to inaccurate control decisions. Third, it is always too late for the elasticity controller to react to a workload increase when the workload is already saturating the application. Thus, we argue that workload prediction is a compulsory element of an elasticity controller.
In this work, we propose OnlineElastMan, a generic elasticity controller for distributed storage systems. It surpasses its peers in its practical aspects, which include straightforwardly obtainable control metrics, automatically online-trained control models, and an embedded generic workload prediction module. These make OnlineElastMan an "out-of-the-box" elasticity controller, which can be deployed and adopted by different storage systems without complicated tailoring/configuring efforts. Specifically, OnlineElastMan requires monitoring of only the two most generic metrics, i.e., incoming workload and service latency, which are obtainable from most storage systems without complicated instrumentation. Using the monitored metrics, OnlineElastMan analyzes the workload composition in depth, including read/write request intensity and the data size of the requested items, which define the dimensions of a control model. OnlineElastMan can easily plug in more dimensions of interest if needed. After fixing the dimensions, a multi-dimensional control model is automatically built and trained online while the storage system is serving requests. After a sufficient warm-up of the control model, OnlineElastMan is able to issue accurate control decisions based on the incoming workload. Furthermore, the control model continuously improves itself online to adjust to unknown/unmodeled events in the operating environment. Additionally, a generic workload prediction module is integrated to facilitate the decision making of OnlineElastMan. It allows OnlineElastMan to scale the storage system well in advance to prevent SLO violations caused by workload increases and scaling overhead [7]. Specifically, the prediction module aggregates multiple prediction algorithms and chooses the most appropriate one based on the current workload pattern using a weighted majority selection algorithm. The contributions of the paper are as follows.
Implementation of an "out-of-the-box" generic elasticity controller framework, which is easily applicable to most distributed storage systems.
Integration of an online self-trained control model into OnlineElastMan, which avoids repetitive and tedious system reconfiguration and model training.
Proposal of a multi-dimensional control model based on workload characteristics, which proves to provide better control accuracy.
Realization of a generic workload prediction module in OnlineElastMan, which adjusts to multiple workload patterns.
Open-source implementation of the OnlineElastMan framework.
2 Problem statement
There is a large body of work on elasticity controllers for the Cloud [2, 3, 4, 5, 6, 7, 8]. Most of it focuses on improving the control accuracy of the controller by introducing novel control techniques and models. However, none of it tackles the practical issues regarding the deployment and application of the controllers. We examine the usefulness of an elasticity controller when deploying it in a Cloud environment. Specifically, we investigate the configuration steps required before an elasticity controller starts provisioning services. Typically, setting up an elasticity controller involves the following steps.
Acquire metrics for the elasticity controller from the provisioned application or the host platform.
Deploy the provisioned application in order to construct a training case for the elasticity controller.
Configure the provisioned application according to the deployment.
Configure and run a specific synthesized workload against the application.
Collect training data from the training case and train the control model accordingly.
Repeat steps 2–5 until the control model is fully trained, before serving the real workload.
It is intuitively clear that the more metrics a control model considers, the more accurate it will be. However, increasing the metric dimensions of a control model incurs significant overhead during the training phase. Specifically, training a control model with only 3 dimensions results in 27 \((3^3)\) training cases even when only 3 trials/runs are conducted per dimension. This means that steps 2–5 need to be repeated 27 times to train the control model. Obviously, it is extremely time consuming to train a control model manually, especially when the model has many dimensions, which is needed for higher control accuracy.
OnlineElastMan alleviates the training process with online training. Specifically, the model automatically trains and evolves itself while serving the workload. After a short warm-up period, the controller is able to provision the underlying application accurately. Thus, it is no longer necessary to manually and repetitively reconfigure the system in order to train the model. Furthermore, in order to make OnlineElastMan as general as possible, its input metrics are easily obtainable from the application. Specifically, it directly uses the information in the incoming workload, which needs no application-specific instrumentation, and the service latency, which is the most accurate and direct reflection of QoS and can be easily sampled from system entry points or proxies.
On the other hand, previous works [2, 7] have demonstrated that, in order to keep the SLO commitment, a storage system needs to scale up in advance to cope with a workload increase, since scaling a storage system involves non-negligible overhead. Thus, we have made the design choice to integrate a workload prediction module into OnlineElastMan. Again, to keep it as general as possible, the workload prediction module is able to produce accurate workload predictions for various workload patterns. Specifically, it integrates several prediction algorithms designed to cope with different time series patterns. The most appropriate prediction algorithm is chosen online using a weighted majority selection algorithm.
3 Background
In this section, we lay out the necessary background for the paper. This includes Cloud computing, elastic services, stateful services, feedback control, and feedforward control.
3.1 Cloud computing and elastic services
Cloud computing, with its pay-as-you-go pricing model, provides an attractive solution to host the ever-growing number of web applications [9]. This is mainly because it is difficult, especially for startups, to predict the future load that might be imposed on the application and thus the amount of resources needed to serve that load. Another reason is that the initial investment, in the form of buying servers, is avoided in the Cloud pricing model.
To leverage the Cloud pricing model and to efficiently handle the dynamic workload, Cloud services are designed to be elastic. An elastic service is able to scale horizontally at runtime, by provisioning additional resources, without disrupting the service. An elastic service can be scaled up in the case of increasing workload by adding resources in order to meet SLOs. In the case of decreasing load, the service can be scaled down by removing resources and thus reducing the cost without violating the SLOs.
3.2 Stateful services
Modern applications, such as social networks, wikis, and blogs, are data-centric and require frequent data access [10]. This poses new challenges for the data-tier of multi-tier applications, because the performance of the data-tier is typically governed by strict SLOs [11]. With the rapid increase in the number of users, the poor scalability of a typical data-tier with ACID [12] properties has limited the scalability of web applications. This has led to the development of NoSQL databases with relaxed consistency guarantees and simpler operations in order to achieve horizontal scalability and high availability. Examples of NoSQL data-stores include, among others, key-value stores such as Voldemort [13], Dynamo [14], and Cassandra [15]. In this work, we focus on key-value stores, which typically provide simple key-value pair storage with eventual consistency guarantees. The simplified data and consistency models of key-value stores enable them to efficiently scale horizontally by adding more servers and thus serve more clients.
Another problem facing web applications is that a certain service, feature, or topic might suddenly become popular resulting in a workload spike [16, 17]. The fact that storage is a stateful service complicates the problem since only a particular subset of servers host data of the popular item. For stateful services, scaling is usually combined with a rebalancing step necessary to redistribute the data among the new set of servers.
These challenges have led to the need for automated management of the data-tier, to make it capable of quickly and efficiently responding to changes in the workload in order to meet the required SLOs of the storage service.
Fig. 1 Multi-tier web application with elasticity controller deployed in a cloud environment
3.3 Feedback versus feedforward control
In computing systems, a controller [18] or an autonomic manager [19] is a software component that regulates the nonfunctional properties (performance metrics) of a target system. Nonfunctional properties are properties of the system such as the response time or CPU utilization. From the controller's perspective, these performance metrics are the system output. Regulation is achieved by monitoring the target system through a monitoring interface and adapting the system's configuration, such as the number of servers, accordingly through a control interface (control input). Controllers can be classified as feedback or feedforward controllers depending on what is being monitored.
In feedback control, the system's output (e.g., response time) is monitored. The controller calculates the control error by comparing the current system output to a desired value set by the system administrators. Depending on the magnitude and sign of the control error, the controller changes the control input (e.g., the number of servers to add or remove) in order to reduce the control error. The main advantage of feedback control is that the controller can tolerate noise and disturbance, such as unexpected changes in the behaviour of the system or its operating environment. Disadvantages include oscillation, overshoot, and possible instability if the controller is not properly designed. Due to the nonlinearity of most systems, feedback controllers are approximated around linear regions called operating regions. Feedback controllers work properly only in the operating region they were designed for.
In feedforward control, the system's output is not monitored. Instead, the feedforward controller relies on a model of the system, which is used to calculate the system's output based on the current system state. For example, given the current request rate and the number of servers, the system model is used to calculate the corresponding response time and to act accordingly to meet the desired response time. The advantages of feedforward control include being faster than feedback control in reaching the optimum point and avoiding oscillations and overshoot.
The major drawback of feedforward control is that it is sensitive to unexpected disturbances that are not accounted for (modelled) in the system model. Addressing this issue may result in a relatively complex system model, compared to feedback control, that tries to capture all possible states of the modelled system. Another approach is to apply online training that continuously adapts the system model in order to reflect changes in the physical system.
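To make the contrast concrete, the following minimal sketch illustrates feedforward sizing under an assumed linear capacity model; the function name and the per-server capacity figure are hypothetical, and real system models, such as the one built in Sect. 4.2, are considerably richer:

```python
import math

def feedforward_servers(request_rate, per_server_capacity):
    """Feedforward sizing: consult a system model (here, an assumed linear
    capacity model) to choose a server count for the incoming request rate,
    without measuring the resulting response time."""
    return math.ceil(request_rate / per_server_capacity)

# E.g., 5200 req/s against servers that each sustain 1200 req/s within the
# SLO yields ceil(5200 / 1200) = 5 servers.
print(feedforward_servers(5200, 1200))
```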
3.4 Target system
We are targeting multi-tier web applications (the left side of Fig. 1). We are focusing on managing the data-tier because of its major effect on the performance of web applications, which are mostly data centric [10]. For the data-tier, we assume horizontally scalable key-value stores due to their popularity in many large scale web applications such as Facebook and LinkedIn. A typical key-value store provides a simple put/get interface. This simplicity enables efficient partitioning of the data among multiple servers and thus to scale well to a large number of servers.
The minimum requirements to enable elasticity control of a key-value store are as follows. The store must provide a monitoring interface to monitor the workload and the latency of put/get operations. The store must also provide an actuation interface that allows horizontal scalability by adding or removing servers. As storage is a stateful service, actuation must be combined with a rebalance operation, which redistributes the data among the new set of servers. Many stores, such as Voldemort [13] and Cassandra [15], provide rebalancing tools.
We target applications running in the Cloud (right side of Fig. 1). We assume that each service instance runs on its own VM and that each physical machine hosts multiple VMs. The Cloud environment hosts multiple applications (not shown in the figure). Such an environment complicates the control problem, mainly because VMs compete for the shared resources. This environmental noise makes it difficult to model and predict the performance of VMs [20, 21].
3.5 Cassandra
We have chosen Cassandra as our targeted underlying distributed storage system. Cassandra [15] is open-sourced under the Apache licence. It is a highly available and scalable distributed storage system that stores column-structured data records and provides the following key features:
Distributed and decentralized architecture Cassandra is organized in a peer-to-peer fashion. Specifically, each node performs the same functionality in a Cassandra cluster. However, each node manages a different namespace, which is decided by the hash function of the DHT. Compared to a master-slave design, the design of Cassandra avoids a single point of failure and maximizes scalability.
Horizontal scalability The peer-to-peer structure enables Cassandra to scale linearly. The consistent hashing implemented in Cassandra allows it to swiftly and efficiently locate a queried data record. Virtual node techniques are applied to balance the load on each Cassandra node.
Tunable data consistency level Cassandra provides tunable data consistency options, which are realized through different combinations of read and write APIs. These APIs use ALL, EACH_QUORUM, QUORUM, LOCAL_QUORUM, ONE, TWO, LOCAL_ONE, ANY, SERIAL, and LOCAL_SERIAL to describe read/write calls. For example, the ALL option means that Cassandra reads/writes all the replicas before returning to clients. The explanation of each read/write option can be easily found on the Apache Cassandra website.
An SQL-like query tool: CQL The common access interface of Cassandra is exposed through the Cassandra Query Language (CQL). CQL is similar to SQL in its semantics. For example, a query to get the record whose id equals 100 results in the same statement in both CQL and SQL (SELECT * FROM USER_TABLE WHERE ID = 100). This reduces the learning curve for developers to pick up CQL and get started with Cassandra.
4 OnlineElastMan design
In this section, we present the design of OnlineElastMan by explaining its three major components: workload prediction, online model training, and the elasticity controller. Figure 2 presents the architecture of OnlineElastMan. The components operate concurrently and communicate by message passing. Briefly, the workload prediction module takes the current workload as input and predicts the workload for the near future (the next control period). The online model training module updates the current model by mapping and analyzing the monitored workload and the performance of the system. Then, the elasticity controller takes the predicted workload, consults the updated performance model, and issues scaling commands by calling the Cloud API to add or remove servers for the underlying storage system.
Fig. 2 OnlineElastMan architecture
4.1 Monitored parameters
Auto-scaling requires a monitoring component that gathers various metrics reflecting the real-time status of the targeted system at an appropriate granularity (e.g., per second, per minute, per hour). It is essential to review the metrics that can be obtained from the target system and the metrics that best reflect the status of the system. To ease the configuration of the OnlineElastMan framework and to make it as general as possible, we treat the target storage system as a black box. OnlineElastMan adopts the most general and direct metrics that dominate the QoS of the targeted storage system. Specifically, we take the workload, which causes the variations in those system metrics, directly as the input. OnlineElastMan requires the workload monitoring to provide the read/write intensity and the size of the requested data at small intervals. The monitored data can be obtained by sampling the traffic passing through the entry points, e.g., proxies or load balancers, of the storage system. The percentile latency, which defines and directly reflects the QoS, is collected either from entry proxies or from the storage system itself, depending on the design and workflows of the storage system. The collected percentile latency is then used to adjust and improve the control decisions/models. In Sect. 5.1.1, we provide details on how we obtain these metrics in a distributed storage system such as Cassandra [15].
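As a small illustration of the latency side of this monitoring, the sketch below computes a percentile latency over one monitoring interval; the sample distribution is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical read-latency samples (in ms) gathered during one interval.
read_latencies = rng.lognormal(mean=2.5, sigma=0.6, size=10_000)

# The SLO metric used throughout the paper: a percentile request latency.
p99 = np.percentile(read_latencies, 99)
print(f"99th percentile read latency: {p99:.1f} ms")
```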
4.2 Multi-dimensional online model
One of the core components of OnlineElastMan is the multi-dimensional SML (statistical machine learning) model, which is learnt online. It correlates the input metrics (workload characteristics) with the SLO (percentile latency). The goal of the model is to keep the target system operating with the percentile latency varying only within a small controlled range. It is intuitively clear that with more provisioned resources (VMs), the system is able to respond to requests with reduced latency. On the other hand, we would also like to provision as few VMs as possible to save the provisioning cost. Thus, the controlled latency range should always be slightly under (just satisfying) the percentile latency requirement defined in the SLO to minimize the provisioning cost. We refer to this region as the optimal operational region (OOR), where the system is not heavily over-provisioned but still satisfies the SLO.
In order to keep the system operating in the OOR while the incoming workload is dynamic, an elasticity controller needs to react to workload changes by allocating/de-allocating VMs. Previous works [3, 4, 7, 22] design elasticity controllers based on an offline statistical model. OnlineElastMan builds the model online and continuously improves/updates it while serving requests. The online training feature frees system administrators from the tedious offline model training procedure, which includes repetitive system configuration, system deployment, model updates, etc., before putting the controller online. Additionally, the continuously evolving model in OnlineElastMan enables the system to cope with factors that are not considered in the model, e.g., platform interference [23, 24].
Specifically, the online model is built from the monitored parameters mentioned in Sect. 4.1. It classifies whether a VM is able to operate in the OOR under the current workload, which breaks down into the intensity of read and write requests and the requested data size. Ideally, a storage node hosted in a VM is either operating in compliance with the SLO or in violation of it. Therefore, for a given workload and VM flavor, the classifier model is a line that separates the plane into two regions, in which the SLO is either met or violated, as shown in Fig. 5. Different models need to be built for different VM flavors and different hosted storage systems. While building the model as depicted in Fig. 3, there are several configurable parameters that affect the accuracy of the model; a minimal sketch of the corresponding bookkeeping follows the list below.
Granularity of the model Since the collected data are real-valued, it is impossible to analyze every possible combination. We group the collected data at a pre-defined granularity, which partitions a two-dimensional plane into small squares or a three-dimensional space into small cubes. These squares and cubes are the groups in which data are accumulated and analyzed. The granularity of the data groups can be configured depending on the memory limits and the precision requirements of the model.
Historical data buffer For data collected and mapped to each group, we maintain a historical record of the most recent n reads and writes.
Confidence level The historical data in each group are analyzed to decide whether the workload corresponding to the data collected in this group violates the SLO or not. For example, a \(95\%\) confidence level implies that \(95\%\) of all sampled read/write percentile latencies satisfy the SLO.
Update frequency The model updates itself periodically at a fixed configurable rate. A higher update frequency allows the model to swiftly adapt to execution environment changes, while a lower update frequency makes the model more stable and tolerant of transient execution environment changes.
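The sketch below shows one plausible implementation of this bookkeeping; the granularity, buffer size, and confidence values are illustrative assumptions (the evaluation in Sect. 5 uses a 0.5 confidence level and a 5-min update period):

```python
from collections import defaultdict, deque

GRANULARITY = 100   # req/s per grid-cell edge (assumed value)
BUFFER_SIZE = 50    # most recent n samples kept per cell (assumed value)
CONFIDENCE = 0.5    # fraction of samples that must satisfy the SLO
SLO_MS = 35.0       # percentile latency target

# One bounded history buffer per (read-rate, write-rate, data-size) cell.
history = defaultdict(lambda: deque(maxlen=BUFFER_SIZE))

def record_sample(read_rate, write_rate, data_size_kb, p99_latency_ms):
    """Map a monitored sample onto its grid cell and append an SLO verdict."""
    cell = (int(read_rate // GRANULARITY),
            int(write_rate // GRANULARITY),
            int(data_size_kb))
    history[cell].append(p99_latency_ms <= SLO_MS)

def label(cell):
    """Label a cell +1 (SLO met) or -1 (violated) at the confidence level."""
    buf = history[cell]
    if not buf:
        return None  # no training data for this cell yet
    return 1 if sum(buf) / len(buf) >= CONFIDENCE else -1
```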
Fig. 3 Building the multi-dimensional online model
Fig. 4 Classification using SVM
4.2.1 SVM binary classifier
SVMs have become a popular classification technique in a wide range of application domains [25]. They provide good performance even for high-dimensional data and small training sets. Figure 4 shows the flow of a classification task using SVM. Briefly, we first train the model offline using systematically profiled data. Then, we put the model online and let it evolve by itself.
Here, we describe the SVM algorithm applied to build the model of OnlineElastMan. Each instance of the training set contains a class label and several features or observed variables. The goal of SVM is to produce a model based on the training set. More concretely, given a training set of instance-label pairs \((x_i, y_i), i = 1,\ldots ,l\), where \(x_i \in R^n\) and \(y_i \in \{1, -1\}^l\), SVM classification solves the following optimization problem:
$$\begin{aligned} \min _{w,b} \;\;\;\;\;\; \parallel w \parallel ^2 + C\sum _{i} \xi _{i} \end{aligned}$$
subject to:
$$\begin{aligned} y_i(w^Tx_i + b) \ge 1 - \xi _{i}, \;\;\;\; i=1,2,\ldots , l\nonumber \\ \xi _{i} \ge 0, \;\;\;\; i=1,2,\ldots , l \end{aligned}$$
After solving, the SVM classifier predicts 1 if \(w^Tx + b \ge 0\) and −1 otherwise. The decision boundary is defined by the following line:
$$\begin{aligned} w^Tx + b = 0 \end{aligned}$$
Generally, the predicted class can be calculated using the linear discriminant function:
$$\begin{aligned} f(x) = wx + b \end{aligned}$$
\(\mathbf x \) refers to a training pattern, \(\mathbf w \) to the weight vector, and b to the bias term. \(\mathbf wx \) denotes the dot product, i.e., the sum of the products of the vector components \(w_ix_i\). For example, in the case of a training set with three features (e.g., x, y, z), the discriminant function is simply:
$$\begin{aligned} f(x) = w_1x + w_2y + w_3z + b \end{aligned}$$
SVM provides the estimates for \(w_1, w_2, w_3\) and b after training.
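A minimal sketch of this training step is shown below, using scikit-learn's linear SVM as an assumed stand-in for the authors' implementation; the grid-cell feature vectors and labels are hypothetical:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical training set: rows are (read req/s, write req/s, data size KB)
# grid cells; labels are +1 (SLO satisfied) or -1 (SLO violated).
X = np.array([[800, 200, 2], [1500, 900, 8], [400, 100, 1], [1200, 1100, 16]])
y = np.array([1, -1, 1, -1])

clf = LinearSVC(C=1.0).fit(X, y)

# Estimates of the weight vector and bias of the discriminant f(x) = wx + b.
w, b = clf.coef_[0], clf.intercept_[0]

# Classify a new workload point: the SLO is met if f(x) >= 0.
point = np.array([1000, 500, 4])
print("SLO met" if w @ point + b >= 0 else "SLO violated")
```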
Fig. 5 Two-dimensional SVM performance model
Fig. 6 Three-dimensional SVM performance model taking into account request data size
Fig. 7 Top-angle view of the SVM model, where the model plane is projected onto a two-dimensional surface; the shaded area is caused by the varying data sizes
Given Eq. 3, the SML model is a line (Fig. 5) when only the read/write request intensity of the workload is monitored, or a plane (Fig. 6) when another dimension, i.e., data size, is modeled. Figure 7 is a two-dimensional projection of Fig. 6, which shows that different data sizes cause different separations of the two-dimensional model space. It indicates that data size plays an essential role in building an accurate control model for storage systems. The line/plane separation in the model represents the maximum workload that a VM can serve under the specified SLO (percentile latency).
Online model training Using the SVM training technique, the performance model is updated periodically, at the update frequency, using the data in the historical data buffer processed at the configured confidence level.
We believe that individual VMs can differ significantly in performance even when they are spawned with the same flavor. This can be caused by interference from the host platform [23, 24] or by background tasks, such as data migration [7]. Thus, an individual SML model is built for each VM participating in the system. These models automatically evolve and update continuously while the system is serving the workload. Periodically, the updated SML models of all VMs are sent to the elasticity controller module to make scaling decisions.
4.3 Elasticity controller
An elasticity controller makes scaling decisions in configurable control periods/intervals to prevent the system from oscillating. When making a scaling decision, the elasticity controller collects the aggregate input workload of all VMs (W) and the aggregate capacity of all VMs (C). The capacity of a VM is the maximum workload that it can handle under the SLO, which is obtained from the multi-dimensional SML model. The elasticity controller also observes the input workload (\(w_x\)) and capacity (\(c_x\)) of each VM individually to identify fine-grained SLO violations. Specifically, the capacity of each VM is calculated by intersecting the plane of its SML model with a line from the origin that points to the current workload representation, which is a point corresponding to the read and write workload intensity and the averaged data size. The capacity of the VM is the intersection point, which represents the capability to serve a workload with a specific read/write request intensity and a specific data size. If the current workload point lies beyond the capacity point in the model, the SLO is violated.
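This intersection has a closed form: for a model plane \(w^Tx + b = 0\) and a current workload point p, the ray \(t \cdot p\) from the origin meets the plane at \(t = -b / (w^Tp)\), so the SLO is violated exactly when \(t < 1\). A minimal sketch, with a hypothetical plane and workload point:

```python
import numpy as np

def capacity(w, b, workload):
    """Intersect the ray through `workload` with the plane w.x + b = 0.

    Returns (capacity_point, violated): the maximum workload of this
    composition the VM can serve, and whether the current workload
    already lies beyond it (t < 1 means an SLO violation)."""
    t = -b / (w @ workload)
    return t * workload, t < 1.0

# Hypothetical model: negative weights, so more load pushes f(x) below 0.
w = np.array([-0.01, -0.02, -0.5])
b = 30.0
cap, violated = capacity(w, b, np.array([1000.0, 500.0, 4.0]))
print(cap, violated)  # capacity ~1.36x the current workload, no violation
```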
The responsibility of an elasticity controller is to keep the provisioned system operating in compliance with the SLO. The strictest requirement is that each VM operates in compliance with the SLO, which can be denoted by \(\forall i \in N, w_i < c_i\), where N is the complete set of participating VMs. However, this is not trivial to achieve without over-provisioning the system, because of the imbalance of workload distribution. It is challenging to balance the workload of a storage system with respect to each VM. This is because storage systems are stateful, i.e., each VM is usually responsible for only a part of the total data stored. Thus, a specific request can only be served by the specific set of VMs that host the requested data. Given that different storage systems have different data distribution and load balancing strategies, and OnlineElastMan is designed to be a generic framework for provisioning storage systems elastically, we choose not to manage workload/data distribution for the provisioned systems. Furthermore, managing data distribution or rebalancing among VMs is orthogonal to the design goal of OnlineElastMan. Nevertheless, OnlineElastMan provides suggestions for workload distribution to each participating VM based on its capacity learnt from our SML models.
In order to tolerate load imbalance among VMs to some extent, OnlineElastMan introduces an optional tolerance factor \(\alpha \) when computing scaling decisions, to prevent too much over-provisioning. Specifically, a scaling-up decision is issued when the SLO violation \(c_x < w_x\) is observed on more than \(\alpha \) VMs, where \(\alpha \ge 0\). When \(\alpha = 0\), there is no tolerance for load imbalance. The number of VMs to add is calculated individually for each VM and aggregated globally: \(\frac{w_x - c_x}{c_x}\) VMs of the same flavor as \(c_x\) are expected to be added. Thus, \(\frac{w_x - c_x}{c_x} < 0\) indicates that a VM has more capacity than its incoming workload. We aggregate the values of \(\frac{w_x - c_x}{c_x}\) for each VM flavor and take the ceiling of the aggregated results. When the result for a specific VM flavor is negative, we do nothing, because we are in a scaling-up procedure. When the result for a specific VM flavor is positive, we add the corresponding number of VMs of that flavor.
For scaling down, there is a corresponding load imbalance tolerance factor \(\beta \), which denotes the number of over-provisioned VMs of each VM flavor. A scaling-down procedure is triggered only when no VM violates the SLO, i.e., \(\forall i \in N, w_i < c_i\), where N is the complete set of participating VMs. Then, the number of VMs to de-allocate is calculated through a process similar to scaling up. The aggregated values of \(\frac{w_x - c_x}{c_x}\) for each VM flavor are floored after subtracting \(\beta \). Finally, the corresponding number of VMs is de-allocated when the floored result is greater than zero.
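The sketch below implements one plausible reading of these \(\alpha \)/\(\beta \) rules for a single VM flavor; the sign conventions, the treatment of the \(\beta \)-adjusted floor, and the example numbers are assumptions for illustration:

```python
import math

ALPHA = 1    # tolerated number of SLO-violating VMs before scaling up
BETA = 0.5   # over-provisioning allowance (in VMs) when scaling down

def scaling_decision(workloads, capacities):
    """VM delta for one flavor, given per-VM workload w_x and capacity c_x
    (each c_x comes from that VM's own SML model)."""
    deficits = [(w - c) / c for w, c in zip(workloads, capacities)]
    violating = sum(1 for d in deficits if d > 0)
    total = sum(deficits)

    if violating > ALPHA:                    # scale up
        return max(math.ceil(total), 1)
    if violating == 0:                       # every VM meets the SLO
        surplus = math.floor(-total - BETA)  # spare VMs beyond the allowance
        if surplus > 0:                      # scale down
            return -surplus
    return 0                                 # within tolerance: do nothing

# Two VMs overloaded by 20% and 30%, one with 35% headroom: add one VM.
print(scaling_decision([1200, 1300, 650], [1000, 1000, 1000]))
```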
When a scaling up/down decision is made, the elasticity controller interacts with the Cloud/platform API to request/release VMs. Where applicable, the elasticity controller also calls the API to rebalance data onto newly added VMs or to decommission VMs that are about to be removed. Adding/removing VMs to/from a distributed storage system introduces a significant amount of data rebalancing load in the background. This leads to fluctuations in sensitive performance measures, such as percentile latency. Usually, the extra data rebalancing load is not long-lasting, so this fluctuation can be filtered out in our SML model with proper settings of the confidence level and update frequency introduced in Sect. 4.2.
4.4 Workload prediction
An optional but essential component of OnlineElastMan is the workload prediction module. It is always too late to make a scaling-out decision when the workload has already increased, since preparing VMs involves non-negligible overhead, especially for storage systems, which require data to be migrated to newly added VMs. Thus, a prediction module facilitates OnlineElastMan in making decisions in advance.
Often, there are patterns to be found in the workload, such as the diurnal pattern [26]. These patterns become vague when the workload is distributed over the individual VMs. Thus, we do not predict the incoming workload of each VM. Rather, the workload is predicted for the whole system and then proportionally attributed to each VM based on the portion of the current workload that it serves. Finally, instead of using the current incoming workload to make a scaling decision as in the previous section, we are able to use the predicted workload as the input.
However, even predicting the workload for the whole system is not trivial, since many factors contribute to the fluctuation of the workload [27]. Some workloads have repetitive/cyclic patterns, such as diurnal or seasonal patterns, while other workloads experience exponential growth over a short period of time, which can be caused by marketing campaigns or special offers. Considering that there is no perfect predictor and different applications' workloads are distinct, no single prediction algorithm is general enough to suit most workloads. Thus, we have studied and analyzed several prediction algorithms designed for different workload patterns, i.e., regression trees, first-order autoregressive, differenced first-order autoregressive, exponential smoothing, second-order autoregressive, and random walk models. A weighted majority algorithm (Sect. 4.4.3) is then used to select the best prediction algorithm.
4.4.1 Regression trees model
Regression trees predict responses to data and are considered a variant of decision trees. They specify the form of the relationship between predictors and a response. We first build a tree from the time series data through a process known as recursive partitioning (Algorithm 1) and then fit the leaf values to the input predictors, as in neural networks. In particular, to predict a response, we follow the decisions in the tree from the root node all the way down to a leaf node, which contains the response.
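As an illustration, the sketch below trains a regression tree on lagged workload observations and produces a one-step-ahead forecast; the trace, lag count, and tree depth are hypothetical, and scikit-learn is an assumed stand-in for the authors' implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def make_lagged(series, lags=3):
    """Turn a 1-D time series into (lagged features, next value) pairs."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

# Hypothetical workload trace (requests/s per control period).
trace = np.array([900, 950, 1000, 1100, 1300, 1600, 2000, 2500, 3100], float)

X, y = make_lagged(trace)
tree = DecisionTreeRegressor(max_depth=4).fit(X, y)

# One-step-ahead forecast from the three most recent observations.
print(tree.predict(trace[-3:].reshape(1, -1))[0])
```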
4.4.2 ARIMA
Autoregressive moving average (ARMA) is one of the most widely used approaches to time series forecasting. The ARMA model is convenient for modelling stationary time series data. In order to handle non-stationary time series data as well, a differencing component is added; this class of models is referred to as the autoregressive integrated moving average (ARIMA) model. Specifically, an ARIMA model is made up of an autoregressive (AR) component of lagged observations, a moving average (MA) of past errors, and a differencing component (I) needed to make the time series stationary. The MA component is impacted by past and current errors, while the AR component expresses the recent observations as a function of past observations [28].
In general, an ARIMA model is parametrized as ARIMA(p, d, q), where p is the number of autoregressive terms (order of AR), d is the number of differences needed for stationarity, and q is the number of lagged forecast errors in the prediction equation (order of MA). The following equation represents a time series expressed in terms of an AR(n) model:
$$\begin{aligned} Y^{'}(t) = \mu + \alpha _1Y(t-1) + \alpha _2Y(t-2) + \cdots + \alpha _nY(t-n) \end{aligned}$$
Equation 7 represents a time series expressed in terms of moving averages of white noise and error terms.
$$\begin{aligned} Y^{'}(t) = \mu + \beta _1\epsilon (t-1) + \beta _2\epsilon (t-2) + \cdots + \beta _n\epsilon (t-n) \end{aligned}$$
In OnlineElastMan, apart from the regression tree, we have integrated five ARIMA models: the first-order autoregressive (ARIMA(1, 0, 0)), the differenced first-order autoregressive (ARIMA(1, 1, 0)), the simple exponential smoothing (ARIMA(0, 1, 1)), the second-order autoregressive (ARIMA(2, 0, 0)), and the random walk (ARIMA(0, 1, 0)) models. In our view, they can capture almost all common workload patterns. For example, the first-order autoregressive model performs well when the workload is stationary and autocorrelated, while for a non-stationary workload a random walk model might be suitable. The challenge, then, is to detect and select the most appropriate prediction model during runtime.
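A minimal sketch of fitting these five variants and producing one-step forecasts is given below, using statsmodels as an assumed implementation (the paper does not name its ARIMA library) on a hypothetical workload trace:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

trace = np.array([900, 950, 1000, 1100, 1300, 1600, 2000, 2500, 3100], float)

# The five ARIMA variants integrated in OnlineElastMan, keyed by (p, d, q).
orders = {
    "first-order AR":             (1, 0, 0),
    "differenced first-order AR": (1, 1, 0),
    "simple exp. smoothing":      (0, 1, 1),
    "second-order AR":            (2, 0, 0),
    "random walk":                (0, 1, 0),
}

for name, order in orders.items():
    forecast = ARIMA(trace, order=order).fit().forecast(steps=1)[0]
    print(f"{name:28s} -> {forecast:8.1f}")
```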
4.4.3 The weighted majority algorithm
A weighted majority algorithm (WMA) is implemented to select the best prediction model during runtime. It is a machine learning algorithm that builds a combined algorithm from a pool of algorithms [29]. The algorithm assumes that one of the known algorithms in the pool will perform well under the current workload, without prior knowledge of the accuracy of the algorithms. The WMA has many variations suited for different scenarios, including infinite loops, shifting targets, and randomized predictions. We present our WMA implementation in Algorithm 2. Specifically, the algorithm maintains a list of weights \(w_1\),...,\(w_n\), one for each prediction algorithm. The prediction result of the most weighted algorithm, based on a weighted majority vote, is selected and returned.
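Algorithm 2 itself is not reproduced here; the following is a minimal sketch of the classic weighted-majority scheme it builds on. The penalty factor, the relative-error tolerance, and the two toy predictors are assumptions:

```python
class WeightedMajority:
    """Weighted-majority selector over a pool of forecast functions.

    A predictor's weight is multiplied by `penalty` whenever its last
    forecast was off by more than `tolerance` (relative error); the
    forecast of the highest-weighted predictor is returned."""

    def __init__(self, predictors, penalty=0.5, tolerance=0.1):
        self.predictors = predictors  # name -> forecast function
        self.weights = {name: 1.0 for name in predictors}
        self.last = {}
        self.penalty, self.tolerance = penalty, tolerance

    def predict(self, history):
        self.last = {n: f(history) for n, f in self.predictors.items()}
        best = max(self.weights, key=self.weights.get)
        return self.last[best]

    def observe(self, actual):
        for name, forecast in self.last.items():
            if abs(forecast - actual) > self.tolerance * abs(actual):
                self.weights[name] *= self.penalty

# Two toy predictors over a workload history list.
wma = WeightedMajority({
    "last value": lambda h: h[-1],
    "linear":     lambda h: 2 * h[-1] - h[-2],
})
print(wma.predict([900, 1000, 1100]))  # forecast for the next period
wma.observe(1200)                      # penalize whichever was wrong
```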
The prediction module of OnlineElastMan is shown in Fig. 8. Additional prediction algorithms can be plugged into the prediction module to handle more workload patterns.
Fig. 8 Architecture of the workload prediction module
4.5 Putting everything together
OnlineElastMan operates according to the flowchart illustrated in Fig. 9. The incoming workload is fed to two modules, i.e., the prediction module and the online training module. The prediction module uses the current workload characteristics to predict the workload of the next control period using the algorithm described in Sect. 4.4. The online training module records the current workload composition and samples the service latency under the current workload; it then trains the performance model at the update frequency. The actuation is calculated, based on the predicted workload for the next control period, using the updated performance model according to the algorithm explained in Sect. 4.3. Finally, the actuation is carried out on the Cloud platform that hosts the storage service.
Fig. 9 Control flow of OnlineElastMan
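In code, the whole flow reduces to a periodic loop of the following shape; this is a skeleton only, and the monitor, predictor, models, and cloud objects are hypothetical placeholders for the components described above:

```python
import time

CONTROL_PERIOD = 300  # seconds; one control period (5 min in Sect. 5)

def control_loop(monitor, predictor, models, cloud):
    """Skeleton of the OnlineElastMan control flow (all four collaborators
    are hypothetical interfaces, not part of a published API)."""
    while True:
        samples = monitor.collect()          # workload + latency samples
        predictor.observe(samples.workload)  # update prediction weights
        models.train(samples)                # online SVM model update

        predicted = predictor.predict()      # workload for the next period
        delta = models.required_vm_delta(predicted)
        if delta > 0:
            cloud.add_vms(delta)             # scale up, then rebalance
        elif delta < 0:
            cloud.remove_vms(-delta)         # decommission, then rebalance

        time.sleep(CONTROL_PERIOD)
```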
5 Evaluation
We evaluate OnlineElastMan from two aspects. First, we show the accuracy of the prediction module, which consists of six prediction algorithms. It directly influences the provisioning accuracy of OnlineElastMan, since it is an essential input to the performance model. Then, we present the evaluation results of OnlineElastMan when it dynamically provisions a Cassandra cluster using the online multi-dimensional performance model.
Our evaluation is conducted in a private Cloud running the OpenStack software stack. Our experiments are conducted on VMs with two virtual cores (2.40 GHz), 4 GB RAM, and 40 GB of disk. They are spawned to host the storage service or the benchmark clients. OnlineElastMan is configured separately on one of the VMs. An overview of the evaluation setup is presented in Fig. 10.
Fig. 10 Different numbers of YCSB clients are used to generate workloads of different intensities. OnlineElastMan resizes the Cassandra cluster according to the workload
5.1 Evaluation environment
5.1.1 Underlying storage system
Cassandra (version 2.0.9) is deployed as the underlying storage system and provisioned by OnlineElastMan. Cassandra is chosen because of its popularity as a scalable backend storage system at many companies, e.g., Facebook. Briefly, Cassandra is a distributed replicated database organized with distributed hash tables. Since a Cassandra cluster is organized in a peer-to-peer fashion, it achieves linear scalability. Minimal instrumentation is introduced into Cassandra's read and write paths, as shown in Fig. 11. The instrumented library samples and stores the service latency of requests in its repository. OnlineElastMan's data collector component periodically, every 5 min in our experiments, pulls the collected access latencies from the repository on each Cassandra node. The collected request samples from each Cassandra node are used by the prediction module and the online training module of OnlineElastMan, as shown in Fig. 9. The Cassandra rebalance API is called to redistribute data when adding/removing Cassandra nodes.
Fig. 11 Cassandra instrumentation for collecting request latencies
5.1.2 Workload benchmark
We adopt YCSB (Yahoo! Cloud Serving Benchmark, version 0.1.4) to generate the workload for our Cassandra cluster. We choose YCSB because of its flexibility in synthesizing various workload patterns, including varying read/write request intensities and sizes of the propagated data. Specifically, we configure the YCSB clients with the parameters shown in Table 1. In order to generate a stable workload for Cassandra, a fixed request rate (1200 req/s) is set for each YCSB client, hosted on a separate VM. We vary the total amount of generated workload by adding or removing VMs that host YCSB clients.
Table 1 YCSB configuration
Request distribution: uniform
Record count:
Read proportion: varied (0.0–1.0)
Update proportion:
Data size: varied (1–20) KB
Consistency level:
5.1.3 Multi-dimensional performance model
Fig. 12 Training illustration of the three-dimensional performance model. a–c are ordered by the length of the training period; d–f are visualizations of a–c with the data size dimension projected onto the other two dimensions
Our performance model is trained automatically as the input workload varies. OnlineElastMan takes as input the monitored parameters specified in Sect. 4.1. Specifically, the workload features, including the read and write request intensity and the request data size, and the corresponding service latency, obtained from the Cassandra instrumentation, are associated to train the model. Details on model training are presented in Sect. 4.2.
In practice, the model starts empty and needs to be trained online automatically for some time, because the model is application and platform specific. Thus, it needs a warm-up training phase. In our experiments, it takes approximately 20–30 min to train a performance model from scratch. After the warm-up, the model can be used to facilitate the decision making of the elasticity controller while serving the workload.
Figure 12 depicts the model built and used in our evaluation. It consists of three input parameters or dimensions, i.e., the read/write request intensity and the data size. The controlled parameter is the 99th percentile read latency, which is set to 35 ms in our case. As shown in the figure, with more training data, the model (the shaded surface) evolves into a more accurate state. In practice, the performance model is dynamic and evolves while serving the workload, so it can automatically evolve into a more accurate model that reflects changes in the operating environment and the provisioned storage system. To be specific, the model gradually adapts to unknown factors, such as application interference or platform maintenance, using updated training data. A more accurate model leads to better provisioning accuracy when the elasticity controller consults it.
In our experiments, we found that the rate at which the model evolves affects the accuracy of the decisions made by the controller. The confidence level and update frequency (introduced in Sect. 4.2) dictate how fast the model evolves. Ideally, we should have enough confidence about the status of a data point (SLO violated or satisfied) before its status changes. Setting the confidence level low and the update frequency high may result in an oscillating (unstable) model, while the opposite settings may delay the evolution of the model. In our experiments, we set the confidence level to 0.5, i.e., if \(50\%\) of all read and write latency samples in the queue satisfy the SLO, then the corresponding data point satisfies the SLO, and vice versa. The update frequency is set to 5 min. For applications with distinct phases of operation, to prevent frequent retraining, one can maintain a set of models and dynamically select the best model for the current input pattern [30].
5.2 Evaluation on workload prediction
We evaluate the prediction accuracy of the workload prediction module using a synthetic workload generated by YCSB. We have synthesized workloads with different shapes of increase and decrease in total request intensity, at a fixed read/write ratio. Figure 13 presents the actual workload generated and the workload predicted by our prediction module, together with the dominant prediction algorithm chosen by the weighted majority algorithm. Our prediction module achieves a mean absolute percentage error as low as \(4.60\%\) for such a dynamic workload pattern.
Fig. 13 Workload prediction: the actual workload vs. the predicted workload
Fig. 14 VMs allocated according to the predicted workload and the updated control model
Fig. 15 The aggregated 99th percentile latency of all Cassandra VMs, with the allocation of VMs indicated by OnlineElastMan under the dynamic workload
5.3 Evaluation of OnlineElastMan over Cassandra
We set the goal of OnlineElastMan to keep the 99th percentile read latency at 35 ms, as stated in the SLO. The evaluation is conducted with the control period set to 5 min. Even though the YCSB workload is configured to be uniform in our case, we still observe a non-trivial difference in the amount of workload served by different Cassandra storage VMs. To trade off between the uneven workload served by each VM and the prevention of over-provisioning, we set the tolerance factors \(\alpha = 1\) and \(\beta = 0.5\).
As shown in Fig. 14, we start the experiment with 3 Cassandra VMs. From 0 to 40 min, the multi-dimensional performance model is trained and warmed up. The elasticity controller starts to function at 40 min. From 40 to 90 min, the workload increases gradually. It is observable that from 40 to 70 min the system is over-provisioned, as the percentile latency is far below the SLO boundary, as shown in Fig. 15. This is because the elasticity controller is set to operate with a minimum of 3 VMs, which corresponds to the replication factor of Cassandra. As the workload increases, the elasticity controller gradually adds two VMs from 80 min onwards. The workload then decreases sharply from 90 min, but the controller maintains a minimum of 3 Cassandra VMs. We continue to evaluate the performance of OnlineElastMan with another two rounds of workload increase and decrease at different scales (shown from 150 to 220 min and from 220 to 360 min). The evaluation indicates that OnlineElastMan is able to keep the 99th percentile latency commitment most of the time. On the other hand, we observe a small number of SLO violations under the provisioning of OnlineElastMan. This is because of the tolerance factors \(\alpha \) and \(\beta \), which allow us to tolerate some imbalance in the workload distribution across the Cassandra nodes.
6 Related work
6.1 Elasticity controllers in practice
Most of the elasticity controllers available in public Cloud services and used in production systems today are policy based and rely on simple if-then threshold-based triggers. Examples of such systems include Amazon Auto Scaling (AAS) [31], RightScale [32], and Google Compute Engine Autoscaling [33]. The wide adoption of this approach is mainly due to its simplicity in practice, as it does not require pre-training or expertise to get up and running. Policy-based approaches are suitable for small-scale systems in which adding/removing a VM when a threshold (e.g., on CPU utilization) is reached is sufficient to maintain the desired SLO. For larger systems, it can be non-trivial for users to set the thresholds and the correct number of VMs to add/remove.
Scryer [34] is Netflix's predictive auto-scaling engine. It allows Netflix to provision the right number of instances needed to handle its customers' traffic. Unlike systems such as AAS, Scryer predicts what the needs will be prior to the time of need and provisions instances based on those predictions. However, its genesis was triggered mainly by Netflix's relatively predictable traffic patterns, an assumption that does not always hold in a dynamic environment such as the Cloud.
6.2 Research on elasticity controllers
Most elasticity controllers that go beyond simple threshold-based triggers require a model of the target system in order to reason about the status of the system and decide on the control actions needed to improve it. The system model is typically trained offline using historical data, and the controller is tuned manually using expert knowledge of the expected workload patterns and service behavior.
Work in this area focuses on developing advanced models and novel approaches for elasticity control, such as ElastMan [4], SCADS Director [3], scaling HDFS [2], ProRenaTa [7], and Hubbub-Scale [8]. Although achieving very good results, most of these controllers ignore the practical aspects of the solution, which has slowed down their adoption in production systems. For example, SCADS Director [3] is tailored to a specific storage service, with prerequisites that are not common in storage systems (fine-grained monitoring and migration of storage buckets). ElastMan [4] uses two controllers in order to efficiently handle diurnal and spiky workloads, but it requires offline manual training of both controllers. The work of Lim et al. [2] on scaling the Hadoop Distributed File System (HDFS) adopts CPU utilization, which correlates strongly with request latency, as its scaling signal, but it relies on the data migration API integrated in HDFS. ProRenaTa [7] minimizes SLO violations during scaling by combining proactive and reactive control approaches, but it requires a specific prediction algorithm and its control model needs to be trained offline. Hubbub-Scale [8] and Augment Scaling [35] argue that platform interference can mislead an elasticity controller during its decision making; however, measuring interference requires access to many low-level platform metrics, e.g., cache counters.
OnlineElastMan, on the other hand, focuses on the practical aspects of an elasticity controller. It relies only on the most generic and obtainable metrics from the system and alleviates the burden of applying an elasticity controller in production. Specifically, the auto-training feature of OnlineElastMan makes its deployment, model training, and configuration effortless. Furthermore, a generic and extensible prediction model is integrated to provide workload prediction for various workload patterns.
6.2.1 Elastic scaling
The goal of an auto-scaling system is to automatically fine-tune the acquired resources of a system so as to minimize resource provisioning costs while meeting SLOs. An auto-scaling technique automatically scales resources according to demand. Different techniques exist in the literature that address the problem of auto-scaling. Given the wide diversity of these techniques, which are sometimes combinations of two or more methods, finding a proper classification of auto-scaling techniques is a challenge [36]. However, these techniques can be divided into two categories: reactive and proactive. In outline, a reactive approach reacts to real-time system changes, such as the incoming workload, while a proactive approach relies on the historical access patterns of a system to anticipate future needs and acquire or release resources in advance. Each of these approaches has its own merits and demerits [7]. Under the proactive and reactive categories, the following are some of the widely used auto-scaling techniques: threshold-based policies, reinforcement learning, queuing theory, control theory, and time series analysis. Time series analysis is a purely proactive approach, whereas threshold-based rules (used in Amazon and RightScale) are reactive. By contrast, reinforcement learning, queuing theory, and control theory can be used in both proactive and reactive settings, but they also exhibit the following demerits:
Reinforcement learning: This technique is effective under slowly changing conditions. It therefore cannot be applied directly to real applications, which usually suffer from sudden traffic bursts. The elasticity controller presented in [37] integrates several empirical models and switches among them to obtain better predictions. Didona et al. [38] present an elasticity controller that uses analytical modeling and machine learning, and show that combining both approaches results in better controller accuracy.
Queuing theory: Queuing models impose strong assumptions that may not be valid for real, complex systems. They are intended for stationary scenarios, so the models need to be recalculated whenever the conditions of the application change. For example, [39] models a cloud service using queuing theory and, using that model, builds two adaptive proactive controllers that estimate the future load on a service.
Control theory: Setting the gain parameters can be a difficult task. Previous works [4, 40, 41] have extensively studied applying control theory to achieve fine-grained resource allocations that conform to a given SLO. However, the offline training required by the existing approaches makes the deployment, model training, and configuration of the elasticity controller difficult.
In time series techniques, a given performance metric is sampled periodically at fixed intervals and analysed to make future predictions. Typically, these techniques are utilized for workload or resource usage prediction and are used to derive a suitable scaling action plan. For example, [42] uses a Fourier transform-based scheme to perform offline extraction of long-term cyclic workload patterns. CloudScale [43] and PRESS [22] perform long-term cyclic pattern extraction and resource demand prediction in order to scale up. The techniques used in these works complement our work.
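As a toy illustration of the time-series flavour of these techniques, the sketch below predicts the next workload sample from a sliding window by least-squares linear extrapolation; the window size and the linear model are our assumptions, not the schemes of [42, 43, 22].

# Sliding-window linear extrapolation of a workload time series.
# A toy stand-in for the time-series predictors discussed above.
def predict_next(samples, window=12):
    """Forecast the next sample from the last `window` observations."""
    recent = samples[-window:]
    n = len(recent)
    if n < 2:
        return recent[-1] if recent else 0.0
    # Least-squares fit of a line y = a*t + b over t = 0, ..., n-1.
    t_mean = (n - 1) / 2.0
    y_mean = sum(recent) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(recent))
    var = sum((t - t_mean) ** 2 for t in range(n))
    a = cov / var
    b = y_mean - a * t_mean
    return a * n + b  # extrapolate one step ahead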
6.2.2 Online profiling and prediction
A significant amount of literature exists that can be applied to predicting the traffic incident on a service, e.g., [3, 7, 22, 34, 44]. In most cases, to support different workload scenarios, more than one prediction algorithm is used, and the pattern of the workload to be predicted is assumed to be defined or known, which is not true in our case. The most important aspect, namely how switching is carried out among the prediction algorithms, is not made clear in most of these previous works. We therefore propose a simple weighted majority algorithm to handle this.
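The sketch below shows the weighted-majority idea in the style of Littlestone and Warmuth applied to choosing among workload predictors. The penalty factor and the relative-error threshold that decides when a predictor is "wrong" are illustrative assumptions, not the exact rule used in OnlineElastMan.

# Weighted-majority selection among competing workload predictors.
# The penalty factor and error tolerance are illustrative assumptions.
class WeightedMajoritySelector:
    def __init__(self, predictors, penalty=0.5, tolerance=0.1):
        self.predictors = predictors           # callables: history -> forecast
        self.weights = [1.0] * len(predictors)
        self.penalty = penalty                 # multiplicative weight decay
        self.tolerance = tolerance             # relative error counted as a miss

    def predict(self, history):
        # Follow the predictor that currently carries the largest weight.
        best = max(range(len(self.predictors)),
                   key=lambda i: self.weights[i])
        return self.predictors[best](history)

    def update(self, history, actual):
        # Penalize every predictor whose forecast for this step missed
        # by more than `tolerance` relative error.
        for i, p in enumerate(self.predictors):
            err = abs(p(history) - actual) / max(abs(actual), 1e-9)
            if err > self.tolerance:
                self.weights[i] *= self.penalty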
Among these prior works, AGILE [30], for instance, provides online, wavelet-based medium-term (up to 2 min) resource demand prediction with adequate lead time to start new application servers before performance degrades, i.e., before the application's SLO is affected by the changing workload pattern. In addition, AGILE uses online profiling to obtain a resource pressure model for each application it controls. This model calculates the amount of resources required to keep an application's SLO violation rate at a minimal level. Unlike our model, AGILE derives resource pressure models for CPU only, without considering other resources such as memory, network bandwidth, disk I/O, the application's workload intensity, etc. A multi-resource model can be built in two ways: each resource can have a separate resource pressure model, or a single model can represent all the resources. In this work, we adopt the latter approach.
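In code, the distinction between the two designs can be sketched as follows; the feature names and the linear combination are illustrative assumptions, not the actual OnlineElastMan performance model.

# Sketch: one pressure model per resource vs. a single model over all
# resources at once (the approach adopted here). Names are hypothetical.
def per_resource_pressure(features, models):
    # One model per metric; scale out if any single resource is under pressure.
    return max(models[name](value) for name, value in features.items())

def single_multi_resource_pressure(features, weights):
    # One model over the whole feature vector (CPU, memory, disk I/O,
    # workload intensity, ...); scale out as the value approaches 1.0.
    return sum(weights[name] * value for name, value in features.items())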
In this paper, since we do not know the pattern of our workload in advance, we have chosen some of the most commonly encountered types of ARIMA models. For a time series that is stationary and autocorrelated, a possible model is a first-order autoregressive model. If the time series is not stationary, the simplest possible model is a random walk model. However, if the errors of a random walk model are autocorrelated, a differenced first-order autoregressive model may be more suitable. [45] presents a detailed explanation of these models.
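In terms of the standard (p, d, q) notation, these three candidates can be written down and fitted as in the sketch below; the use of statsmodels is our choice for illustration, the paper does not prescribe a library.

# The three ARIMA variants mentioned above, expressed as (p, d, q) orders.
from statsmodels.tsa.arima.model import ARIMA

CANDIDATES = {
    "first-order autoregressive": (1, 0, 0),  # stationary, autocorrelated
    "random walk":                (0, 1, 0),  # non-stationary
    "differenced AR(1)":          (1, 1, 0),  # random walk, correlated errors
}

def fit_and_forecast(series, order, steps=1):
    """Fit one candidate ARIMA model and forecast `steps` points ahead."""
    result = ARIMA(series, order=order).fit()
    return result.forecast(steps=steps)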
7 Conclusions and future works
In this paper, we have designed, implemented and open-sourced OnlineElastMan, an "out-of-the-box" elasticity controller for distributed storage systems. It includes a self-training multi-dimensional performance model to alleviate model training efforts and provide better provisioning accuracy, a self-tuning prediction module to adjust the prediction to various workload patterns, and an elasticity controller that calculates and carries out scaling decisions by analyzing the inputs from the performance model and the prediction module. The evaluation of OnlineElastMan on Cassandra shows that it is able to provision a Cassandra cluster efficiently and effectively with respect to the percentile latency SLO in the showcase experiment.
For future work, the OnlineElastMan framework can be extended in two directions. First, it would be useful to extend the control model of OnlineElastMan with comprehensive metrics, e.g., CPU utilization, network statistics, disk I/Os, etc. Second, OnlineElastMan is essentially stateless. States are only preserved and used in the prediction and model training modules, which can be generated/trained during runtime. Thus, it is not difficult to decentralize OnlineElastMan for better scalability and fault tolerance.
The source code is available at https://github.com/gureya/OnlineElasticityManager.
This work was supported by the Erasmus Mundus Joint Doctorate in distributed computing program funded by the EACEA of the European Commission under FPA 2012-0030 and the End-to-End Clouds project funded by the Swedish Foundation for Strategic Research under the contract RIT10-0043. The authors would also like to thank the reviewers for their constructive comments and suggestions to improve the quality of the paper.
Lorido-Botran, T., Miguel-Alonso, J., Lozano, J.A.: A review of auto-scaling techniques for elastic applications in cloud environments. J. Grid Comput. 12(4), 559–592 (2014)
Lim, H.C., Babu, S., Chase, J.S.: Automated control for elastic storage. In: Proceedings of the 7th International Conference on Autonomic Computing (ICAC '10), pp. 1–10. ACM, New York (2010)
Trushkowsky, B., Bodík, P., Fox, A., Franklin, M.J., Jordan, M.I., Patterson, D.A.: The SCADS Director: scaling a distributed storage system under stringent performance requirements. In: Proceedings of the 9th USENIX Conference on File and Storage Technologies (FAST '11), pp. 12–12. USENIX Association, Berkeley, CA (2011)
Al-Shishtawy, A., Vlassov, V.: ElastMan: autonomic elasticity manager for cloud-based key-value stores. In: Proceedings of the 22nd International Symposium on High-Performance Parallel and Distributed Computing (HPDC '13), pp. 115–116. ACM, New York (2013)
Al-Shishtawy, A., Vlassov, V.: ElastMan: elasticity manager for elastic key-value stores in the cloud. In: Proceedings of the 2013 ACM Cloud and Autonomic Computing Conference (CAC '13), pp. 7:1–7:10. ACM, New York (2013)
Moulavi, M.A., Al-Shishtawy, A., Vlassov, V.: State-space feedback control for elastic distributed storage in a cloud environment. In: The 8th International Conference on Autonomic and Autonomous Systems (ICAS 2012), pp. 589–596 (2012)
Liu, Y., Rameshan, N., Monte, E., Vlassov, V., Navarro, L.: ProRenaTa: proactive and reactive tuning to scale a distributed storage system. In: 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pp. 453–464 (2015)
Navarro, L., Vlassov, V., Rameshan, N., Liu, Y.: Hubbub-Scale: towards reliable elastic scaling under multi-tenancy. In: 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid) (2016)
Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., Zaharia, M.: A view of cloud computing. Commun. ACM 53(4), 50–58 (2010)
Ohara, M., Nagpurkar, P., Ueda, Y., Ishizaki, K.: The data-centricity of Web 2.0 workloads and its impact on server performance. In: IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pp. 133–142 (2009)
Trushkowsky, B., Bodík, P., Fox, A., Franklin, M.J., Jordan, M.I., Patterson, D.A.: The SCADS Director: scaling a distributed storage system under stringent performance requirements. In: Proceedings of the 9th USENIX Conference on File and Storage Technologies (FAST '11), pp. 12–12 (2011)
Ramakrishnan, R., Gehrke, J.: Database Management Systems, 2nd edn. Osborne/McGraw-Hill, Berkeley, CA (2000)
Sumbaly, R., Kreps, J., Gao, L., Feinberg, A., Soman, C., Shah, S.: Serving large-scale batch computed data with Project Voldemort. In: The 10th USENIX Conference on File and Storage Technologies (FAST '12) (2012)
DeCandia, G., Hastorun, D., Jampani, M., Kakulapati, G., Lakshman, A., Pilchin, A., Sivasubramanian, S., Vosshall, P., Vogels, W.: Dynamo: Amazon's highly available key-value store. In: Proceedings of Twenty-First ACM SIGOPS Symposium on Operating Systems Principles (SOSP '07), pp. 205–220. ACM, New York (2007)
Lakshman, A., Malik, P.: Cassandra: a decentralized structured storage system. SIGOPS Oper. Syst. Rev. 44(2), 35–40 (2010)
Animoto's Facebook scale-up. http://blog.rightscale.com/2008/04/23/animoto-facebook-scale-up/ (2012)
Bodík, P., Fox, A., Franklin, M.J., Jordan, M.I., Patterson, D.A.: Characterizing, modeling, and generating workload spikes for stateful services. In: Proceedings of the 1st ACM Symposium on Cloud Computing (SoCC '10), pp. 241–252 (2010)
Hellerstein, J.L., Diao, Y., Parekh, S., Tilbury, D.M.: Feedback Control of Computing Systems. Wiley, New York (2004)
Horn, P.: Autonomic computing: IBM's perspective on the state of information technology. October 15 (2001)
Tickoo, O., Iyer, R., Illikkal, R., Newell, D.: Modeling virtual machine performance: challenges and approaches. SIGMETRICS Perform. Eval. Rev. 37(3), 55–60 (2010)
Iyer, R., Illikkal, R., Tickoo, O., Zhao, L., Apparao, P., Newell, D.: VM3: measuring, modeling and managing VM shared resources. Comput. Netw. 53(17), 2873–2887 (2009)
Gong, Z., Gu, X., Wilkes, J.: PRESS: predictive elastic resource scaling for cloud systems. In: International Conference on Network and Service Management (CNSM), pp. 9–16 (2010)
Vasić, N., Novaković, D., Miučin, S., Kostić, D., Bianchini, R.: DejaVu: accelerating resource allocation in virtualized environments. ACM SIGARCH Comput. Archit. News 40(1), 423–436 (2012)
Novaković, D., Vasić, N., Novaković, S., Kostić, D., Bianchini, R.: DeepDive: transparently identifying and managing performance interference in virtualized environments. Technical report (2013)
Gunn, S.R.: Support vector machines for classification and regression. Technical report, University of Southampton (1998)
Arlitt, M., Jin, T.: A workload characterization study of the 1998 World Cup web site. IEEE Netw. 14(3), 30–37 (2000)
Gusella, R.: Characterizing the variability of arrival processes with indexes of dispersion. IEEE J. Sel. Areas Commun. 9(2), 203–211 (1991)
Box, G.E.P., Jenkins, G.: Time Series Analysis, Forecasting and Control. Holden-Day, Incorporated (1990)
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Inf. Comput. 108(2), 212–261 (1994)
Nguyen, H., Shen, Z., Gu, X., Subbiah, S., Wilkes, J.: AGILE: elastic distributed resource scaling for infrastructure-as-a-service. In: Proceedings of the 10th International Conference on Autonomic Computing (ICAC '13), pp. 69–82. USENIX, San Jose (2013)
Amazon Elastic Compute Cloud. http://aws.amazon.com/ec2/
RightScale. http://www.rightscale.com/
Google Compute Engine. https://cloud.google.com/compute/docs/load-balancing-and-autoscaling
Yuan, D., Joshi, N., Jacobson, D., Oberai, P.: Scryer: Netflix's predictive auto scaling engine. http://techblog.netflix.com/2013/12/scryer-netflixs-predictive-auto-scaling.html. Accessed June 2015
Navarro, L., Vlassov, V., Rameshan, N., Liu, Y.: Augmenting elasticity controllers for improved accuracy. In: 13th IEEE International Conference on Autonomic Computing (ICAC) (2016)
Malkowski, S.J., Hedwig, M., Li, J., Pu, C., Neumann, D.: Automated control for elastic n-tier workloads based on empirical modeling. In: Proceedings of the 8th ACM International Conference on Autonomic Computing (ICAC '11), pp. 131–140. ACM, New York (2011)
Didona, D., Romano, P., Peluso, S., Quaglia, F.: Transactional Auto Scaler: elastic scaling of in-memory transactional data grids. In: Proceedings of the 9th International Conference on Autonomic Computing (ICAC '12), pp. 125–134. ACM, New York (2012)
Ali-Eldin, A., Tordsson, J., Elmroth, E.: An adaptive hybrid elasticity controller for cloud infrastructures. In: The 13th IEEE/IFIP Network Operations and Management Symposium (NOMS '12), Hawaii (2012)
Zhu, X., Young, D., Watson, B.J., Wang, Z., Rolia, J., Singhal, S., Hyser, C., Gmach, D., Gardner, R., Christian, T., Cherkasova, L., et al.: 1000 islands: integrated capacity and workload management for the next generation data center. In: Proceedings of the 5th International Conference on Autonomic Computing (ICAC), pp. 172–181 (2008)
Lim, H.C., Babu, S., Chase, J.S.: Automated control for elastic storage. In: Proceedings of the 7th International Conference on Autonomic Computing (ICAC '10), pp. 1–10. ACM, New York (2010)
Cherkasova, L., Gmach, D., Rolia, J., Kemper, A.: Capacity management and demand prediction for next generation data centers. In: IEEE International Conference on Web Services (ICWS), pp. 43–50 (2007)
Shen, Z., Subbiah, S., Gu, X., Wilkes, J.: CloudScale: elastic resource scaling for multi-tenant cloud systems. In: Proceedings of the 2nd ACM Symposium on Cloud Computing (SOCC '11), pp. 5:1–5:14. ACM, New York (2011)
Roy, N., Dubey, A., Gokhale, A.: Efficient autoscaling in the cloud using predictive models for workload forecasting. In: IEEE International Conference on Cloud Computing (CLOUD), pp. 500–507 (2011)
Nau, R.: Statistical forecasting: notes on regression and time series analysis. http://people.duke.edu/~rnau/411home.htm. Accessed June 2015
1. KTH Royal Institute of Technology, Stockholm, Sweden
2. Swedish Institute of Computer Science, Kista, Sweden
Liu, Y., Gureya, D., Al-Shishtawy, A. et al. Cluster Comput (2017) 20: 1977. https://doi.org/10.1007/s10586-017-0899-z
Received 10 March 2017
Geneva Tropical Wiki
Séminaire "Fables Géométriques".
The normal starting time of this seminar is 16.30 on Monday.
2020, Wednesday, May 20, 16:00 (CEST), Virtual seminar, Lionel Lang (Stockholm University)
Co-amoebas, dimers and vanishing cycles
In this joint work in progress with J. Forsgård, we study the topology of maps P:(\C*)^2 \to \C given by Laurent polynomials P(z,w). For specific P, we observed that the topology of the corresponding map can be described in terms of the co-amoeba of a generic fiber. When the latter co-amoeba is maximal, it contains a dimer (a particularly nice graph) whose fundamental cycles correspond to the vanishing cycles of the map P. For general P, the existence of maximal co-amoebas is widely open. In the meantime, we can bypass co-amoebas, going directly to dimers using a construction of Goncharov-Kenyon, and obtain a virtual correspondence between fundamental cycles and vanishing cycles. In this talk, we will discuss how this (virtual) correspondence can be used to compute the monodromy of the map P.
2020, Tuesday, April 7, 17:00, Virtual seminar (EDGE seminar) Grigory Mikhalkin (Geneva)
https://zoom.us/j/870554816?pwd=bERmR0ZQTitYNXJ1aFZLckxzeXZJZz09 Meeting ID: 870 554 816 Password: 014504
Area in real K3-surfaces
The real locus of a K3-surface is a multi-component topological surface. The canonical class provides an area form on these components (well defined up to multiplication by a scalar). In the talk we'll explore inequalities on the total areas of different components, as well as a link between such inequalities and a class of real algebraic curves called simple Harnack curves. Based on a joint work with Ilia Itenberg.
2020, Monday, March 31, 17:00, Virtual seminar, Vladimir Fock (Strasbourg)
https://unige.zoom.us/j/737573471 Meeting ID: 737 573 471
Higher measured laminations and tropical curves
We shall define a notion of a higher lamination - a graph embedded into a Riemann surface with edges coloured by generators of an affine Weyl group. This notion generalises, on the one hand, the notion of an ordinary integral measured lamination and, on the other hand, that of a tropical curve, and can be constructed out of an integral Lagrangian submanifold of the cotangent bundle.
2020, Monday, March 16, 16:30, Battelle, Alexander Veselov (Loughborough University)[POSTPONED]
On integrability, geometrization and knots
I will start with a short review of Liouville integrability in relation with Thurston's geometrization programme, using as the main example the geodesic flows on the 3-folds with SL(2,R)-geometry.
A particular case of such 3-folds is the modular quotient SL(2,R)/SL(2,Z), which is known, after Quillen, to be equivalent to the complement in the 3-sphere of the trefoil knot. I will show that remarkable results of Ghys about modular and Lorenz knots can be naturally extended to the integrable region, where these knots are replaced by the cable knots of the trefoil.
The talk is partly based on a recent joint work with Alexey Bolsinov and Yiru Ye.
2020, Monday, February 17, 16:30, Battelle, Karim Adiprasito
(University of Copenhagen, Hebrew University of Jerusalem)
Algebraic geometry of the sphere at infinity, polyhedral de Rham theory and L^2 vanishing conjectures
I will discuss a conjecture of Singer concerning the vanishing of L^2 cohomology on non-positively curved manifolds, and relate it to Hodge theory on a Hilbert space that arises as the limit of Chow rings of certain complex varieties.
2019, Friday, December 6, 15:00, Battelle, Tomasz Pelka (UniBe)
Q-homology planes satisfying the Negativity Conjecture
A smooth complex algebraic surface S is called a Q-homology plane if H_i(S,Q)=0 for i>0. This holds for example if S is a complement of a rational cuspidal curve in P^2. The geometry of such S is understood unless S is of log general type, in which case the log MMP applied to the log smooth completion (X,D) of S is insufficient. The idea of K. Palka was to study the pair (X,(1/2)D) instead. This approach gives much stronger constraints on the shape of D, and leads to the Negativity Conjecture, which asserts that the Kodaira dimension of K_X+(1/2)D is negative. It is a natural generalization e.g. of the Coolidge-Nagata conjecture about rational cuspidal curves, which was recently proved using these methods by M. Koras and K. Palka.
If this conjecture holds, all Q-homology planes of log general type can be classified. It turns out that, as expected by tom Dieck and Petrie, they are arranged in finitely many discrete families, each obtainable in a uniform way from certain arrangements of lines and conics on P^2. As a consequence, they all satisfy the Strong Rigidity Conjecture of Flenner and Zaidenberg; and their automorphism groups are subgroups of S_3. To illustrate this surprising rigidity, I will show how to construct all rational cuspidal curves (with complements of log general type, satisfying the Negativity Conjecture) inductively, by iterating quadratic Cremona maps. This construction in particular shows that any such curve is uniquely determined, up to a projective equivalence, by the topology of its singular points.
2019, Monday, November 25, 16:30, Battelle, Felix Schlenk (UniNe)
(Real) Lagrangian submanifolds
We start by describing how Lagrangian submanifolds of symplectic manifolds naturally appear in many ways: in celestial mechanics, integrable systems, symplectic geometry, and algebraic geometry. We then look at real Lagrangians, namely those which are the fixed point set of an anti-symplectic involution. How special is the property of being real? While many of the examples discussed above are real, we explain why the central fibres in toric symplectic manifolds are real only if the moment polytope is centrally symmetric. The talk is based on work of and with Joé Brendel, Yuri Chekanov, and Joontae Kim.
2019, Friday, November 8, 14:00, Battelle, Johannes Rau (University of Tübingen)
The dimension of an amoeba
Amoebas are projections of algebraic varieties in logarithmic coordinates and were originally introduced by Gelfand, Kapranov and Zelevinsky in their influential book. Based on computations, Nisse and Sottile formulated several questions concerning the dimension of amoebas. In a joint work with Jan Draisma and Chi Ho Yuen, we answer these questions by providing a general formula that computes the dimension of amoebas. If time permits, we also discuss the consequences of this formula for matroidal fans.
2019, Monday, November 4, 16.30, Battelle, Pierrick Bousseau (ETH Zurich)
Quasimodular forms from Betti numbers
This talk will be about refined curve counting on local P2, the noncompact Calabi-Yau 3-fold total space of the canonical line bundle of the projective plane. I will explain how to construct quasimodular forms starting from Betti numbers of moduli spaces of dimension 1 coherent sheaves on P2. This gives a proof of some stringy predictions about the refined topological string theory of local P2 in the Nekrasov-Shatashvili limit. Partly based on work in progress with Honglu Fan, Shuai Guo, and Longting Wu.
2019, Monday, October 28, 16.30, Battelle, Ilia Itenberg, (Sorbonne University)
Planes in four-dimensional cubics
We discuss possible numbers of 2-planes in a smooth cubic hypersurface in the 5-dimensional projective space. We show that, in the complex case, the maximal number of planes is 405, the maximum being realized by the Fermat cubic. In the real case, the maximal number of planes is 357.
The proofs deal with the period spaces of cubic hypersurfaces in the 5-dimensional complex projective space and are based on the global Torelli theorem and the surjectivity of the period map for these hypersurfaces, as well as on Nikulin's theory of discriminant forms.
Joint work with Alex Degtyarev and John Christian Ottem.
2019, Monday, October 14, 16:30, Battelle, Igor Krichever (Columbia University)
Degenerations of real normalized differentials
The behavior of real-normalized (RN) meromorphic differentials on Riemann surfaces under degeneration is studied. In particular, it is proved that the residues at the nodes are solutions of a suitable Kirchhoff problem on the dual graph of the curve. It is further shown that the limits of zeroes of RN differentials are the divisor of zeroes of a twisted differential — an explicitly constructed collection of RN differentials on the irreducible components of the stable curve, with higher order poles at some nodes. Our main tool is a new method for constructing differentials on smooth Riemann surfaces, in a plumbing neighborhood of a given stable curve.
2019, Monday, October 7, 16:30, Battelle, Jérémy Blanc (University of Basel)
Quotients of higher dimensional Cremona groups
We study large groups of birational transformations $\mathrm{Bir}(X)$, where $X$ is a variety of dimension at least $3$, defined over $\mathbb{C}$ or a subfield of $\mathbb{C}$. Two prominent cases are when $X$ is the projective space $\mathbb{P}^n$, in which case $\mathrm{Bir}(X)$ is the Cremona group of rank $n$, or when $X \subset \mathbb{P}^{n+1}$ is a smooth cubic hypersurface. In both cases, and more generally when $X$ is birational to a conic bundle, we produce infinitely many distinct group homomorphisms from $\mathrm{Bir}(X)$ to $\mathbb{Z}/2$. As a consequence we also obtain that the Cremona group of rank $n \ge 3$ is not generated by linear and Jonquières elements.
Joint work with Stéphane Lamy and Susanna Zimmermann
2019, Monday, September 30,16:30, Battelle, Roman Golovko (Charles University in Prague)
The wrapped Fukaya category of a Weinstein manifold is generated by the Lagrangian cocore discs
In a joint work with B. Chantraine, G. Dimitroglou Rizell and P. Ghiggini, we decompose any object in the wrapped Fukaya category as a twisted complex built from the cocores of the critical (i.e. half-dimensional) handles in a Weinstein handle decomposition. The main tools used are the Floer homology theories of exact Lagrangian immersions, of exact Lagrangian cobordisms in the SFT sense (i.e. between Legendrians), as well as relations between these theories.
2019, Wednesday, September 25, 11:00, Battelle, Ivan Fesenko, (University of Nottingham)
Two-dimensional local fields and integration on them
Two-dimensional local fields include formal loop objects such as $R((t))$, $C((t))$, $Q_p((t))$, and also fields such as $F_p((t_1))((t_2))$, $Q_p\{\{t\}\}$. They play a fundamental role in two-dimensional number theory, arithmetic geometry, representation theory, algebraic topology and mathematical physics. I will explain basic things about such fields, including their unusual topology, the theory of measure and integration on such fields, and the Fourier transform, which can be viewed as a (rigorous) arithmetic version of the Feynman integral. While one-dimensional local fields show up in tropical geometry of curves, one may expect that two-dimensional local fields should be involved in tropical geometry of surfaces.
2019, Monday, September 16, 16:30, Battelle, Gleb Smirnov, (ETH Zurich)
From flops to diffeomorphism groups
Following a short introduction to the flop surgery, I will explain how this surgery can be used to detect non-contractible loops of diffeomorphisms for many algebraic surfaces.
2019, Monday, May 20, 14:00, Battelle, Ziming Ma (Chinese University of Hong Kong)
Geometry of the Maurer-Cartan equation near degenerate Calabi-Yau varieties
In this talk, we construct a dgBV algebra PV^∗,∗(X) associated to a possibly degenerate Calabi-Yau variety X equipped with local thickening data. This gives a singular version of the (extended) Kodaira-Spencer dgLa which is applicable to both log smooth and maximally degenerate Calabi-Yau varieties. We use this to prove an unobstructedness result about the smoothing of degenerate log Calabi-Yau varieties X satisfying the Hodge-deRham degeneracy property for the cohomology of X, in the spirit of Kontsevich-Katzarkov-Pantev. We also demonstrate how our construction can be applied to produce a log Frobenius manifold structure on a formal neighborhood of the extended moduli space using Barannikov's technique. This is a joint work with Kwokwai Chan and Naichung Conan Leung.
2019, Monday, April 8, 16:00, Battelle, Michele Ancona (Institut Camille Jordan)
Random sections of line bundles over real Riemann surfaces
Given a line bundle L over a real Riemann surface, we study the number of real zeros of a random section of L. We prove a rarefaction result for sections whose number of real zeros deviates from the expected one.
2019, Monday, April 1, 16:00, Battelle, Mikhail Shkolnikov (IST Austria)
PSL-tropical limits
The classical tropical limit is defined for families of varieties in the algebraic torus. One of the ways to generalize this framework is to consider non-commutative groups instead of algebraic tori. We describe tropical limits for subvarieties in PSL(2,C): the result is spelled out in terms of floor diagrams and has parallels with symplectic field theory. The talk is based on the work in progress with Grigory Mikhalkin.
2019,Tuesday, March 26, 14:00, Battelle, Enrica Mazzon (Imperial College London)
Berkovich approach to degenerations of hyper-Kähler varieties
To a degeneration of varieties, we can associate the dual intersection complex, a topological space that encodes the combinatorics of the central fiber and reflects the geometry of the generic fiber. The points of the dual complex can be identified with valuations on the function field of the variety, hence the dual complex can be embedded in the Berkovich space of the variety. In this talk I will explain how this interpretation gives an insight into the study of dual complexes. I will focus on some degenerations of hyper-Kähler varieties and show that we are able to determine the homeomorphism type of their dual complex using techniques of Berkovich geometry. The results are in accordance with the predictions of mirror symmetry, and the recent work on the rational homology of dual complexes of degenerations of hyper-Kähler varieties, due to Kollár, Laza, Saccà and Voisin. This is joint work with Morgan Brown.
2019, Monday, March 18, 16:00, Battelle, Danilo Lewanski (Max Planck Institut für Mathematik)
Refreshing Tropical Jucys curves
We derive explicit formulae for the generating series of Grothendieck dessins d'enfant and monotone Hurwitz numbers via the semi-infinite wedge formalism, and from them we obtain bosonic Fock space expressions. This leads to a tropical geometric interpretation involving Gromov-Witten invariants as local multiplicities.
2019, Monday, March 11, 16:00, Battelle, Anton Mellit (University of Vienna)
Five-term relations
I will review how the five-term relation for the Faddeev-Kashaev quantum dilogarithm arises in the Hall algebra context, and sketch a simple proof. Then I will explain how this proof can be transported to the elliptic Hall algebra situation, where the five-term relation implies identities between Macdonald polynomials conjectured by Bergeron and Haiman. This is a joint work with Adriano Garsia.
2019, Monday, March 4, 16:00, Battelle, Andras Stipsicz (Budapest University)
Knot Floer homology and double branched covers
We will review the basic constructions of (various versions of) knot Floer homologies, show some applications and extensions of the definitions to the double branched cover, also using the covering transformation.
2019, Monday, February 25, 16:00, Battelle, Erwan Brugallé (Université de Nantes)
On the invariance of Welschinger invariants
Welschinger invariants are real analogs of Gromov-Witten invariants for symplectic 4-manifolds X. In this talk, I will strengthen Welschinger's original invariance result. Our main result is that when X is a real rational algebraic surface, Welschinger invariants eventually depend only on the number of real interpolated points, and some homological data associated to X. This result follows easily from a formula relating Welschinger invariants of two real symplectic manifolds differing by a surgery along a real Lagrangian sphere. As an application, we complete the computation of Welschinger invariants of real rational algebraic surfaces, and obtain vanishing, sign, and sharpness results generalizing previously known statements. If time permits, we will also discuss some hypothetical relations with tropical refined invariants defined by Block-Göttsche and Göttsche-Schroeter.
2018, Monday, December 10, 16:00, Battelle, Arthur Renaudineau (Lille)
Lefschetz hyperplane section theorem for tropical hypersurfaces
We will discuss variants of the Lefschetz hyperplane section theorem for the integral tropical homology groups of non-singular tropical hypersurfaces of toric varieties. As an application, we get that the integral tropical homology groups of non-singular tropical hypersurfaces are torsion free. This is a joint work with Charles Arnal and Kristin Shaw.
2018, Monday, November 26, 16:00, Battelle, Vladimir Fock (Strasbourg)
Higher complex structures on surfaces
We suggest a definition of a differential geometric structure on surfaces generalizing the notion of complex structure and discuss its properties. The moduli space of such structures shares many common features with, and conjecturally coincides with, the higher Teichmüller space - the space of positive representations of the fundamental group of the surface into PGL(N) (just as the moduli of ordinary complex structures give a representation of the fundamental group to PGL(2)). Joint work with A. Thomas.
2018, Monday, November 19, 16:15, Battelle, Stepan Orevkov (Moscow, Toulouse)
Orthogonal polynomials in two variables
A natural generalization of classical systems of (one-variable) orthogonal polynomials is as follows. Let $D$ be a domain in $R^n$ endowed with a Riemannian metric and a measure. Suppose that the Laplace-Beltrami operator (for the given metric) is symmetric (for the given measure) and leaves invariant the set of polynomials of a given degree. Then its eigenfunctions form a system of orthogonal polynomials.
I present a complete classification of domains in $R^2$ for which this construction can be applied. The talk is based on a joint work with D. Bakry and M. Zani.
2018, Monday, October 8, 16:30, Battelle, Sione Ma'u (Auckland)
Polynomial degree via pluripotential theory
Given a complex polynomial $p$ in one variable, $\log|p|$ is a subharmonic function that grows like $(\deg p)\log|z|$ as $|z|\to\infty$. Such functions are studied using complex potential theory, based on the Laplace operator in the complex plane.
Multivariable polynomials can also be studied using potential theory (more precisely, a non-linear version called pluripotential theory, which is based on the complex Monge-Ampere operator). In this talk I will motivate and define a notion of degree of a polynomial on an affine variety using pluripotential theory (Lelong degree). Using this notion, a straightforward calculation yields a version of Bezout's theorem. I will present some examples and describe how to compute Lelong degree explicitly on an algebraic curve. This is joint work with Jesse Hart.
2018, Monday, October 1, 16:00, Battelle, Mikhail Shkolnikov (Klosterneuburg)
Extended sandpile group and its scaling limit
Since its invention, the sandpile model has been believed to be renormalizable due to the presence of power laws. It appears that the sandpile group, made of recurrent configurations of the model, approximates a continuous object that we call the extended sandpile group. In fact, this is a tropical abelian variety defined over Z, and the subgroup of its integer points is exactly the usual sandpile group. Moreover, the extended sandpile group is naturally a sheaf on discrete domains and thus brings an explicit scale renormalization procedure for recurrent configurations. We compute the (projective) scaling limit of sandpile groups along growing convex domains: it is equal to the quotient of real-valued discrete harmonic functions by the subgroup of integer-valued ones. This is a joint work with Moritz Lang.
2018, Wednesday, July 18, 16:30, Battelle, Kristin Shaw (University of Oslo)
Chern-Schwartz-MacPherson classes of matroids. Part II
Chern-Schwartz-MacPherson (CSM) classes are one way to extend the notion of Chern classes to singular and non-complete varieties. Matroids are an abstraction of the notion of independence in mathematics. In this talk, I will provide a combinatorial analogue of CSM classes for matroids, motivated by the geometry of hyperplane arrangements. In this setting, CSM classes are polyhedral fans which are Minkowski weights. One goal for defining these classes is to express matroid invariants as invariants from algebraic geometry. The CSM classes can be used to study the complexity of more general objects such as subdivisions of matroid polytopes and tropical manifolds. This is based on joint work with Lucia López de Medrano and Felipe Rincón.
2018, Monday, July 16, 16:00, Battelle, Kristin Shaw (University of Oslo)
Chern-Schwartz-MacPherson classes of matroids
2018, Monday, July 9, 16:30, Battelle, Ernesto Lupercio (CINVESTAV)
Convex geometry, complex systems and quantum physics
I will speak about our work on sandpiles and quantum integrable systems. Just as in classical mechanics toric manifolds correspond to rational convex polytopes, the irrational case is informed by the theory of sandpiles. Joint with Kalinin, Shkolnikov, Katzarkov, Meersseman and Verjovsky.
Workshop "Fables Géométriques", 2018, June Friday 15 and Saturday 16, Battelle
Friday June 15th
11:00-12:00 Yakov Eliashberg (Stanford)
14:30-15:30 Sergey Finashin (METU)
16:00-17:00 Viatcheslav Kharlamov (Strasbourg)
Saturday June 16th
14:30-15:30 Askold Khovansky (Toronto)
16:00-17:00 Stepan Orevkov (Toulouse)
17:30-18:30 Oleg Viro (Stony Brook)
19:30 dinner
2018, Monday, May 28, 15:00, Battelle, Alexander Esterov (Higher School of Economics - Moscow)
Tropical characteristic classes and Plücker formulas
Given a proper generic map of manifolds, the Thom polynomial counts (in terms of characteristic classes of the manifolds) how many fibers of the map have a prescribed singularity. However, this tool cannot be directly applied to the study of generic polynomial maps $C^m \to C^n$, because they are not proper. An attempt to extend Thom polynomials in this natural direction leads to what can be called tropical Thom polynomials and tropical characteristic classes.
I will introduce tropical characteristic classes of (very) affine algebraic varieties, compute the tropical version of the simplest Thom polynomials (the Plücker formulas for the number of cusps and nodes of a projectively dual curve), and outline their relation to tropical correspondence theorems and some other possible applications.
2018, Friday, May 25, 10:30, Battelle, Dimitry Kaledin (Steklov & NRU HSE - Moscow)
Witt vectors, commutative and non-commutative, II
Witt vectors were first introduced eighty years ago, but they still come up in different questions of commutative and homological algebra, algebraic geometry, and even algebraic topology. I will try to give a general introduction to this remarkable subject, and show both its classical parts and some recent discoveries. The first talk will be quite elementary; first-year algebra should be enough. In the somewhat more advanced second talk, I will try to explain how the simple constructions of the first talk lead to a non-commutative generalization of Grothendieck's crystalline cohomology of smooth algebraic varieties over a finite field.
2018, Tuesday, May 22, 15:30, Battelle, Dimitry Kaledin (Steklov & NRU HSE - Moscow)
Witt vectors, commutative and non-commutative, I
2018, Monday, May 14, 16:30, Battelle, Ilia Itenberg (Paris VI - ENS)
Finite real algebraic curves
The talk is devoted to real plane algebraic curves with finitely many real points. We study the following question: what is the maximal possible number of real points of such a curve provided that it has given (even) degree and given geometric genus? We obtain a complete answer in the case where the degree is sufficiently large with respect to the genus, and prove certain lower and upper bounds for the number in question in the general case. This is a joint work with E. Brugallé, A. Degtyarev and F. Mangolte.
2018, Monday, April 16, 16:30, Battelle, Ludmil Katzarkov (Vienna)
Homological mirror symmetry and the P=W conjecture
2018, Monday, March 5, 16:00, Battelle, Rahul Pandharipande (ETH)
On Lehn's conjecture for Segre classes on Hilbert schemes of points of surfaces and generalizations
Let L→S be a line bundle on a nonsingular projective surface. I will discuss recent progress concerning the formula conjectured by Lehn in 1999 for the top Segre class of the tautological bundle L^[n] on Hilb(S,n) and the parallel question for vector bundles V→S. Results of Voisin play a crucial role. The talk represents joint work with A. Marian and D. Oprea.
2018, Monday, February 26, 16:30, Battelle, Anton Fonarev (Higher School of Economics)
Embedding derived categories of curves into derived categories of moduli of stable vector bundles
One of many interesting questions about derived categories is the following conjecture of A. Bondal: the bounded derived category of coherent sheaves on a smooth projective variety can be embedded into the bounded derived category of coherent sheaves on a smooth Fano variety. This conjecture is rather nontrivial even for curves. We will show how to embed the derived category of a generic curve of genus g > 1 into the derived category of the moduli space of rank 2 stable vector bundles with a fixed determinant of odd degree. The proof is a nice interplay of algebraic geometry, representation theory and categorical methods. The talk is based on a joint work with A. Kuznetsov.
2017, Monday, November 6, 16:30, Battelle, Jeffrey Giansiracusa (Swansea University)
Tropical geometry as a scheme theory
Tropical geometry has become a powerful tool set for tackling problems in algebraic geometry, combinatorics, and number theory. The basic objects have traditionally been considered as certain polyhedral sets and heuristically thought of as algebraic objects defined over the real numbers with the max-plus semiring structure. I will explain how to realize this within an extension of scheme theory and describe the particular form of the equations of tropical varieties in terms of matroids.
2017, Monday, October 30, 16:30, Battelle, Diego Matessi (Università degli Studi di Milano)
From tropical hypersurfaces to Lagrangian submanifolds
I will explain a construction of Lagrangian submanifolds of $(\mathbb{C}^*)^2$ or $(\mathbb{C}^*)^3$ which lift tropical hypersurfaces in $\mathbb{R}^2$ or $\mathbb{R}^3$. The building blocks are what I call Lagrangian pairs of pants. These can be constructed as graphs of the differential of a smooth function defined on a Lagrangian co-amoeba. I will also explain some possible generalizations and applications to mirror symmetry.
2017, Monday, October 2, 16:30, Battelle, Dmitry Novikov (Weizmann Institute)
Complex cellular parameterization
(joint work with Gal Binyamini)
We introduce the notion of a complex cell, a complex analog of the cell decompositions used in real algebraic and analytic geometry. Complex cells defined using holomorphic data admit a natural notion of analytic continuation called $\delta$-extension, which gives rise to a rich hyperbolic geometric structure absent in the real case. We use this structure to prove that complex cellular decompositions share some interesting features with the classical constructions in the theory of resolution of singularities. Restriction of a complex cellular decomposition to the reals recovers the preparation theorem for subanalytic functions, and can be viewed as an analytic continuation thereof.
A key difference in comparison to the classical resolution of singularities is that the cellular decompositions are intrinsically uniform over (sub)analytic families. We deduce a subanalytic version of the Yomdin-Gromov theorem where $C^k$-smooth maps are replaced by mild maps.
2017, Friday, June 23, 11:00, Battelle, Ernesto Lupercio (CINVESTAV)
Quantum toric varieties
I will describe the theory of quantum toric varieties that generalizes usual toric geometry. Joint with Meersseman, Katzarkov and Verjovsky.
2017, Thursday, June 22, 11:30, Battelle, Conan Leung (CUHK)
Informal introduction to G_2-manifolds III
2017, Wednesday, June 21, 11:30, Battelle, Conan Leung (CUHK)
Informal introduction to G_2-manifolds II
2017, Monday, June 19, 15:00, Battelle, Conan Leung (CUHK)
Informal introduction to G_2-manifolds I
Villa Battelle, May 2, 14:00-15:00; May 3 14:15-15:15; May 5, 14:30-15:30, Aaron Bertram (Utah)
Minicourse: "Moduli Spaces of Complexes in Algebraic Geometry "
The ideal of the twisted cubic in projective three-space is completely described by a 2×3 matrix of linear forms in four variables. The space of such matrices (modulo the actions of GL(2) and GL(3)) is a smooth, projective variety compactifying the space of twisted cubics. But the objects parametrized by the points at the boundary of this moduli space are not ideals of curves. They are complexes of line bundles that are stable with respect to a "stability condition on the derived category." What does this mean? Can this be used to systematically find nice models for moduli and relate them to moduli spaces of coherent sheaves?
Day 1) Introduction to Stability Conditions. Ordinary stability of vector bundles on a Riemann surface relies on two invariants: the rank and the degree (first Chern class). A stability condition on the derived category of coherent sheaves on a complex manifold relies on a generalized rank and degree, and also on an exotic t-structure on the derived category, with an abelian category of complexes at its heart. On an algebraic surface, there are stability conditions whose underlying heart can be described by a tilting construction. However, finding a single stability condition on a projective Calabi-Yau threefold (e.g. the quintic in P^4) remains open.
Day 2) Models of the Hilbert Schemes of Points on a Surface. As the stability condition varies, the moduli spaces of stable objects (with respect to the stability condition) undergo a series of birational transformations. The particular example of the Hilbert scheme of ideal sheaves on an algebraic surface has been studied for various classes of surfaces. We will survey some results.
Day 3) The Euler Stability Condition on Projective Space. An interesting stability condition on P^n has the Euler characteristic playing the role of the rank. We will use this stability condition to study stratifications of the spaces of symmetric tensors, generalizing the secant varieties to the Veronese embeddings of P^n. This is joint work with Brooke Ullery.
Villa Battelle, Monday, Apr 3, 16:30-17:30, Lionel Lang (Uppsala University)
The vanishing cycles of curves in toric surfaces : the spin case
If the interior polygon of a lattice polygon $\Delta$ is divisible by 2, any generic curve $C$ of the linear system associated to $\Delta$ admits a spin structure $q$. If a loop in $C$ is a vanishing cycle, then the Dehn twist along the loop has to preserve $q$. As a consequence, the image of the monodromy of the linear system is a subgroup of the mapping class group $MCG(C,q)$ that preserves $q$. The main goal of this talk is to compare the image of the monodromy with $MCG(C,q)$. To this aim, we will show on one side that $MCG(C,q)$ admits a very explicit set of generators. On the other, we will construct elements of the monodromy by tropical means. The conclusion will be that the image of the monodromy is the full group $MCG(C,q)$ if and only if the interior polygon admits no other divisors than 2. (joint with R. Crétois)
Villa Battelle, Wednesday, Mar 8, 12:00, Maksim Karev (PDMI)
Monotone Hurwitz Numbers
Usual Hurwitz numbers count the covers of CP^1 with a fixed ramification profile over the point \infty that are simply ramified over a specified set of points. They can also be treated as a weighted count of factorizations in the symmetric group. It is known that Hurwitz numbers can be calculated via intersection indices on the moduli spaces of complex curves by the so-called ELSV formula.
In my talk, I will discuss monotone Hurwitz numbers, which also arise as a count of factorizations with restrictions. It turns out that they too can be related to intersection indices on the moduli spaces of complex curves. I will give a definition of monotone Hurwitz numbers and try to explain the origin of the monotone ELSV formula. If time permits, I will speak about further developments of the subject.
The talk is based on the joint work with Norman Do (Monash University).
Villa Battelle, Tuesday, Feb 21, 15:30, Yang-Hui He (London, Nankai and Oxford)
Calabi-Yau Varieties: From Quiver Representations to Dessins d'Enfants
We discuss how bipartite graphs on Riemann surfaces capture a wealth of information about the physics and the mathematics of gauge theories. The correspondence between the gauge theory, the underlying algebraic geometry of its space of vacua as a quiver variety, the combinatorics of dimers and toric varieties, as well as the number theory of dessins d'enfants becomes particularly intricate under this light.
Joint session of "Fables géométriques" and "Groupes de Lie et espaces des modules" seminars.
Villa Battelle, Monday, Feb 20, 16:30, Yang-Hui He (London, Nankai and Oxford)
Sporadic and Exceptional
There tend to be exceptional structures in classifications: in geometry, there are the Platonic solids; in algebra, there are the exceptional Lie algebras; in group theory, there are the sporadic groups, to name but a few. Could these exceptional structures be related in some way? A champion of such correspondences is Prof. John McKay. We take a casual promenade in this land of exceptionology, reviewing some classic results and presenting some new ones based on joint work with Prof. McKay.
Special lecture for "Geometry, Topology and Physics" masterclass students.
Villa Battelle, Friday, December 9, 14:30-15:30, Ozgur Ceyhan (Luxembourg)
Backpropagation, its geometry and tropicalisation
The algorithms that make the current successes of artificial neural networks possible are decades old. They became applicable only recently, as these algorithms demand huge computational power. Any technique that reduces the need for computation has the potential to make a great impact. In this talk, I am going to discuss the basics of backpropagation techniques and a tropicalisation of the problem that promises to reduce the time complexity and accelerate computations.
2016, Monday, November 7, 16:30, Battelle, Vladimir Fock.
Separation of variables in cluster integrable systems
Cluster integrable systems can be viewed from five rather different points of view. 1. As a double Bruhat cell of an affine Lie-Poisson group; 2. As a space of pairs (planar algebraic curve, line bundle on it); 3. As a space of Abelian connections on a bipartite graph on a torus; 4. As a Hilbert scheme of points on an algebraic torus; 5. As a collection of flags in an infinite space invariant under the action of two commuting operators. We will see the relation between all these descriptions and discuss quantization and possible generalizations.
2016, Friday, Nov 4, 14:30-15:15 part I, 15:30-16:15 part II, Johannes Walcher (Heidelberg).
Ideas of D-branes
Abstract: I will give an introduction to D-branes from the point of view of their origin in the physics of string theory. I will discuss both world-sheet and space-time aspects.
2016, Monday, 23 May, 16:30, Battelle, Frédéric Bihan.
Descartes' Rule of Signs for Polynomial Systems Supported on Circuits
Descartes' rule of signs bounds the number of positive roots of a univariate real polynomial by the number of sign changes between consecutive coefficients (ordered by increasing powers). The resulting bound is sharp, and generalizing Descartes' rule of signs or the corresponding sharp bound to the multivariate case is a challenging problem. In this talk, I will present a partial generalization of Descartes' rule: it bounds the number of positive solutions of any system of n real polynomial equations in n variables whose support consists of at most n+2 arbitrary monomials. As in the univariate case, our bound is sharp and is expressed as a number of sign changes of a sequence of numbers obtained from the maximal minors of the matrix of coefficients and of the matrix of exponents of the system. This is a joint work with Alicia Dickenstein (Buenos Aires University).
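As a quick univariate illustration of the rule (the example polynomial is ours): $p(x) = x^3 - 3x^2 + 1$ has coefficient signs $(+,-,+)$, i.e. two sign changes, so it has at most two positive roots; indeed $p(0)=1>0$, $p(1)=-1<0$ and $p(3)=1>0$, so there is one root in $(0,1)$ and one in $(1,3)$.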
2016, Monday, 9 May, Battelle, Eugenii Shustin, 18:30-19:15
On refined tropical invariants of toric surfaces.
We discuss two examples of refined counts of plane tropical curves. One of them is the refined broccoli invariant. It was introduced by Göttsche and Schroeter in the genus zero case, and it turns into a certain descendant invariant or the broccoli invariant according to whether the parameter takes the value 1 or -1. A possible extension of the broccoli invariant to positive genera appeared to be rather problematic. However, the refined version turns out to be easier to treat: jointly with F. Schroeter, we have defined a refined broccoli invariant counting elliptic tropical curves, and this can be done for higher genera as well (work in progress). Another example (joint work with L. Blechman) is the refined descendant tropical invariant (involving arbitrary powers of psi-classes). We also discuss the most interesting related question: what is the complex and real enumerative meaning of these invariants?
2016, Monday, 4 April, 16:30, Battelle.
Lionel Lang (Uppsala)
The vanishing cycles of curves in toric surfaces (joint work with Rémi Crétois)
In [Do], Donaldson addressed the following question: do all Lagrangian spheres in a complex projective manifold arise from the vanishing cycles of a deformation to singular varieties? The answer might depend on the choice of the moduli space in which we are allowed to deform our manifold. Already for curves, this leads to interesting questions. In the Deligne-Mumford moduli space M_g, any loop inside a smooth curve can be contracted along a deformation towards a nodal (stable) curve, provided that the genus g>1. What happens if one restricts to a chosen linear system on a toric surface, for instance degree d curves in the projective plane? In the latter case, two obstructions occur: the loop should not be separating for d>2 (Bezout), and the Dehn twist along the loop should preserve a certain spin structure on the curve for d odd (see [Beau]). In [Beau], Beauville proves (in particular) that any non-obstructed loop is homologous to a vanishing cycle. In this talk, we suggest a tropical proof of Beauville's result as well as an extension to any (big enough) linear system on any smooth toric surface. This problem is directly related to the monodromy group given by the complement of the discriminant in the considered linear system. The proof will involve simple Harnack curves, introduced by Mikhalkin, and monodromy given by partial tropical compactifications of the linear system. If time permits, we will also discuss this problem at the isotopic level, a problem that is still open.
[Beau] : Le groupe de monodromie des familles universelles d'hypersurfaces et d'intersections complètes. A. Beauville, 1986. [Do] : Polynomials, vanishing cycles and Floer homology. S.K. Donaldson, 2000.
2016, Monday, March 21, 16.30, Battelle.
Boris Shapiro (Stockholm)
On the Waring problem for polynomial rings
We discuss a natural analog of the classical Waring problem for $C[x_1,…,x_n]$. Namely, we show that a general form $p$ in $C[x_1,…,x_n]$ of degree $kd$, where $k>1$, can be represented as a sum of at most $k^n$ $k$-th powers of forms of degree $d$. Notably, $k^n$ coincides with the number obtained by a naive dimension count if $d$ is sufficiently large.
2016, Friday, March 18, 14.15, villa Battelle.
Sergey Galkin (Moscow)
Gamma conjectures and mirror symmetry
I will speak about an exotic integral structure in cohomology of Fano manifolds that conjecturally can be expressed in terms of Euler's gamma-function, how one can observe it by computing asymptotics of a quantum differential equation, and how one can prove the conjectures using mirror symmetry. This is a joint work with Vasily Golyshev and Hiroshi Iritani (1404.6407, 1508.00719).
2016, Thursday, March 17 Colloquium, villa Battelle
Vassily Golyshev (Moscow), 16:15
Around the gamma conjectures.
Abstract: We will state the gamma conjectures for Fano manifolds and explain how quantum cohomology makes it possible to enhance the classical Riemann-Roch-Hirzebruch theorem by relating the curve count on a variety to its characteristic classes. We will indicate how the gamma conjectures are proved in the known cases.
E. Abakoumov (Paris-Est)
Growth of proper holomorphic maps and tropical power series
How fast can a proper holomorphic map, say from C to C^n, grow? It turns out that tropical power series appear naturally in answering this question, as well as in some related approximation problems on the complex plane. The talk is based on joint work with E. Dubtsov.
2015, Tuesday, 8 December, 14.30, Battelle. (joint with Séminaire "Groupes de Lie et espaces des modules")
Bernd Sturmfels (UC Berkeley)
Exponential Varieties
Exponential varieties arise from exponential families in statistics. These real algebraic varieties have strong positivity and convexity properties, familiar from toric varieties and their moment maps. Another special class, including Gaussian graphical models, are inverses of symmetric matrices satisfying linear constraints. We present a general theory of exponential varieties, with focus on those defined by hyperbolic polynomials. This is joint work with Mateusz Michalek, Caroline Uhler, and Piotr Zwiernik.
2015, Tuesday, December 8, 11:15 -- 12:15, Battelle.
Renzo Cavalieri (Colorado State)
Tropical geometry: a graphical interface for the GW/Hurwitz correspondence.
In their study of the Gromov-Witten theory of curves [OP], Okounkov and Pandharipande used the degeneration formula to express stationary descendant invariants of curves in terms of Hurwitz numbers and one-point descendant relative invariants. They then used operator formalism to organize the combinatorics of the degeneration formula and the one-point invariants into completed cycles. In joint work with Paul Johnson, Hannah Markwig and Dhruv Ranganathan, we revisit their formalism and show that the Feynman diagrams that are secretly behind the scenes in [OP] are in fact tropical curves. This yields some mild refinements of the Gromov-Witten/Hurwitz correspondence of [OP]. Time permitting, we will describe how a generalization of these techniques should lead to unveiling a similar structure in the stationary/descendant GW theory of sliceable surfaces.
2015, Monday, 7 December, 16.15, Villa Battelle
Israel Vainsencher (Universidade Federal de Minas Gerais, Brasil)
Legendrian curves
A twisted cubic curve in 3-space is known to define a (non-integrable) distribution of planes; the planes of the distribution osculate the original twisted cubic. We show how to define virtual numbers N_d which enumerate the rational curves of degree d that are tangent to that distribution and further meet 2d+1 general lines. (Based on Eden Amorim's thesis)
The next lecture of the course
"Imaginary time in Kaehler geometry, quantization and tropical amoebas" by José Mourão
will be on Monday 9 November, 17.00 Battelle.
2015, October 27, 15.15 and October 29, 16.15, and November 2, 17.00, Villa Battelle
(Minicourse) Imaginary time in Kaehler geometry, quantization and tropical amoebas.
José Mourão, Mathematics Department, Instituto Superior Tecnico, Portugal.
For a compact Kaehler manifold $M$ and a function $H$ on $M$ we give a simple definition of the continuation of the flow defined by $H$ to complex time, $\tau$, using the Groebner theory of Lie series. The resulting complexified (or complex time) symplectomorphisms are diffeomorphisms for some $|\tau|< R_H$. For larger values of $|\tau|$ they may correspond e.g. to the collapse of $M$ to a totally real submanifold. Simple examples will be discussed.
Kahler geometry applications: Imaginary time symplectomorphisms correspond to Mabuchi geodesics in the infinite-dimensional space of Kaehler metrics with fixed cohomology class. We thus get an explicit way of constructing Mabuchi geodesics from Hamiltonian flows.
Quantum theory applications: By lifting the imaginary time symplectomorphisms to the quantum bundle we get generalized coherent state transforms and are able to study the unitary equivalence of quantizations corresponding to nonequivalent polarizations.
Tropical geometry applications: For toric varieties the toric geodesics of the Mabuchi metric are straight lines in the space of Guillemin-Abreu symplectic potentials. Taking a strictly convex function $H$ (as a function on the moment polytope), one has that, for large geodesic times $s$, there is a simple relation between the moment map $\mu_s$ and the $Log_t$ map of amoeba theory ($t=e^s$). This relation further simplifies if one takes as $H$ the full symplectic potential, which is continuous but not smooth on $M$ and corresponds to a geodesic of Kaehler metrics with cone angle singularities. The tropical limit thus corresponds, in this setting, to the infinite geodesic time limit for convex Hamiltonians.
Materials: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/2015_Geneva_University_Seminar.pdf
Index: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_index.pdf
Lecture 1 (introduction and different definitions of complex time evolution): http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L1.pdf
Lecture 2: Kahler tropicalization of C^*: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L2.pdf
Lecture 3: Kahler tropicalization of C and (strange) actions of G_C on Kahler structures: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L3.pdf
Lecture 4: C^infty Kahler tropicalization of toric varieties and of hypersurfaces in toric varieties: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L4.pdf
Lecture 5: C^0 Kahler tropicalization of toric varieties and of hypersurfaces in toric varieties: http://www.math.tecnico.ulisboa.pt/~jmourao/talkscourses/Lectures_UG_L5.pdf
2015, October 27, Tuesday, 15.15, Villa Battelle ( together with Séminaire "Groupes de Lie et espaces des modules")
Imaginary time in Kaehler geometry, quantization and tropical amoebas.
José Manuel Cidade Mourão, Mathematics Department, Instituto Superior Tecnico, Portugal.
For a compact Kaehler manifold $M$ and a function $H$ on $M$ we define a continuation of the Hamiltonian flow of $H$ to complex time $\tau$. The resulting complexified (or complex time) symplectomorphisms are diffeomorphisms for some $|\tau|< R_H$. For larger values of $|\tau|$ they may correspond e.g. to the collapse of $M$ to a totally real submanifold. We'll discuss some simple examples and applications to Kaehler geometry, quantization and tropical geometry. This talk is the first lecture of a mini-course to be given during October-November 2015.
2015, October 5, Monday, 16.20, Villa Battelle
Tropicalization of Poisson-Lie groups
Anton Alexeev (UniGe)
In the first part of the talk, we recall the notion of Poisson-Lie groups and cluster coordinates for some simple examples.
In the second part, we use the notion of tropicalization to construct completely integrable systems, and for the Poisson-Lie group SU(n)^* we match it with the Gelfand-Zetlin integrable system.
The talk is based on joint works with I. Davydenkova, M. Podkopaeva and A. Szenes.
2015, September 28, Monday, 16.15, Villa Battelle
What is moonshine?
Sergey Galkin (HSE, Moscow)
I will describe a few instances of geometric moonshines: surprising appearance of modular forms and sporadic groups as the answers to seemingly unrelated geometric and topological questions.
2015, 21 September, Monday, 16.15, Villa Battelle.
Cohomology of superforms on polyhedral complexes and Poincare duality for tropical manifolds
Kristin Shaw.
Superforms, introduced by Lagerberg, are bigraded differential forms on $\mathbb R^n$ which can be restricted to polyhedral complexes. We extend these forms to $\mathbb T^n = [-\infty, \infty)^n$ and show that their de Rham cohomology is equivalent to tropical $(p, q)$ cohomology. Furthermore, we establish Poincaré duality for the cohomology of tropical manifolds. As in the classical theory, the Poincaré pairing can be formulated in terms of integration of superforms.
old page of the seminar http://www.unige.ch/math/folks/langl/fables/
November 2019, 12(7): 2143-2161. doi: 10.3934/dcdss.2019138
Positive solutions of doubly coupled multicomponent nonlinear Schrödinger systems
Jiabao Su 1, Rushun Tian 1,*, and Zhi-Qiang Wang 2,3,*
School of Mathematical Science, Capital Normal University, Beijing 10048, China
Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
Department of Mathematics and Statistics, Utah State University, Logan, UT 84322, USA
* Corresponding author: Rushun Tian and Zhi-Qiang Wang
Received: November 2017; Revised: April 2018; Published: December 2018.
Fund Project: This paper is supported by Beijing Natural Science Foundation (1174013), National Natural Science Foundation of China (11601353, 11771302, 11771324, 11671026, 11831009)
In this paper, we study the following doubly coupled multicomponent system
$$\left\{\begin{array}{ll} -\Delta u_j + \lambda_j u_j + \sum_{k\neq j}\gamma_{jk}u_k = \mu_j u_j^3 + u_j\sum_{k\neq j}\beta_{jk}u_k^2,\\ u_j(x)\geq 0 \ \ \hbox{and} \ \ u_j\in H_0^1(\Omega), \end{array}\right.$$
where $\Omega\subset \mathbb{R}^N$, $N = 2,3$, the $\lambda_j, \gamma_{jk} = \gamma_{kj}, \mu_j, \beta_{jk} = \beta_{kj}$ are constants, $j, k = 1, 2, \ldots, n$, and $n\geq 2$. We prove some existence and nonexistence results for positive solutions of this system. If the system is fully symmetric, i.e. $\lambda_j\equiv\lambda, \gamma_{jk}\equiv\gamma, \mu_j\equiv\mu, \beta_{jk}\equiv\beta$, we study the multiplicity and bifurcation phenomena of positive solutions.
Keywords: Doubly coupled equations, bifurcation, multicomponent.
Mathematics Subject Classification: 35B05, 35J61, 58C40, 35J15, 58E07.
Citation: Jiabao Su, Rushun Tian, Zhi-Qiang Wang. Positive solutions of doubly coupled multicomponent nonlinear Schrödinger systems. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2143-2161. doi: 10.3934/dcdss.2019138
Periodicity property of DFT in time and frequency domain
I am trying to understand the periodicity of the DFT. How can this property (in both the time and frequency domains) be used, and how can it be helpful when developing on a DSP?
It would be good to see some source code or pseudo code, in MATLAB preferably, where this property is exploited/demonstrated.
matlab dft
Peter K.♦
gpuguy
$\begingroup$ As you say right up front, what you really want is MATLAB code. This is definitely off-topic for dsp.SE. If you want to know why the DFT is periodic, then that would be a reasonable question to ask. $\endgroup$ – Dilip Sarwate Apr 2 '13 at 13:45
If the expression that defines the DFT is evaluated for all integers $k$ instead of just for $k = 0, \dots, N-1$ , then the resulting infinite sequence is a periodic extension of the DFT, periodic with period $N$.
The periodicity can be shown directly from the definition:
$$ X_{k+N} \stackrel{\mathrm{def}}{=} \ \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} (k+N) n} = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} k n} \underbrace{e^{-2 \pi i n}}_{1} = \sum_{n=0}^{N-1} x_n e^{-\frac{2\pi i}{N} k n} = X_k.$$
Similarly, it can be shown that the IDFT formula leads to a periodic extension.
While the periodic property of the DFT isn't very widely utilized, it often causes aliasing problems.
Source: http://www.dspguide.com/ch10/3.htm
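Since the question explicitly asks for code, here is a minimal numerical check of the identity above (a Python/NumPy sketch; each line has a one-to-one MATLAB equivalent, e.g. fft and exp):

```python
# Numerical check that the DFT, evaluated at arbitrary integer k, satisfies
# X[k+N] == X[k], and that it matches the built-in FFT for k = 0..N-1.
import numpy as np

N = 8
x = np.random.randn(N)                    # arbitrary real test signal

def dft_at(x, k):
    """Evaluate the DFT definition at any integer index k."""
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

X = np.fft.fft(x)
for k in range(N):
    assert np.isclose(dft_at(x, k), X[k])              # definition matches fft
    assert np.isclose(dft_at(x, k + N), dft_at(x, k))  # period N in frequency
print("X[k+N] == X[k] holds for all k")
```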
Naresh
In the time domain, the DFT is periodic by definition. While DFT stands for Discrete Fourier Transform, the operation is in fact a discrete Fourier series: the signal to be analyzed is assumed to be periodic with period equal to the length of the signal. This periodic signal is decomposed into a series of periodic sequences. The DFT frequency bins are located at f = 1/T and its integer multiples, where T is the duration of the signal to be analyzed.
In the frequency domain, the DFT is periodic because the time-domain signal being analyzed is sampled. Recall that a sampled sequence cannot uniquely represent frequencies above fs/2, where fs is the sampling frequency (fs/2 is also known as the Nyquist frequency). Above fs/2, all signal energy is reflected back into the frequency range 0 to fs/2. Between fs/2 and fs, the reflection is in reverse order, which gives rise to a DFT period equal to fs.
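A short sketch of that reflection: after sampling, a tone at f0 and a tone at fs - f0 produce identical samples, which is exactly the fs-periodicity described above (Python/NumPy for illustration):

```python
# Demo: a 10 Hz tone and a 90 Hz tone sampled at fs = 100 Hz are
# indistinguishable, so their DFT magnitudes coincide (period fs).
import numpy as np

fs, N = 100.0, 100                       # 1 Hz bin spacing
t = np.arange(N) / fs
f0 = 10.0
x_low = np.cos(2 * np.pi * f0 * t)            # below Nyquist
x_high = np.cos(2 * np.pi * (fs - f0) * t)    # above Nyquist

print(np.allclose(x_low, x_high))             # True: identical samples
print(np.allclose(np.abs(np.fft.fft(x_low)),
                  np.abs(np.fft.fft(x_high))))  # True: identical spectra
```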
$\begingroup$ "I am trying to understand the periodicity of the DFT." ... and this gets a down vote! Go figure. $\endgroup$ – user2718 Apr 3 '13 at 11:32
$\begingroup$ I didn't downvote your answer, but feel that the downvote is perfectly understandable. If someone is wanting to understand the periodicity of the DFT, saying that "the DFT is periodic by definition" is like a mother answering a child's "Why?" by "Because I say so." What is it in the definition of the DFT that gives it the property of periodicity since the typical definition does not make any mention of periodicity, though it is sometimes discussed right below the definition in a section called "Properties of the DFT"? $\endgroup$ – Dilip Sarwate Apr 3 '13 at 12:17
$\begingroup$ @Dilip You didn't understand my answer. The reason the DFT is periodic in the time domain is because it is a Fourier series, which by definition describes a periodic function. Do I have to explain why a sine wave is periodic? If you don't 'get' an answer, that isn't reason to down vote it. Just don't choose it as an answer you like. $\endgroup$ – user2718 Apr 3 '13 at 14:02
$\begingroup$ @DilipSarwate: The only answer that is justified to be down voted (given people take time out of their lives to offer up answers) is one that is incorrect or deliberately misleading. Just my opinion as a working professional. $\endgroup$ – user2718 Apr 3 '13 at 14:06
$\begingroup$ Wow - Another down vote and the answer is essentially the same answer that gets 2 up votes. "The periodicity can be shown directly from the definition:" There must be some very strange visitors to this site. $\endgroup$ – user2718 Apr 10 '13 at 15:00
1. a crazy fish
In an aquarium of spherical shape with radius $r=10\;\mathrm{cm}$, completely filled with water, two identical fish swim in opposite directions. Each fish has a cross-sectional area of $S=5\;\mathrm{cm^2}$, a Newton drag coefficient $C=0.2$, and swims with a speed of $v=5\;\mathrm{km}\cdot h^{-1}$ relative to the water. How long do the fish have to swim in the aquarium to increase the temperature of the water by 1 °C?
thermodynamics, hydromechanics
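A rough numerical sketch of the estimate problem 1 asks for (not an official solution; it assumes Newton's drag F = ½CρSv², that all drag power from both fish is dissipated as heat in the water, and it neglects heat losses and the volume of the fish):

```python
# Back-of-the-envelope estimate for problem 1 under the stated assumptions.
import math

r = 0.10            # aquarium radius, m
S = 5e-4            # fish cross-sectional area, m^2
C = 0.2             # Newton drag coefficient
v = 5.0 / 3.6       # 5 km/h in m/s
rho = 1000.0        # water density, kg/m^3
c_w = 4186.0        # specific heat of water, J/(kg K)
dT = 1.0            # target temperature rise, K

m_water = rho * (4.0 / 3.0) * math.pi * r**3    # ~4.2 kg of water
P_fish = 0.5 * C * rho * S * v**2 * v           # drag force x speed, W
P_total = 2 * P_fish                            # two fish

t = m_water * c_w * dT / P_total                # required swimming time, s
print(f"P per fish ~ {P_fish:.2f} W, t ~ {t / 3600:.0f} h")   # ~0.13 W, ~18 h
```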
2. alchemist's apprentice
The young alchemist George has learnt to measure electrochemical equivalents. He measured quite precisely the electrochemical equivalent $A=(6.74±0.01)\cdot 10^{-7}\;\mathrm{kg}\cdot C^{-1}$ of an unknown sample. How can he determine what substance his sample was made of?
3. will it jump?
Consider a massless spring with spring constant $k$. Weights of masses $m$ and $M$, respectively, are attached to its two ends. The system is placed on a horizontal surface so that the weight of mass $M$ lies on the surface and the spring with the second weight points up. The system is in equilibrium (i.e., the top weight does not oscillate) and the length of the spring in this state is $l$. How much do we have to compress the spring so that the weight of mass $M$ jumps up when it is released? Consider only vertical motion.
4. break, break, break!
After we press the brake pedal, the car does not start to brake immediately. During the time $t_{r}$ the braking force grows linearly up to the maximum force $F_{m}$. The coefficient of static friction between the tire and the road is $f$. What is the maximum speed of the car such that it does not skid even during emergency braking?
5. running notebook
A notebook of size A4 (297 x 210 mm) lies on a desk with an inclination of $α=5°$. The notebook weighs $m$; the coefficient of static friction between the desk and the notebook is $f_{0}=0.52$. Then we hit the desk so it starts to oscillate (in the direction of the inclination of the desk) with a frequency $ν=10\;\mathrm{Hz}$ and an amplitude $A=1\;\mathrm{mm}$.
Determine the extra force (perpendicular to the desk) with which we have to act on the notebook so that it does not start to move.
Determine how long it takes the notebook to fall off the desk if at the beginning its bottom edge (the shorter one) is at the bottom edge of the desk. The dynamic friction coefficient is $f$; consider the notebook to be a rigid plate.
mechanics of rigid bodies, mechanics of a point mass
P. Lukas' hole
Lukas has been weightlifting and he managed to make a black hole of mass 1 kg. As he isn't too fond of quantum field theory in curved spacetime, the black hole does not radiate. Lukas drops this hole and it begins oscillating within the Earth. Try to estimate how long it would take for the mass of the black hole to double. Is it safe to make black holes at home?
E. hydrogel
Examine the dependence of the weight of a hydrogel ball on the time of submersion in water and on the concentration of salt dissolved in the water. Note: We do not send the experimental material abroad; therefore, the hydrogel you buy must be described in detail.
hydromechanics, chemistry
S. serial
All states of an ideal gas can be shown on various diagrams: a $pV$ diagram, a $pT$ diagram and so on. The first quantity is shown on the vertical axis, the second on the horizontal one. Every point therefore determines two parameters. Sketch on a $pV$ diagram the four processes with an ideal gas that you know. Do the same on a $Tp$ diagram. What would a $UT$ diagram look like? Explain how the unsuitability of these two variables would appear on the diagram.
What are the dimensions of entropy? What other quantities with the same dimensions do you know?
In the text for this series we analysed a case of entropy increasing as heat flows into a gas. Perform a similar analysis for the case of heat flowing out of the gas.
We know that entropy does not change during an adiabatic process. Therefore, the expression for entropy as a function of volume and pressure $S(p,V)$ can only contain a combination of pressure and volume that does not change during an adiabatic process.
What is this expression? Draw lines of constant entropy on a $pV$ diagram ($p$ on vertical axis, $V$ on horizontal). Does this agree with the expression for entropy we have derived?
Express the entropy of an ideal gas as functions $S(p,V)$, $S(T,V)$ and $S(U,V)$.
thermodynamics, gas mechanics, molecular physics
Phototransformation kinetics of cyanobacterial toxins and secondary metabolites in surface waters
Regiane Natumi1,
Sandro Marcotullio2 &
Elisabeth M.-L. Janssen ORCID: orcid.org/0000-0002-5475-67301
Cyanobacteria and their toxins occur in high concentrations during the so-called bloom events in surface waters. To be able to assess the risks associated with cyanobacterial blooms, we need to understand the persistence and fate processes of these toxins and other bioactive metabolites. In this study, we investigated the photochemical fate of 54 cyanopeptides extracted from two strains of Microcystis aeruginosa (PCC7806 and UV006), Planktothrix rubescens, and Dolichospermum flos aquae. We determined half-lives during sunlight exposure in lake water and inspected the effect of pH on transformation kinetics for 27 microcystins, 8 anabaenopeptins, 14 cyanopeptolins, 2 cyclamides, and 3 aeruginosins.
For cyanopeptides from D. flos aquae and P. rubescens, we observed the highest removal of 28 and 26%, respectively, after 3-h sunlight exposure. Most cyanopeptides produced by the two M. aeruginosa strains were rather persistent with only up to 3% removal. The more reactive cyanopeptides contained amino acids known to undergo phototransformation, including methionine and tyrosine moieties or their derivatives. Photochemical half-lives of 14 tyrosine-containing cyanopeptides decreased by one order of magnitude from nearly persistent conditions at pH 7 (half-life > 70 h) to shorter half-lives at pH 10 (< 10 h).
More work is needed to distinguish the contribution of different photochemical reaction pathways including the contributions to the pH effect. To the best of our knowledge, this is the first assessment of transformation kinetics of such a wide range of cyanopeptides. The abundant and persistent cyanopeptides that have not been studied in detail yet should be prioritized for the evaluation of their ecosystem and human health risks and for their abatement during drinking water treatment.
Under favorable environmental conditions, such as elevated nutrient concentrations and warm temperatures, cyanobacteria can proliferate to so-called bloom events [1,2,3]. There is a scientific consensus that cyanobacterial blooms are intensifying globally due to continued anthropogenic nutrient inputs and the effects of climate change on thermal and hydrological conditions [1, 4, 5]. Cyanobacteria blooms are also called "harmful" because they result in poor water quality and cyanobacteria are able to produce toxic metabolites including microcystins, cylindrospermopsins, anatoxins, and saxitoxins [6,7,8,9,10,11,12]. Over the past decades, a legion of additional bioactive secondary metabolites has been identified in laboratory cultures and biomass collected from cyanobacterial bloom events [13,14,15,16,17]. These compounds occupy a wide chemical space from 100 to 2500 Da with 65% being peptide-based metabolites, called cyanopeptides [18]. Cyanopeptides can be classified by structural similarities including microcystins, aeruginosins, anabaenopeptins, cyanopeptolins, and microginins. One major limitation for their identification is the absence of commercially available reference standards for most cyanopeptides. Consequently, the identification and absolute quantification of most cyanopeptides remain challenging. Few studies quantified absolute concentrations of cyanopeptides beyond the hepatotoxic class of microcystins in lake samples using commercially available bioreagents with slightly lower purity than reference materials or produced their own gravimetric reference materials [16, 19, 20]. These studies detected other cyanopeptides at similar concentrations and frequency as microcystins (µg L−1 range). Furthermore, a recent study demonstrated that the total load of cyanopeptolins and anabaenopeptins can be comparable to microcystins even at the intake of drinking water treatment plants and the concentrations correlated with abundance of cyanobacteria (cell count, chlorophyll-a) [21]. In addition to the release of toxins due to cell lysis, observable concentrations depend on the persistence of each cyanobacterial metabolite in surface water. While most research in the past decades focused on exploring new cyanobacterial metabolites, studies of environmental processes remain mostly elusive with the exception of few microcystins.
Microcystin LR can persist in surface water for several days or weeks [22,23,24,25]. Microbial degradation rates of microcystins can vary significantly across microbial species and can depend on their history of exposure to cyanobacterial blooms [24, 26, 27]. Although biodegradation has been reported to be a likely degradation pathway for microcystins, in general, there is a lag period of hours to weeks, which can be caused either by the initially low abundance of bacteria that are able to degrade microcystins or by a change in metabolic activity [22, 27]. In addition to biotic processes, sunlight-driven transformation of cyanopeptides can contribute to their fate in surface waters. Depending on the light penetration and on the naturally occurring photosensitizers, the reported photochemical half-life of microcystin LR ranges from days to months [28,29,30,31]. Organic matter in surface waters can act as photosensitizers when they absorb sunlight to form a triplet excited state molecule and subsequently produce other reactive species [32]. In the presence of pigments or dissolved organic matter, photosensitized reactions appear to play an important role in the environmental transformation of microcystins, including reactions with hydroxyl radicals and triplet state excited molecules [33,34,35]. Thus far, studies mostly focused on the fate of four microcystin variants (MC-LR, MC-RR, MC-YR, MC-LA), while 279 microcystins are known to date [36]. Even less is known about the fate of other cyanopeptides beyond microcystins.
To assess the health risks associated with bloom contaminated waters, more knowledge about the persistence of cyanopeptides in aquatic systems is essential. In this study, we investigated the photochemical fate of the main cyanopeptides produced by two strains of Microcystis aeruginosa as well as Planktothrix rubescens, and Dolichospermum flos aquae. We evaluated the removal during sunlight exposure in lake water and the effects of pH on transformation kinetics for microcystins, anabaenopeptins, cyanopeptolins, cyclamides, and aeruginosins. Our data indicate that some cyanopeptides are photochemically persistent and other compounds undergo phototransformation by direct or indirect photochemical reactions.
Experimental Section
Microcystin reference standards for MC-LR, MC-YR, MC-RR, MC-LF, MC-LA, MC-LW, MC-LY, and nodularin (all > 95% purity by HPLC) were obtained from Enzo Life Science (Lausen, Switzerland) and [D-Asp3,E-Dhb7]MC-RR (> 95% purity by HPLC) from CyanoBiotech GmbH (Berlin, Germany). Bioreagents for aeruginosin 98B, cyanopeptolin A, cyanopeptolin D, anabaenopeptin A, anabaenopeptin B, anabaenopeptin NZ857, and oscillamide Y (all > 90% purity by HPLC) were obtained from CyanoBiotech GmbH (Berlin, Germany). Aerucyclamide A was obtained as purified bioreagent in dimethyl sulfoxide by Prof. Karl Gademann (University of Zurich, Switzerland) [37]. Additional materials are listed in the Supporting Information (Additional file 1: Text S1).
Cyanobacterial cultures
Microcystis aeruginosa PCC7806 was originally isolated from Braakman reservoir in The Netherlands (1972) and was obtained from the Pasteur Culture Collection of Cyanobacteria (France). Dolichospermum flos aquae NIVA-CYA 269/6 was originally isolated from Lake Frøylandsvatnet in Norway (1990) and Planktothrix rubescens K-0576 was originally isolated from Lake Borre Sø in Denmark. Both strains were obtained from the Norwegian Culture Collection of Algae (NORCCA). Microcystis aeruginosa UV006 was originally isolated from Hartebeespoort Dam in South Africa and an inoculum was provided by Prof. Jakob Pernthaler (University of Zurich, Switzerland). Primary cultures were kept in 75-mL modified WC medium (Additional file 1: Table S1) at 20 ± 2 °C and irradiated at 12 μmol photons m−2 s−1 on a 12:12-h light/dark cycle [38]. To produce significant amount of biomass, 4.5 L of sterile WC medium was inoculated with 10–15% inoculum every four weeks. The 5-L Schott bottles were cultivated at the same conditions described above and aerated with filtered air (GE Healthcare, Whatman, HEPA-VENT, 0.3 µm). All materials used for culturing were autoclaved before use and all the subculturing was performed under sterile conditions.
Cyanopeptide extraction for photochemical experiments
The cells were harvested by centrifugation (rcf of 4000 g at 10 °C, 10 min, Herolab HiCen XL), lyophilized (− 40 °C, − 3 mbar, 24 h, Lyovac GT2, Leybold) and stored at − 20 °C until further analysis. For the extraction, the weight of the dry material was recorded and MeOH/H2O (70/30% v/v) was added at a ratio of 200 µL mgdry,wt−1. The suspension was homogenized by vortexing, incubated under sonication (VWR, Ultrasonic cleaner USC-THD, level 6, 10 min at 15 °C) and pellets were separated from the supernatant by centrifugation (rcf of 4660 g at 10 °C, 10 min, Megafuge 1.0 R). The supernatant was transferred to a new glass vial, the extraction was repeated twice and the supernatants were combined. The solvent was evaporated from the pooled extract under a gentle stream of nitrogen (40 °C, TurboVap®LV, Biotage) to reduce the methanol content to less than 5%. The extracts were then purified by liquid–liquid extraction (LLE). Therefore, extracts were diluted to a total volume of 25 mL with nanopure water in a separatory funnel containing 25 mL of hexane. The funnels were shaken vigorously for 3 min before allowing separation of the two phases again. The water phase was then collected and the hexane phase was discarded. The water phase was extracted with hexane two more times. The extraction removed a large portion of chromophoric matter (> 90% reduced absorbance at 665 and 613 nm, indicative of the dominating pigments chlorophyll-a and phycocyanin, respectively) that would otherwise interfere with the photochemical tests (absorbance spectra of extracts in Additional file 1: Figure S1). The water fraction was concentrated to 300 μL by vacuum-assisted evaporation (Syncore® Analyst R-12, BÜCHI Labortechnik AG, 40 °C, 120 rpm, 20 mbar). Each volume was adjusted gravimetrically to 1.0 mL in nanopure water and stored in the fridge at 4 °C if used within 24 h, or in the freezer at − 20 °C until its further use for photochemical experiments.
Cyanopeptide profile in four strains
To analyze the cyanopeptide profiles from different strains, the biomass was extracted as described above with the difference that MeOH/H2O (70/30% v/v) was added at a ratio of 15 µL mgdry,wt−1. The extracts were then individually purified by solid phase extraction (SPE). For the SPE (12-fold vacuum extraction box, Visiprep, 12 ports, Sigma Aldrich) the extracts were diluted to a total volume of 3 mL with nanopure water. The SPE cartridges (Oasis HLB 3 cc, 60 mg) were consecutively conditioned with methanol and water (9 mL each). The extracts were loaded onto the cartridges, washed with 9 mL nanopure water followed by 9 mL MeOH/H2O (20/80% v/v) prior to elution with 9 mL MeOH/H2O (85/15% v/v) at a flow rate of 1 mL min−1. The eluted fraction was concentrated to 300 μL by vacuum-assisted evaporation (Syncore® Analyst R-12, BÜCHI Labortechnik AG, 40 °C, 120 rpm, 20 mbar), and each volume was adjusted gravimetrically to 1.0 mL in nanopure water. The cyanopeptides were analyzed as described below.
Simulated sunlight exposure
Irradiation experiments were carried out in a benchtop xenon instrument that simulates sunlight (Heraeus, Suntest CPS+, 700 W m−2, light emission spectrum in Additional file 1: Figure S2). Cyanopeptide degradation over time was studied in lake matrix, using lake water collected from Greifensee (06/08/2019, 47.3663°N, 8.665°E), and in buffered nanopure water at pH 7 and 8 (5-mM phosphate buffer) and pH 9 and 10 (10-mM carbonate buffer) with constant ionic strength (13 mM, adjusted with sodium chloride). Aqueous cyanobacterial extracts (200 µL) were added to either lake matrix or buffer solutions (total volume of 4 mL). Furfuryl alcohol (FFA, 40 µM) was added for quantification of singlet oxygen. The solutions were exposed to simulated sunlight for three hours in open quartz vials (Pyrex, 7.5 cm, inner diameter 1 cm), positioned at a 50° angle from the horizontal plane, ensuring that the solutions were completely submerged in a temperature-controlled water bath (20 °C ± 1 °C). The experiments were conducted in experimental duplicate for the buffered solutions and in triplicate for the lake matrix. To monitor the light flux, the chemical actinometer system PNA-PYR was used and solutions in nanopure water (10 µM para-nitroanisole, 0.5 mM pyridine) were irradiated along with the experiment in simulated sunlight. During the irradiation experiment, three technical replicates of 150 µL were collected at time point 0 and two technical replicates were collected at each later time point (0.5, 1, 2, 3 h). To account for transformation independent of light, aliquots of each solution in glass vials were covered from light with tin foil and positioned next to the other exposure vials, serving as dark controls. All the samples were immediately analyzed for FFA and PNA upon sampling. These samples were then frozen at − 20 °C for further cyanopeptide analysis as detailed below.
FFA and PNA analysis
To assess the steady-state concentration of singlet oxygen [1O2]ss and the photon fluence rate, the degradation of FFA and PNA was monitored, respectively. Both FFA and PNA analyses were performed by high-performance liquid chromatography (HPLC) coupled to a UV–VIS/DAD detector (Dionex UltiMate3000 HPLC, Thermo Fischer Scientific). Chromatographic separation was carried out on an Atlantis T3 C18 column (3 µm, 3 × 150 mm, Waters) with pre-column (VanGuard® Cartridge, Waters) and inline filter (BGB®). The mobile phases consisted of (A) sodium acetate buffer (pH 5.9; 15.6 mM; 10%ACN) and (B) acetonitrile. Isocratic elution was carried out at a flow rate of 350 µL min−1 with an isocratic ratio of 90:10 (A:B) for FFA and 40:60 (A:B) for PNA. The injection volume was 20 µL and detection occurred at 219 nm for FFA and at 316 nm for PNA. Measured peak areas of both FFA and PNA at each timepoint (At) were normalized to their initial concentration (A0) and the natural logarithm of this ratio was plotted against time. The observed degradation rate constants (kobs in s−1) were calculated as the slope of a linear regression. Steady-state concentrations of singlet oxygen [1O2]ss (M) were determined as [39]:
$$\left[ {}^{1}\mathrm{O}_2 \right]_{ss} = \frac{k_{\mathrm{obs,FFA}}}{k_{\mathrm{rxn,FFA}}},$$
where kobs,FFA (s−1) is the observed degradation rate constant of FFA of each sample vial and krxn,FFA (M−1 s−1) is the temperature-dependent second-order reaction rate constant of FFA with singlet oxygen that can be calculated according to:
$$\ln k_{\mathrm{rxn,FFA}} = \frac{-(1.59 \pm 0.06)\times 10^{3}}{273.16 + T\,[^{\circ}\mathrm{C}]} + (23.82 \pm 0.21)$$
The photon fluence rate was calculated based on the kobs of PNA in the actinometer solutions according to established procedures (details in Additional file 1: Text S2) [40].
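As a numerical sketch of the two equations above (in Python; the FFA time series is synthetic and generated only for illustration, and the variable names are ours, not from the study):

```python
# Sketch of the FFA-based singlet-oxygen calculation described above.
# The FFA peak areas are synthetic (generated from an assumed kobs) and
# serve only to illustrate the two equations.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0]) * 3600      # sampling times, s
k_assumed = 1.5e-5                                   # assumed kobs for the demo, s^-1
A_ratio = np.exp(-k_assumed * t)                     # synthetic At/A0 of FFA

# kobs,FFA: negative slope of ln(At/A0) versus time
k_obs_FFA = -np.polyfit(t, np.log(A_ratio), 1)[0]

# krxn,FFA(T): temperature-dependent second-order rate constant with 1O2
T_C = 20.0
k_rxn_FFA = np.exp(-1.59e3 / (273.16 + T_C) + 23.82)  # ~1e8 M^-1 s^-1

O2_ss = k_obs_FFA / k_rxn_FFA                        # steady-state [1O2], M
print(f"kobs,FFA = {k_obs_FFA:.2e} s^-1, [1O2]ss = {O2_ss:.1e} M")
# -> ~1.5e-13 M, the order of magnitude reported below for the lake-water test
```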
Cyanopeptide analysis
Cyanopeptide analysis was performed by HPLC (Dionex UltiMate3000 RS pump, Thermo Fischer Scientific) coupled to a high-resolution tandem mass spectrometer (HRMS/MS, LumosFusion Orbitrap, ThermoFisher Scientific). Chromatographic separation was carried out on an XBridgeTM C18 column (3.5 μm, 2.1 × 50 mm, Waters) with pre-column (VanGuard® Cartridge, Waters) and inline filter (BGB®). The mobile phases consisted of (A) nanopure water and (B) methanol, both acidified with formic acid (0.1%). Binary gradient elution was carried out at a flow rate of 200 μL min−1, increasing eluent B from 10 to 95% between 0 and 25 min. The injection volume was 20 μL. Detection of analytes was achieved by HRMS/MS with electrospray ionization (ESI), 320 °C capillary temperature, 4 kV electrospray voltage and 3500 V capillary voltage in positive ionization mode. Full scan accurate mass spectra were acquired from 450 to 1350 m/z with a nominal resolving power of 240,000 referenced at m/z 250, automated gain control (AGC) of 5⋅10^4, and maximal injection time of 100 ms with 1 ppm mass accuracy. Data-dependent high-resolution product ion spectra were obtained by stepped normalized collision energy for HCD (10, 20, 30, 40 and 50%) and CID (30 and 35%), at a resolving power of 15,000 at 400 m/z, AGC of 1⋅10^4 and maximal injection time of 22 ms. For triggering data-dependent MS/MS acquisition, we included cyanopeptides from the publicly available list CyanoMetDB [18]. The suspect screening included 1219 cyanopeptides in total, with 160 microcystins, 177 cyanopeptolins, 73 anabaenopeptins, 65 cyclamides, 78 microginins, 79 aeruginosins and 587 other compounds, accounting for structural isomers and the mass window of 450–1350 m/z.
Cyanopeptide identification
Data evaluation and peak area extraction were performed with Skyline 20.1 (MacCoss Lab Software). Charge states (z = 1 and z = 2) and adducts (H+, Na+) were considered for all compounds. The identification of most cyanopeptides needed to be carried out without available reference standard materials. Thus, a comprehensive data analysis workflow established for suspect screening of micropollutants was modified and applied [41]. One major difference to micropollutant suspect screening is the fact that no spectral libraries exist for most cyanopeptide suspects. Therefore, we used in-silico fragmentation predictions to facilitate compound identification (Mass Frontier 7.0, mMass 5.5.0) and the confidence level scheme widely used for mass spectrometry by Schymanski et al. [41]. Herein, only those cyanopeptides were reported that met one of the following criteria: a cyanopeptide was identified as a tentative candidate (Level 3) based on exact mass (< 5 ppm mass error), accurate isotopic pattern (Skyline idotp value > 0.9), and evidence from fragmentation data; a cyanopeptide was identified as a probable structure (Level 2) based on complete fragmentation information confirming the connectivity of the building blocks of the peptide; and a cyanopeptide was identified as a confirmed structure (Level 1) when these parameters were in agreement with an available reference standard or bioreagent. The fragmentation spectra of reference standards (or bioreagents) were compared to confirmed structures in our experiments (i.e., cyanobacteria extracts) with head-to-tail plots using the R packages RMassBank [42] and MSMSsim [43] (Additional file 1: Figures S3–S15). Data analysis was performed in RStudio with R version 3.6.1 [44]. For this, the HRMS measurement data files were converted to the open .mzXML data format using the msconvert tool from ProteoWizard [45].
The peak areas of selected ion chromatograms were extracted in Skyline (Version 20.1) for all cyanopeptides identified with Level 1–3. For all cyanopeptides the M + H ion was dominating, with the exception of microcystins that contain two arginine moieties, for which the M + 2H ion was selected for area extraction. The identified cyanopeptides were quantified by external calibration curves of available reference standards and bioreagents in the range of 0.5–500 µg L−1. Concentrations were only reported when the peak area was above the limit of quantification (LOQ), defined as 10 times the ratio of the standard deviation of the response over the slope of the logarithmic calibration curve (details in Additional file 1: Table S2). For those cyanopeptides for which no reference standard or bioreagent was available, we report class-specific equivalents, calculated from external calibration curves of the structurally most similar bioreagent or standard assigned to each compound (details in Additional file 1: Table S3), according to previous work [46].
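The LOQ definition above can be written out as a short sketch (calibration points are invented for illustration; under this literal reading the resulting quantity is in the log-transformed units of the calibration):

```python
# Sketch of the LOQ definition quoted above: 10 x (standard deviation of the
# response / slope) of the logarithmic calibration curve.
# Calibration points below are invented for illustration only.
import numpy as np

conc = np.array([0.5, 5.0, 50.0, 500.0])              # standards, ug/L
area = np.array([1.1e4, 1.0e5, 9.8e5, 1.0e7])         # measured peak areas

slope, intercept = np.polyfit(np.log10(conc), np.log10(area), 1)
resid = np.log10(area) - (slope * np.log10(conc) + intercept)
sd_response = np.std(resid, ddof=1)

loq_log = 10 * sd_response / slope                    # in log10 units here
print(f"slope = {slope:.3f}, 10*sd/slope = {loq_log:.3f} (log10 units)")
```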
Photodegradation assessment
All first-order degradation rate constants, kobs (s−1), were assessed as the slope of a linear regression of natural log-transformed normalized peak area (ln(At/A0)) versus irradiation time. A kobs was only reported for regressions with a correlation coefficient r2 > 0.6 and when the final concentration after irradiation was statistically significantly different from both, the dark control and initial concentration (t-test, p-value < 0.05). If a compound degraded (i.e., significant difference between final concentration and dark control and initial concentration) but the loss did not follow pseudo-first-order kinetics (i.e., r2 < 0.6), we report "no first-order kinetics" or "n.f.k.". If we did not observe any significant loss of concentration relative to the dark control and the initial concentration during irradiation, we report "no degradation detected" or "n.d.".
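A minimal sketch of this acceptance logic (in Python with synthetic, illustrative peak areas; the comparison against the initial concentration is handled analogously to the dark-control t-test shown):

```python
# Sketch of the reporting criteria described above: pseudo-first-order fit
# (r^2 > 0.6) plus a significant difference from the dark control (p < 0.05).
# All peak areas are synthetic, for illustration only.
import numpy as np
from scipy import stats

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0])              # irradiation time, h
A_ratio = np.array([1.00, 0.93, 0.87, 0.76, 0.66])   # normalized areas At/A0
dark_3h = np.array([0.99, 1.01, 1.00])               # dark-control replicates at 3 h
irr_3h = np.array([0.65, 0.66, 0.67])                # irradiated replicates at 3 h

fit = stats.linregress(t, np.log(A_ratio))
k_obs = -fit.slope                                   # h^-1
r2 = fit.rvalue ** 2
p_dark = stats.ttest_ind(irr_3h, dark_3h).pvalue

if r2 > 0.6 and p_dark < 0.05:
    print(f"kobs = {k_obs:.3f} h^-1, half-life = {np.log(2) / k_obs:.1f} h")
elif p_dark < 0.05:
    print("degrades, but n.f.k. (no first-order kinetics)")
else:
    print("n.d. (no degradation detected)")
```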
One-way analysis of variance (ANOVA) followed by Tukey pairwise comparison was employed to detect a statistically significant influence of amino acid moieties and cyanopeptide class on the observed degradation rate constants, by comparison of the 95% confidence intervals in RStudio (version 3.6.1).
Cyanopeptide profiles in four strains
We investigated the cyanopeptide profiles of four cyanobacterial strains, D. flos aquae (NIVA-CYA 269/6), M. aeruginosa PCC7806, M. aeruginosa UV006, and P. rubescens (K-0576), by suspect screening. We identified 92 different cyanopeptides in total: 39 in D. flos aquae, 41 in M. aeruginosa PCC7806, 40 in M. aeruginosa UV006 and 25 in P. rubescens (details in Additional file 1: Table S4). The total cyanopeptide concentration normalized to dried biomass varied from 8 to 34 µg mg−1, with D. flos aquae being the strain with the highest cyanopeptide content (Fig. 1a). Data in Fig. 1b show the relative cyanopeptide profiles; all strains produced microcystins, with contributions ranging from 8 to 19%. The cyanopeptide profiles of D. flos aquae and P. rubescens were dominated by anabaenopeptins, with contributions of 78 and 63% to the total cyanopeptide pool, respectively. The profiles of the M. aeruginosa strains were dominated by cyanopeptolins (84% in UV006 and 50% in PCC7806). Cyclamides were only produced by M. aeruginosa PCC7806, with 42% relative abundance. These results support previous observations that cyanobacteria are able to produce a variety of different cyanopeptides and that the cyanopeptide profile is specific not only to a species but also to individual strains, as the example of Microcystis shows here [17, 46,47,48,49].
Cyanopeptide profiles of four cyanobacterial strains, Dolichospermum flos aquae, Microcystis aeruginosa PCC7806, Microcystis aeruginosa UV006 and Planktothrix rubescens, as (a) cyanopeptide concentration per dried biomass (µg mg−1 dry wt) and (b) relative abundance of cyanopeptide classes including aeruginosins (green), anabaenopeptins (blue), cyanopeptolins (orange), cyclamides (purple), microcystins (yellow) and microginins (red). The number of total cyanopeptides produced by each strain is reported to the side of each bar (#)
Fate of cyanopeptides in sunlit surface waters
Cyanopeptides may undergo several transformation pathways once they are released to the surface water when cyanobacterial cells lyse. Since cyanobacterial bloom events occur predominantly during the summer period, phototransformation may be an important fate pathway. After purification of extracts from all four strains, we followed the degradation of 54 cyanopeptides during exposure to simulated sunlight in natural lake water that had a dissolved organic carbon content of 5.5 mgC L−1 and pH 9 (Lake Greifensee water). Data in Fig. 2 show the relative abundance of the dominating cyanopeptides (> 1%) in each strain and the loss after 3 h of sunlight exposure (hashed portion of the bars). Cyanopeptides from D. flos aquae and P. rubescens showed the highest removal with 28 and 26%, respectively. Most cyanopeptides produced by the two M. aeruginosa strains were rather persistent with only up to 3% removal in total. To the best of our knowledge, this is the first assessment of transformation that considers not only 27 microcystin variants but also 27 cyanopeptides beyond microcystins in sunlit surface water. The simulated sunlight matched the natural sunlight spectrum and the absolute intensity was approximately half of the near-surface radiation in July 2013 in Zurich, Switzerland (Additional file 1: Text S2, Figure S2), which allows comparison of the observed rates to other light conditions of interest.
Cyanopeptide profiles before and after exposure to simulated sunlight (pH 9.3; DOM 5.5 mgC L⁻¹) in relative abundance (%) for a Microcystis aeruginosa PCC7806; b Dolichospermum flos aquae; c Microcystis aeruginosa UV006; d Planktothrix rubescens. The hashed portion of the bars represents the percentage removal of each cyanopeptide after 3 h exposure to simulated sunlight. The total percentage removal of the whole cyanopeptide pool for each species is represented by the bar at the top of each chart (dark blue). Only cyanopeptides that contributed more than 1% to the total cyanopeptide pool are represented here. Compound names indicated as "group" refer to structural isomers detailed in Table S5, and for the compounds marked with an asterisk (Microcystin-(H4)YR-Group-1035, Microcyclamide 7806B, Aerucyclamide B, Aerucyclamide C) the degradation could not be followed in the presented experiment
Under solar-simulated conditions, some cyanopeptides did not show measurable degradation, a few degraded rapidly, and the majority degraded rather slowly. In the following, we inspected the degradation kinetics and amino acid composition of different cyanopeptides in more detail.
Phototransformation kinetics of cyanopeptides
We followed 54 cyanopeptides during exposure to simulated sunlight in lake water. During three hours of exposure, 37 cyanopeptides degraded with half-lives of 5–14 h and 2 cyanopeptides degraded rather fast with half-lives of < 4 h, while 14 cyanopeptides did not follow pseudo-first-order kinetics (n.f.k.) and 1 cyanopeptide did not show any measurable degradation (n.d.); all rates are tabulated in Additional file 2: Table S6 (for data plots see Additional file 1: Figure S16). The overview in Fig. 3 shows that cyanopeptides within each class share a common core structure with variable building blocks, which can affect their susceptibility to transformation reactions. To understand the differences in observed reactivity among cyanopeptides, we inspected these structures alongside the photochemical rates observed herein.
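As an illustration of how the tabulated rate constants and half-lives are typically derived, the following R sketch fits a pseudo-first-order model to hypothetical peak areas; all numbers are invented.

```r
# Pseudo-first-order fit per cyanopeptide: regress log peak area on
# irradiation time; kobs is the negative slope.
time_min  <- c(0, 30, 60, 90, 120, 150, 180)
peak_area <- c(1.00, 0.95, 0.90, 0.86, 0.82, 0.78, 0.74) * 1e6  # invented

fit    <- lm(log(peak_area) ~ time_min)   # ln(A) = ln(A0) - kobs * t
kobs_s <- -coef(fit)[["time_min"]] / 60   # slope in min^-1 converted to s^-1
log(2) / kobs_s / 3600                    # half-life in hours (~7 h here)
summary(fit)$r.squared                    # linearity check for first-order decay
```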
Chemical structures of cyanopeptides of the classes anabaenopeptins, cyanopeptolins, cyclamides, microcystins, and aeruginosins. The characteristic core structure of each cyanopeptide class is highlighted in black, while the variable building blocks of the molecule are depicted in gray. Main peptides produced by the four cyanobacterial strains (> 1% to total abundance) are listed with their amino acid building block strings in addition to the represented structure above. The photochemically reactive moieties of tyrosine and methionine are highlighted in red and blue, respectively. The light absorbance range and the second-order reaction rate constants with hydroxyl radical and singlet oxygen for tyrosine and methionine are listed
Direct phototransformation can occur when cyanopeptides absorb light in the solar spectrum (> 280 nm wavelength), which requires the presence of chromophoric moieties. Tryptophan is the most reactive amino acid for direct photochemical processes but does not occur in the cyanopeptides studied herein [50, 51]. In the absence of chromophores, cyanopeptides may still undergo indirect photochemical transformations. Chromophoric dissolved organic matter (CDOM) present in the surface water can absorb sunlight, and its excited triplet state (³CDOM*) can react with cyanopeptides. In addition, ³CDOM* can produce reactive oxygen species, such as singlet oxygen and hydroxyl radicals. Hydroxyl radical is a non-selective oxidant that can oxidize most amino acids with up to diffusion-controlled reaction rate constants but shows relatively low steady-state concentrations in surface waters of 10⁻¹⁵ to 10⁻¹⁷ M [52,53,54]. Singlet oxygen is a more selective oxidant known to oxidize the amino acids tyrosine, methionine, histidine, tryptophan and cysteine [54]. Together with its significantly higher steady-state concentrations in sunlit surface waters, ranging from 10⁻¹² to 10⁻¹⁴ M [55, 56], singlet oxygen may be responsible for differences in observed transformation rates among cyanopeptides. The concentration of singlet oxygen in the presented experiment (Lake Greifensee water, pH 9) was 1.4 ± 0.05 × 10⁻¹³ M, which is comparable to previous observations of surface waters and supports the possibility that reactions with cyanopeptides could have occurred.
Data in Fig. 4a show the reactivity of several microcystins in irradiated lake water. MC-LA was the slowest-degrading microcystin, followed by MC-LR and MC-RR-group-1024 (list of isomers in Additional file 1: Table S5), with half-lives ranging from 14 to 8 h. These microcystins contain leucine (L) in position 2, arginine (R) or alanine (A) in position 4, and no chromophoric moieties that absorb light in the solar spectrum. Accordingly, previous work observed no significant direct phototransformation for similar microcystins in sunlight [29, 30, 57]. However, microcystins can degrade in the presence of the CDOM contained in the lake matrix [33, 34]. The class of microcystins is characterized by the Adda moiety, (2S,3S,4E,6E,8S,9S)-3-amino-9-methoxy-2,6,8-trimethyl-10-phenyldeca-4,6-dienoic acid, or derivatives thereof. Previously, the majority of MC-LR photodegradation (approx. 60%) was attributed to the reaction of the Adda side chain with triplet sensitizers, leading to double-bond isomerization and the formation of 6(Z)-Adda-MC-LR [29, 33]. Since we used mass spectrometry as the detection method, we were not able to differentiate between these Adda isomers, and hence isomerization cannot explain the loss observed herein. Singlet oxygen, in theory, can also react with the double bonds of the Adda side chain, but previous work suggests that the contribution of singlet oxygen to the phototransformation of MC-LR is only minor [33, 34]. In addition, hydroxyl radicals can react with the Adda side chain by addition to the aromatic ring and diene, and this pathway dominated the photochemical reaction of MC-LR apart from isomerization [58]. The second-order rate constant for the reaction of hydroxyl radical with MC-LR was determined to be 2.3 × 10¹⁰ M⁻¹ s⁻¹ [58], and 6–18% of total MC-LR degradation in surface waters has previously been attributed to the reaction with hydroxyl radical [34]. Second-order reaction rate constants of individual amino acid moieties with hydroxyl radicals are available in the literature [51, 59]. The increased reactivity from MC-LA to MC-LR and MC-RR observed here during sunlight exposure in lake water agrees with the increase in second-order reaction rate constants with hydroxyl radicals of the amino acids alanine (A, 0.08 × 10⁹ M⁻¹ s⁻¹), leucine (L, 1.7 × 10⁹ M⁻¹ s⁻¹) and arginine (R, 3.5 × 10⁹ M⁻¹ s⁻¹). Data in Fig. 4a also show the slightly more reactive microcystins MC-LM, MC-MR-group-1013.51 (list of isomers in Additional file 1: Table S5), and MC-YR with half-lives of 6–7 h. These microcystin variants contain a methionine (M) or tyrosine (Y), which react even faster with hydroxyl radicals (7.5 × 10⁹ and 13 × 10⁹ M⁻¹ s⁻¹, respectively) [59]. Methionine and tyrosine also react with singlet oxygen, with second-order reaction rate constants as freely dissolved amino acids of 1.6 × 10⁷ and 0.8 × 10⁷ M⁻¹ s⁻¹, respectively [60]. Here, we observed the formation of the oxidation product [D-Asp-3]MC-M(O)R, which further supports the conclusion that methionine oxidation by singlet oxygen took place (Additional file 1: Figure S17). Given the singlet oxygen concentration of 1.4 ± 0.05 × 10⁻¹³ M during exposure and the second-order rate constant, the expected half-life of methionine based on reaction with singlet oxygen alone would be 83 h. The difference between this estimate and the observed rates is likely related to several factors, including additional indirect phototransformation reactions that may occur (e.g., reactions with hydroxyl radicals or ³CDOM*) [54].
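The expected half-life quoted above follows directly from t½ = ln 2/(k₂[¹O₂]); a few R lines reproduce the arithmetic, with the small difference from the 83 h in the text attributable to rounding.

```r
# Expected half-life of methionine from reaction with singlet oxygen alone,
# using the values quoted in the text
k2_met <- 1.6e7     # M^-1 s^-1, methionine + singlet oxygen (free amino acid)
c_1O2  <- 1.4e-13   # M, steady-state singlet oxygen during the exposure
kobs   <- k2_met * c_1O2        # pseudo-first-order rate constant, s^-1
log(2) / kobs / 3600            # half-life in hours, ~86 h
```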
The fact that most photochemical half-lives of microcystins decreased from nearly persistent in buffered nanopure water at pH 9 to short half-lives in lake water (pH 9, Additional file 1: Figure S18) further supports the conclusion that additional indirect phototransformation reactions contributed to cyanopeptide decay in the presence of the lake matrix. Among microcystins, only MC-YR contributed significantly to the overall removal of cyanopeptides from D. flos aquae, as other microcystins either degraded too slowly (MC-RR variants, MC-LR) or did not account for a high share of the cyanopeptide profile (MC-LA, MC-MR, MC-LM, Fig. 2).
Cyanopeptide degradation during exposure to simulated sunlight at pH 9 in Lake Greifensee water as the natural log-transformed peak area versus irradiation time (min) a for microcystins: MC-LA (gray), MC-LR (black), MC-RR-group-1024 (blue), MC-LM (green), MC-YR (pink) and MC-MR-group-1013.51 (orange); and b for aeruginosin 298A (green), aerucyclamide A (black), aerucyclamide D (pink), and the cyanopeptolin oscillapeptin J (blue). The filled symbols represent dark controls and the error bars represent one standard deviation
Data in Fig. 4b present examples of degradation in sunlight for cyanopeptides beyond microcystins, including the rather stable aerucyclamide A and the quite reactive aerucyclamide D. Cyclamides are cyclic penta-peptides characterized by oxazole/oxazoline and thiazole/thiazoline moieties and are produced by M. aeruginosa PCC7806 in high abundance [37, 61]. Aerucyclamide A was stable during sunlight exposure, in agreement with its lack of light absorbance in the solar spectrum and the very low reaction rate constant with singlet oxygen reported earlier (1.2 × 10⁴ M⁻¹ s⁻¹) [62]. Aerucyclamide D, on the other hand, degraded significantly, with a half-life of 3.4 h, which we attribute to the presence of a methionine moiety in position 6 that differentiates it from aerucyclamide A (Fig. 3). M. aeruginosa PCC7806 presented the lowest removal of total cyanopeptides (1.5%) and is the only strain with a high percentage of cyclamides in its profile. Although observed decay constants could not be obtained for aerucyclamide B, aerucyclamide C and microcyclamide 7806B, due to poor recovery during the purification process required for the experiments, we would not expect significant degradation since these peptides do not contain any photolabile building blocks.
Tyrosine-containing peptides contributed most to the overall high removal of cyanopeptides from D. flos aquae and P. rubescens. Data in Fig. 4 show the degradation of tyrosine-containing cyanopeptides of three different classes: MC-YR, aeruginosin 298A and the cyanopeptolin oscillapeptin J, with half-lives of 6–8 h. Similar rates were observed for a total of 17 cyanopeptides that contain a tyrosine or tyrosine-like moiety, including 4 microcystins, 3 aeruginosins, 3 cyanopeptolins and 7 anabaenopeptins (Additional file 2: Table S6). Furthermore, statistical analysis showed that cyanopeptides containing the amino acids tyrosine (or its variants) and methionine are more reactive than cyanopeptides that do not contain these moieties, while there was no significant influence of cyanopeptide class on the degradation rate (Additional file 1: Figures S19–S20). The pH dependence of tyrosine-containing cyanopeptides and their decay in sunlit surface waters are discussed in more detail below.
Effect of pH on phototransformation of cyanopeptides
The majority of the experiments reported in the literature that evaluate the degradation of organic molecules in surface waters are carried out at pH 7. However, during cyanobacterial bloom events, the pH of surface waters commonly increases to 8–9 [16]. Here, we therefore also studied the effect of pH on the phototransformation of cyanopeptides. The cyanopeptides were exposed in buffered water containing the dissolved organic matter that remained from the purified cyanopeptide extracts, so that both direct and indirect phototransformation could occur. The singlet oxygen concentrations ranged from 0.7 to 1.3 × 10⁻¹³ M between pH 7 and 10 and were similar to conditions in the experiment with lake water (1.4 × 10⁻¹³ M). Overall, we observed an increase of phototransformation rates with increasing pH for 14 cyanopeptides, all of which contained a tyrosine or structurally related moiety (homotyrosine, Htyr; 2-hydroxy-3-(4′-hydroxyphenyl)acetic acid, Hpla; N-methylated tyrosine, NMeTyr). Data in Fig. 5 show the observed rate constants for MC-YR, oscillapeptin J and anabaenopeptin B, three cyanopeptides that contain a tyrosine moiety. The reaction rates increased with pH, and the half-lives of these tyrosine-containing cyanopeptides decreased by one order of magnitude, from nearly persistent at pH 7 (half-life > 70 h) to < 10 h at pH 10. In comparison, microcystin LR did not show detectable degradation at any pH when no dissolved organic lake matrix was present (compare to data in Fig. 4a with lake matrix).
Observed decay rate constant (kobs) versus pH in buffered nanopure water for anabaenopeptin B (blue diamonds), oscillapeptin J (red triangles), MC-YR (yellow squares) and MC-LR (gray circles). Error bars represent one standard deviation
Tyrosine is known to undergo both direct and indirect phototransformation in sunlit surface waters, and both processes are pH dependent. On the one hand, the absorbance spectrum of the phenolate form of tyrosine shifts towards longer wavelengths, centered at 290 nm instead of 265 nm for the protonated species [63]. Consequently, tyrosine has a larger spectral overlap with sunlight at higher pH, resulting in increased direct phototransformation. On the other hand, indirect photochemical reactions of tyrosine are also influenced by pH. The reaction with singlet oxygen is more than one order of magnitude faster at alkaline pH, when tyrosine is deprotonated, with 0.8 × 10⁷ M⁻¹ s⁻¹ at pH 7 compared to 35 × 10⁷ M⁻¹ s⁻¹ at pH 10 [64]. These two factors combined explain the pH-dependent behavior of these cyanopeptides. Further studies are necessary to clearly delineate the contributions of direct and indirect phototransformation to this pH effect for tyrosine-containing cyanopeptides by assessing reaction rate constants and quantum yields that can be compared to available data for freely dissolved tyrosine.
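A small numerical sketch makes the speciation argument concrete: the fraction of the tyrosine side chain present as the more photoreactive phenolate follows the Henderson-Hasselbalch relation. The pKa of ~10.1 used here is that of the free tyrosine side chain and is an assumption for peptide-bound residues.

```r
# Phenolate fraction of the tyrosine side chain as a function of pH;
# pKa ~10.1 (free tyrosine side chain) is an assumed value
frac_phenolate <- function(pH, pKa = 10.1) 1 / (1 + 10^(pKa - pH))

data.frame(pH = 7:10, phenolate = signif(frac_phenolate(7:10), 3))
# Rises from ~0.08% at pH 7 to ~44% at pH 10, mirroring the order-of-magnitude
# decrease in half-lives of tyrosine-containing cyanopeptides reported here
```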
We assessed the concentration change of 54 cyanopeptides during 3 h of exposure to simulated sunlight in lake water. Of those, 37 cyanopeptides degraded with half-lives of 5–14 h, and 2 cyanopeptides degraded rather fast with half-lives of < 4 h. Overall, the total cyanopeptide pools of D. flos aquae and P. rubescens presented the highest removal, with 28 and 26%, respectively. Most cyanopeptides produced by the two M. aeruginosa strains were rather persistent, with only up to 3% removal. To the best of our knowledge, this is the first assessment of transformation kinetics in sunlit surface water that considered not only 27 microcystin variants but also 27 other cyanopeptides. Knowing the stability of cyanopeptides under environmental conditions helps to predict which cyanopeptides are more likely to reach drinking water treatment plants. Those abundant and persistent cyanopeptides should be prioritized for the evaluation of their abatement during water treatment and for further toxicological assessments.
We further assessed the degradation kinetics, with respect to building blocks, of the cyanopeptides known to undergo indirect photochemical reactions. The increased reactivity from MC-LA to MC-LR, MC-RR, MC-LM, MC-MR and MC-YR generally agrees with the increase in the second-order reaction rate constants of the corresponding amino acids with hydroxyl radicals. Methionine in microcystins and aerucyclamide D resulted in shorter half-lives in sunlit lake water, consistent with its relatively fast reactivity with singlet oxygen and hydroxyl radicals. Tyrosine and structurally related moieties are photochemically reactive building blocks commonly found in cyanopeptides across the classes of anabaenopeptins, cyanopeptolins, microcystins and aeruginosins. Analogous to the known pH-dependent photochemistry of freely dissolved tyrosine, the photochemical half-lives of 14 tyrosine-containing cyanopeptides decreased by one order of magnitude, from nearly persistent at pH 7 (half-life > 70 h) to < 10 h at pH 10. This pH dependence is an important finding and needs to be considered when evaluating the degradation rates of cyanotoxins and cyanopeptides in surface waters. Most experiments that evaluate the degradation of organic molecules in surface waters are conducted around pH 7, which can underestimate the actual transformation rates during cyanobacterial bloom events that typically occur at higher pH. More work is needed to differentiate the contributions of distinct photochemical reaction pathways to the pH effect.
Supplementary information is available, including an additional data file containing Additional file 2: Table S6. The datasets obtained and analysed in the current study are available from the corresponding author on reasonable request.
Huisman J et al (2018) Cyanobacterial blooms. Nat Rev Microbiol 16(8):471–483
Kosten S et al (2012) Warmer climates boost cyanobacterial dominance in shallow lakes. Glob Change Biol 18(1):118–126
Beaulieu M, Pick F, Gregory-Eaves I (2013) Nutrients and water temperature are significant predictors of cyanobacterial biomass in a 1147 lakes data set. Limnol Oceanogr 58(5):1736–1746
Cavicchioli R et al (2019) Scientists' warning to humanity: microorganisms and climate change. Nat Rev Microbiol 17(9):569–586
Paerl HW, Huisman J (2008) Climate—blooms like it hot. Science 320(5872):57–58
Pouria S et al (1998) Fatal microcystin intoxication in haemodialysis unit in Caruaru, Brazil. Lancet 352(9121):21–26
Merel S et al (2013) State of knowledge and concerns on cyanobacterial blooms and cyanotoxins. Environ Int 59:303–327
Janssen EML (2019) Cyanobacterial peptides beyond microcystins—a review on co-occurrence, toxicity, and challenges for risk assessment. Water Res 151:488–499
Agha R, Quesada A (2014) Oligopeptides as biomarkers of cyanobacterial subpopulations toward an understanding of their biological role. Toxins 6(6):1929–1950
Carmichael WW (2001) Health effects of toxin-producing cyanobacteria: "The CyanoHABs." Hum Ecol Risk Assess 7(5):1393–1407
Matsunaga H et al (1999) Possible cause of unnatural mass death of wild birds in a pond in Nishinomiya, Japan: sudden appearance of toxic cyanobacteria. Nat Toxins 7(2):81
Chen J et al (2009) First identification of the hepatotoxic microcystins in the serum of a chronically exposed human population together with indication of hepatocellular damage. Toxicol Sci 108(1):81–89
Bogialli S et al (2017) Liquid chromatography-high resolution mass spectrometric methods for the surveillance monitoring of cyanotoxins in freshwaters. Talanta 170:322–330
Flores C, Caixach J (2015) An integrated strategy for rapid and accurate determination of free and cell-bound microcystins and related peptides in natural blooms by liquid chromatography-electrospray-high resolution mass spectrometry and matrix-assisted laser desorption/ionization time-of-flight/time-of-flight mass spectrometry using both positive and negative ionization modes. J Chromatogr A 1407:76–89
Saker ML et al (2005) Variation between strains of the cyanobacterium Microcystis aeruginosa isolated from a Portuguese river. J Appl Microbiol 99(4):749–757
Beversdorf LJ et al (2017) Variable cyanobacterial toxin and metabolite profiles across six eutrophic lakes of differing physiochemical characteristics. Toxins (Basel) 9(2):62
Welker M et al (2004) Diversity and distribution of Microcystis (Cyanobacteria) oligopeptide chemotypes from natural communities studied by single-colony mass spectrometry. Microbiology-Sgm 150:1785–1796
Jones MR et al (2020) Comprehensive database of secondary metabolites from cyanobacteria. https://doi.org/10.1101/2020.04.16.038703
Chorus I et al (2006) Toxic and bioactive peptides in cyanobacteria—PEPCY report. https://www.uibk.ac.at/limno/files/pdf/final-report-pepcy.pdf.
Roy-Lachapelle A et al (2019) A data–independent methodology for the structural characterization of microcystins and anabaenopeptins leading to the identification of four new congeners. Toxins 11(11):619
Beversdorf LJ et al (2018) Analysis of cyanobacterial metabolites in surface water and raw drinking waters reveals more than microcystin. Water Res 140:280–290
Edwards C et al (2008) Biodegradation of microcystins and nodularin in freshwaters. Chemosphere 73(8):1315–1321
Mazur H, Pliñski M (2001) Stability of cyanotoxins, microcystin-LR, microcystin-RR and nodularin in seawater and BG-11 medium of different salinity. Oceanologia 43(3):329–339
Welker M, Steinberg C, Jones G (2001) Release and persistence of microcystins in natural waters. Microbial degradation of microcystins. In: Chorus I (ed) Cyanotoxins: occurrence, causes, consequences, 1st edn. Springer, Berlin, Heidelberg, pp 88–93. https://doi.org/10.1007/978-3-642-59514-1
Cousins IT et al (1996) Biodegradation of microcystin-LR by indigenous mixed bacterial populations. Water Res 30(2):481–485
Kohler E et al (2014) Biodegradation of microcystins during gravity-driven membrane (GDM) ultrafiltration. PLoS ONE. https://doi.org/10.1371/journal.pone.0111794
Lawton LA et al (2011) Novel bacterial strains for the removal of microcystins from drinking water. Water Sci Technol 63(6):1137–1142
Wormer L et al (2010) Natural photodegradation of the cyanobacterial toxins microcystin and cylindrospermopsin. Environ Sci Technol 44(8):3002–3007
Tsuji K et al (1995) Stability of microcystins from cyanobacteria. 2. Effect of UV light on decomposition and isomerization. Toxicon 33(12):1619–1631
Welker M, Steinberg C (2000) Rates of humic substance photosensitized degradation of microcystin-LR in natural waters. Environ Sci Technol 34(16):3415–3419
Welker M, Steinberg C, Jones G (2001) Release and persistence of microcystins in natural waters. Photosensitized degradation of microcystins. In: Chorus I (ed) Cyanotoxins: occurrence, causes, consequences, 1st edn. Springer, Berlin, Heidelberg, pp 93–98. https://doi.org/10.1007/978-3-642-59514-1
McNeill K, Canonica S (2016) Triplet state dissolved organic matter in aquatic photochemistry: reaction mechanisms, substrate scope, and photophysical properties. Environ Sci Process Impacts 18(11):1381–1399
Song WH, Bardowell S, O'Shea KE (2007) Mechanistic study and the influence of oxygen on the photosensitized transformations of microcystins (cyanotoxins). Environ Sci Technol 41(15):5336–5341
Yan SW, Zhang D, Song WH (2014) Mechanistic considerations of photosensitized transformation of microcystin-LR (cyanobacterial toxin) in aqueous environments. Environ Pollut 193:111–118
Sun QY et al (2018) Ultraviolet photosensitized transformation mechanism of microcystin-LR by natural organic matter in raw water. Chemosphere 209:96–103
Bouaicha N et al (2019) Structural diversity, characterization and toxicology of microcystins. Toxins 11(12):714
Portmann C et al (2008) Aerucyclamides A and B: Isolation and synthesis of toxic ribosomal heterocyclic peptides from the cyanobacterium Microcystis aeruginosa PCC 7806. J Nat Prod 71(7):1193–1196
Guillard RR, Lorenzen CJ (1972) Yellow-green algae with chlorophyllide c. J Phycol 8(1):10–14
Appiani E et al (2017) Aqueous singlet oxygen reaction kinetics of furfuryl alcohol: effect of temperature, pH, and salt content. Environ Sci Process Impacts 19(4):507–516
Dulin D, Mill T (1982) Development and evaluation of sunlight actinometers. Environ Sci Technol 16(11):815–820
Schymanski EL et al (2014) Identifying small molecules via high resolution mass spectrometry: communicating confidence. Environ Sci Technol 48(4):2097–2098
Stravs MA et al (2013) Automatic recalibration and processing of tandem mass spectra using formula annotation. J Mass Spectrom 48(1):89–99
Schollée JE (2017) MSMSsim: functions for processing HRMS2 spectra from output from RMassBank, mainly for calculating spectral similarity. https://github.com/dutchjes/MSMSsim
RStudio Team (2019) RStudio: integrated development environment for R. RStudio, Inc., Boston, MA. http://www.rstudio.com/
Chambers MC et al (2012) A cross-platform toolkit for mass spectrometry and proteomics. Nat Biotechnol 30(10):918–920
Natumi R, Janssen EML (2020) Cyanopeptide co-production dynamics beyond microcystins and effects of growth stages and nutrient availability. Environ Sci Technol 54(10):6063–6072
Welker M, Christiansen G, von Dohren H (2004) Diversity of coexisting Planktothrix (Cyanobacteria) chemotypes deduced by mass spectral analysis of microcystins and other oligopeptides. Arch Microbiol 182(4):288–298
Tonk L et al (2009) Production of cyanopeptolins, anabaenopeptins, and microcystins by the harmful cyanobacteria Anabaena 90 and Microcystis PCC 7806. Harmful Algae 8(2):219–224
Carneiro RL et al (2012) Co-occurrence of microcystin and microginin congeners in Brazilian strains of Microcystis sp. FEMS Microbiol Ecol 82(3):692–702
Janssen EML, Erickson PR, McNeill K (2014) Dual roles of dissolved organic matter as sensitizer and quencher in the photooxidation of tryptophan. Environ Sci Technol 48(9):4916–4924
Lundeen RA et al (2014) Environmental photochemistry of amino acids peptides and proteins. Chimia (Aarau) 68(11):812–817
Page SE, Arnold WA, McNeill K (2011) Assessing the contribution of free hydroxyl radical in organic matter-sensitized photohydroxylation reactions. Environ Sci Technol 45(7):2818–2825
Haag WR, Hoigne J (1985) Photo-sensitized oxidation in natural-water via OH radicals. Chemosphere 14(11–12):1659–1671
Boreen AL et al (2008) Indirect photodegradation of dissolved free amino acids: the contribution of singlet oxygen and the differential reactivity of DOM from various sources. Environ Sci Technol 42(15):5492–5498
Wolff CJM, Halmans MTH, Vanderheijde HB (1981) The formation of singlet oxygen in surface waters. Chemosphere 10(1):59–62
Zepp RG et al (1977) Singlet oxygen in natural-waters. Nature 267(5610):421–423
Leon C et al (2019) Study of cyanotoxin degradation and evaluation of their transformation products in surface waters by LC-QTOF MS. Chemosphere 229:538–548
Song WH et al (2009) Radiolysis studies on the destruction of microcystin-LR in aqueous solution by hydroxyl radicals. Environ Sci Technol 43(5):1487–1492
Buxton GV et al (1988) Critical review of rate constants for reactions of hydrated electrons, hydrogen atoms and hydroxyl radicals (·OH/·O⁻) in aqueous solution. J Phys Chem Ref Data 17(2):513–886
Matheson IBC, Lee J (1979) Chemical-reaction rates of amino-acids with singlet oxygen. Photochem Photobiol 29(5):879–881
Portmann C et al (2008) Isolation of aerucyclamides C and D and structure revision of microcyclamide 7806A: Heterocyclic ribosomal peptides from Microcystis aeruginosa PCC 7806 and their antiparasite evaluation. J Nat Prod 71(11):1891–1896
Manfrin A et al (2019) Singlet oxygen photooxidation of peptidic oxazoles and thiazoles. J Org Chem 84(5):2439–2447
Antosiewicz JM, Shugar D (2016) UV–vis spectroscopy of tyrosine side-groups in studies of protein structure part 1: basic principles and properties of tyrosine chromophore. Biophys Rev 8(2):151–161
Bertolotti SG, Garcia NA, Arguello GA (1991) Effect of the peptide-bond on the singlet-molecular-oxygen-mediated sensitized photooxidation of tyrosine and tryptophan dipeptides—a kinetic-study. J Photochem Photobiol B Biol 10(1–2):57–70
We thank Martin Jones for improving the cyanopeptide database and valuable discussions, Marta Reyes and Francesco Pomati for culturing support, Karl Gademann for providing aerucyclamide A, and Jakob Pernthaler for inoculum of M. aeruginosa UV006.
This study has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 722493.
Department of Environmental Chemistry, Swiss Federal Institute of Aquatic Science and Technology (Eawag), 8600, Dübendorf, Switzerland
Regiane Natumi & Elisabeth M.-L. Janssen
School of Architecture, Civil and Environmental Engineering (ENAC), Environmental Engineering Institute (IIE), Laboratory for Water Quality and Treatment (LTQE), École Polytechnique Fédérale de Lausanne (EPFL), 1015, Lausanne, Switzerland
Sandro Marcotullio
Regiane Natumi
Elisabeth M.-L. Janssen
RN: conceptualization, investigation, experimental analysis, data evaluation and visualization, writing (original draft) and editing. SM: experimental analysis, data evaluation. EJ: supervision, conceptualization, data evaluation, writing (review and editing). All authors read and approved the final manuscript.
Correspondence to Elisabeth M.-L. Janssen.
Competing interests
The authors declare that they have no competing interests.
Additional file 1
: Figure S1. Absorbance spectra for extracts of cyanobacteria pooled from Dolichospermum flos aquae, Microcystis aeruginosa PCC7806, Microcystis aeruginosa UV006 and Planktothrix rubescens before (dark green upper line) and after (light green lower line) purification by liquid-liquid extraction. The absorbance spectra were corrected by extract volume analysed. Figure S2. Light spectrum comparison of the solar simulator Heraeus Suntest CPS+ (black) versus natural sunlight measured in Zurich in July 2013 (blue) showing the absolute light flux (A) and relative light intensities (B). Figure S3. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Aerucyclamide A bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S4. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Anabaenopeptin A bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S5. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Anabaenopeptin B bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S6. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Cyanopeptolin A bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S7. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Cyanopeptolin D bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S8. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin LA standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S9. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin LF standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S10. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin LR standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S11.
Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin LW standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S12. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin LY standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S13. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Microcystin YR standard (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S14. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between Oscillamide Y bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the HCD collision energy are noted in the title line. Figure S15. Comparison of relative intensity over m/z range for mass spectrometry fragmentation spectra between [D-Asp3,E-Dhb7]-MC-RR bioreagent (top, orange) and cyanobacterial extract spiked in lake matrix (bottom, blue) as head to tail plots. The m/z value, the retention time (RT in min) and the CID collision energy are noted in the title line. The head to tail plot suggests that the compound present in the cyanobacterial extract is a different structural isomer from the Microcystin-Group-1024. Figure S16. Cyanopeptide degradation during exposure to simulated sunlight at pH 9 in Lake Greifensee water as the natural log-transformed peak areas versus irradiation time (min) for seven anabaenopeptins, two aeruginosins, three cyanopeptolins and 18 microcystins. The filled symbols represent dark controls, the error bars represent one standard deviation, the dark red line represents the linear model and the dashed red lines represent the 95% confidence interval. Figure S17. Change of concentration of [D-Asp-3]MC-MR and its oxidized variant [D-Asp-3]MC-M(O)R, which was present in the starting material but increased during photochemical exposure, indicating that it was formed, potentially from oxidation of [D-Asp-3]MC-MR. Figure S18. Comparison of cyanopeptide degradation during exposure to simulated sunlight at pH 9 in Lake Greifensee water (full line) and in buffered water (dashed line) as the natural log-transformed peak area versus irradiation time (min) for microcystins: MC-LR (A), MC-LA (B), MC-RR-variant group 1024 (C), MC-LM (D), MC-MR (E) and MC-YR (F). The filled symbols represent dark controls and the error bars represent one standard deviation. The first-order degradation rate constants, kobs (s⁻¹), for lake matrix and buffered water and the p-value of the t-test of the slope between conditions for each cyanopeptide are listed. Figure S19. Observed decay rate (kobs, s⁻¹) for the 37 cyanopeptides that presented first-order decay relative to the presence of selected amino acid moieties: no photolabile moiety ("none", coral), methionine (yellow), oxidized tryptophan (green), tyrosine variant (blue), and phenylalanine (pink).
P-values of Tukey pairwise ANOVA from comparison to cyanopeptides with no photolabile amino acids are listed as **p-value < 0.001 and *p-value < 0.05. Figure S20. Observed decay rate (kobs, s⁻¹) for the 37 cyanopeptides that presented first-order decay relative to the cyanopeptide class of microcystin (coral), cyclamide (yellow), aeruginosin (green), anabaenopeptin (blue), and cyanopeptolin (pink). One-way ANOVA did not indicate any significant difference between classes (p-value = 0.501). Figure S21. MS/MS annotation at HCD30 for Anabaenopeptin F. Figure S22. MS/MS annotation at HCD20 for Microcystin-LM. Text S1. Additional Materials. Text S2. Photon fluence rate. Table S1. Composition of modified WC growth medium (Guillard 1972). Table S2. Standard analytical information including the limit of detection (LOD) and limit of quantification (LOQ) for the reference standards and bioreagents in the lake matrix experiment and in buffered nanopure water for the pH range experiments. Table S3. List of compounds and their respective bioreagent or reference standard used for quantification as class equivalents by external calibration curve. Table S4. List of all tentatively identified cyanopeptides in cell extracts of Dolichospermum flos aquae, Microcystis aeruginosa PCC7806, Microcystis aeruginosa UV006 and Planktothrix rubescens. Only the peptides that could be classified as tentative candidate (level 3), probable structure (level 2) or confirmed structure (level 1) are reported. Cyanopeptide references can be found at CyanometDB (Jones et al.). Only compounds above LOQ are reported. Table S5. Isobaric compound group name (cyanopeptide class-group-MW) and individual compounds within each group with the same molecular formula. Only compounds above LOQ are reported.
Additional file 2
: Table S6. Observed degradation rates (kobs) for 54 cyanopeptides in lake matrix and buffered solutions at pH 7-10 are provided as a separate datasheet supplied as supporting information of this manuscript.
Natumi, R., Marcotullio, S. & Janssen, E.ML. Phototransformation kinetics of cyanobacterial toxins and secondary metabolites in surface waters. Environ Sci Eur 33, 26 (2021). https://doi.org/10.1186/s12302-021-00465-3
Cyanopeptide
Microcystin
Phototransformation
Anabaenopeptin
Natural toxins
Association analysis and functional annotation of imputed sequence data within genomic regions influencing resistance to gastro-intestinal parasites detected by an LDLA approach in a nucleus flock of Sarda dairy sheep
Sara Casu1,
Mario Graziano Usai ORCID: orcid.org/0000-0002-6002-22231,
Tiziana Sechi1,
Sotero L. Salaris1,
Sabrina Miari1,
Giuliana Mulas1,
Claudia Tamponi2,
Antonio Varcasia2,
Antonio Scala2 &
Antonello Carta1
Genetics Selection Evolution volume 54, Article number: 2 (2022)
Gastrointestinal nematodes (GIN) are one of the major health problems in grazing sheep. Although genetic variability of the resistance to GIN has been documented, traditional selection is hampered by the difficulty of recording phenotypes, usually fecal egg count (FEC). To identify causative mutations, or markers in linkage disequilibrium (LD) with them, to be used for selection, the detection of quantitative trait loci (QTL) for FEC based on linkage disequilibrium and linkage analysis (LDLA) was performed on 4097 ewes (from 181 sires), all genotyped with the OvineSNP50 Beadchip. Identified QTL regions (QTLR) were imputed from whole-genome sequences of 56 target animals of the population. An association analysis and a functional annotation of imputed polymorphisms in the identified QTLR were performed to pinpoint functional variants with a potential impact on candidate genes identified from ontological classification or differentially expressed in previous studies.
After clustering close significant locations, ten QTLR were defined on nine Ovis aries chromosomes (OAR) by LDLA. The ratio between the ANOVA estimators of the QTL variance and the total phenotypic variance ranged from 0.0087 to 0.0176. QTL on OAR4, 12, 19, and 20 were the most significant. The combination of association analysis and functional annotation of sequence data did not highlight any putative causative mutations. None of the most significant SNPs showed a functional effect on gene transcripts. However, in the most significant QTLR, we identified genes that contained polymorphisms with a high or moderate impact, were differentially expressed in previous studies, and contributed to the enrichment of the most represented GO processes (regulation of immune system process, defense response). Among these, the most likely candidate genes were: TNFRSF1B and SELE on OAR12, IL5RA on OAR19, and IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF on OAR20.
This study, performed on a large experimental population, provides a list of candidate genes and polymorphisms that could be used in further validation studies. The expected advancements in the quality of the annotation of the ovine genome, together with experimental designs based on sequence data and phenotypes from multiple breeds that show different LD extents and gametic phases, may help to identify causative mutations.
Gastrointestinal nematodes (GIN) are one of the major health problems in grazing animals [1]. GIN infections result in important yield reductions and higher production costs due to veterinary treatments and higher culling rates [2]. Moreover, chemical treatments involve the risk of drug residues in food and the environment and the appearance of anthelmintic resistance, which has been reported in several countries [3,4,5,6]. In sheep, GIN control strategies may also include management practices, such as soil tillage or rotational grazing, that aim at reducing pasture contamination [7, 8]. Alternative approaches to limit GIN infection rely on nutritional schemes based on either grazing crops with anthelmintic properties, such as chicory (Cichorium intybus), sulla (Hedysarum coronarium), sainfoin (Onobrychis viciifolia) and sericea lespedeza (Lespedeza cuneata) [9], or supplementation with tannins and/or proteins; however, even these approaches are difficult to apply, especially in extensive or semi-extensive systems.
Fecal egg count (FEC), i.e. the number of parasite eggs per g of faeces, has been widely used as a proxy trait to measure individual resistance to GIN. Selective breeding of animals with enhanced resistance to GIN has been suggested for the sustainable control of parasite infections in sheep, since genetic variation between individuals and breeds has been documented. Indeed, estimates of the heritability of proxy traits for GIN resistance in sheep range from 0.01 to 0.65 [10], but heritability is generally moderate for FEC (0.25–0.33 [11]; 0.16 [12]; 0.21–0.55 [13]; and 0.18–0.35 [14]). Thus, breeding for resistance to GIN can be considered in sheep but implies structured selection schemes and accurate recording of both performances and pedigree information, which are essential for genetic evaluation. However, the inclusion of GIN resistance in current breeding schemes is hampered by the difficulty of recording FEC on a large scale, since its measurement is too laborious and costly under field conditions. For this reason, several studies have been carried out to dissect the genetic determinism of GIN resistance with the final aim of setting up breeding schemes that are based on molecular information rather than large-scale recording for progeny testing. Such studies have followed the development of molecular biology and the omic sciences and the concomitant advancement of statistical methodologies. The first studies were based on sparse maps of molecular markers, such as microsatellites, and used linkage analysis on family-structured populations [15]. In spite of the large number of genomic regions detected in sheep [16,17,18], low significance levels and the low accuracy of localisations made marker-assisted selection unfeasible. Later on, the development of single nucleotide polymorphism (SNP) arrays of medium and high density and the application of enhanced statistical methods made it possible to extend the analysis to the population level and to increase the power of detection and the accuracy of localisations [19,20,21,22,23]. More recently, the availability of high-throughput sequencing technologies and increasingly accurate genome annotations may allow the discovery of new polymorphisms in DNA or RNA sequences and the classification of their effects on genes whose functions are increasingly well characterized.
The Sarda breed is the most important Italian dairy sheep breed, with around three million heads in approximately 10,000 flocks (Regional Department for Agriculture, unpublished observations). Sheep breeding has traditionally been the most important livestock production in Sardinia. Farming systems vary from semi-extensive to semi-intensive, with widespread grazing on natural pastures and forage crops where infection with GIN is unavoidable. The most represented nematode species are Teladorsagia circumcincta, Trichostrongylus spp., Haemonchus contortus, Teladorsagia trifurcata and Cooperia spp., while Oesophagostomum venulosum and Nematodirus spp. are found in negligible quantities [24]. The prevalence rate in terms of worm egg count generally increases in the summer-autumn period. Under these conditions, most farmers have to administer anthelmintics, often without well-planned protocols in terms of individual diagnosis, doses and frequency of treatments. Anthelmintic treatments concern 99.4% of the sheep farms on the island, with on average 1.54 treatments per year, mainly carried out with benzimidazoles (47.8%), levamisole (21.1%), avermectins (12.7%) and probenzimidazoles (11.5%) [25]. Thus, the control of GIN implies high costs, organizational efforts and further economic losses related to the rules that limit drug residues in milk. In this situation, selective breeding is an attractive option also for Sarda sheep. The current breeding scheme is implemented on about 8% of the purebred population, for which yield traits and pedigree data are recorded (Herd Book). The main selection objectives are milk yield per lactation, scrapie resistance, and udder morphology [26]. With the aim of assessing the feasibility of a marker-assisted selection (MAS) scheme for resistance to GIN based on causative mutations or markers in linkage disequilibrium (LD), which would not require large-scale FEC recording, the Regional Agency for Agricultural Research (AGRIS) has maintained since the late 1990s an experimental population in which the individuals are genotyped with SNP arrays and routinely measured for FEC, as well as other production and functional traits. More recently, a target sample of influential animals from this population was whole-genome re-sequenced and SNP genotypes were imputed to the whole population.
The aim of this study was to identify QTL segregating in the Sarda breed and to search for candidate genes and causative mutations through functional annotation and association analysis of imputed Sarda sequence data in these target regions.
Experimental population
The nucleus flock of the Sarda breed, described in more detail in [26, 27], derives from a backcross population of Sarda × Lacaune ewes created in 1999 by mating 10 F1 Sarda × Lacaune rams with purebred Sarda ewes. Thereafter, the subsequent generations of ewes were obtained by mating adult ewes of the nucleus flock exclusively with rams from the Sarda Herd Book. This has led to a progressive reduction of the proportion of Lacaune blood in the experimental population, which is negligible in the latest generations (around 0.4%). The average size of the flock is about 900 milked ewes per year, with a replacement rate of 25 to 30%. The flock is raised on an experimental farm located in the south of Sardinia, which has a semi-arid Mediterranean climate with important variations in rainfall and temperature across seasons and years. The flock is managed following the traditional farming system adopted on the island, which is based on grazing natural or cultivated swards (mainly ryegrass and berseem clover) with supplements of hay, silage and concentrate. Lambings of most adult ewes occur in autumn, while those of the remaining ewes and of the primiparous ewes occur in late winter or early spring. Ewes are usually kept in management groups depending on the lambing period. They are machine-milked twice a day from lamb separation (one month after lambing) until the early summer, when they are progressively and almost simultaneously dried off.
Molecular data
All the ewes of the experimental population born from 1999 to 2017 (n = 4355), their sires (n = 181, including the 10 F1) and 11 Sarda grandsires were genotyped with the OvineSNP50 Beadchip (50k hereafter). SNP editing was performed using call rate and minor allele frequency (MAF) thresholds of 95% and 1%, respectively. The ovine genome assembly v4.0 and the SNPchiMp v.3 software [28] were used to construct the genetic map, assuming 1 Mb = 1 cM. Unmapped SNPs and SNPs on the sex chromosomes were not included in the study. Finally, 43,390 SNPs were retained for further analyses.
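The editing step can be sketched as follows; the genotype matrix, its 0/1/2 coding and the object names are hypothetical stand-ins, since the original editing was performed on the OvineSNP50 genotype files.

```r
# Illustrative SNP editing with the thresholds above: call rate >= 95% and
# MAF >= 1%, on a SNP-by-animal matrix coded 0/1/2 with NA for missing calls
set.seed(1)
geno <- matrix(sample(c(0, 1, 2, NA), 50 * 20, replace = TRUE,
                      prob = c(0.45, 0.35, 0.15, 0.05)),
               nrow = 50)                        # 50 SNPs x 20 animals

call_rate <- rowMeans(!is.na(geno))
p         <- rowMeans(geno, na.rm = TRUE) / 2    # allele frequency per SNP
maf       <- pmin(p, 1 - p)

geno_qc <- geno[call_rate >= 0.95 & maf >= 0.01, , drop = FALSE]
```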
Among the 4547 genotyped animals, 56 had also been fully re-sequenced within the framework of previous projects. The choice of these 56 animals was based on the assumption that they carried opposite alleles for specific QTL segregating in the Sarda breed identified in our previous investigations [29], or on the fact that they had many progeny in the experimental population. The first group (24 animals, including two Sarda rams and 22 daughters of Sarda rams) had been whole-genome re-sequenced with a target coverage of 12X. The other 32 animals were Sarda sires chosen among those with a higher impact on the population, more recently re-sequenced on an Illumina HiSeq 3000 sequencer with a 30X target coverage. Whole-genome sequence (WGS) data were processed with a pipeline implemented with Snakemake [30] and developed at CRS4 (Center for Advanced Studies, Research and Development in Sardinia, https://www.crs4.it/), available at https://github.com/solida-core. Briefly, adapter sequences were removed from the short reads, low-quality ends were trimmed, and sequences shorter than 25 bp after trimming were removed with the TrimGalore (v0.4.5) software [31]. The quality of the reads, before and after trimming, was evaluated with the FastQC (v0.11.5) tool [32]. Trimmed reads were aligned to the Ovis aries reference genome v4.0 (https://www.ncbi.nlm.nih.gov/assembly/GCF_000298735.2) using the Burrows-Wheeler Aligner (BWA v0.7.15) program [33]. Alignments were then sorted, converted to CRAM files and indexed with Samtools (v1.6) [34]. PCR duplicates were detected with the Picard (v2.18.9) tool [35]. After alignment, joint calling of single nucleotide variants (SNVs: SNPs and insertion-deletions (INDELs)) was performed using the GATK (v4.0.11.0) software [36], according to the GATK Best Practices workflow [37]. In order to apply the GATK Variant Quality Score Recalibration, we first ran an initial round of SNP calling and only used the top 5% of SNPs with the highest quality scores.
FEC was the proxy trait used to assess GIN resistance under natural conditions of infection in the experimental flock. Periodically, a sample of ~ 50 ewes representing the different management groups was monitored to evaluate the percentage of infected animals and to decide whether to sample the whole flock and possibly administer anthelmintic treatment. The number of strongyle eggs per g was determined on individual samples using a copromicroscopic test according to the McMaster technique [38]. When the number of infected animals and the level of infestation were considered sufficient to appreciate individual variability, individual FEC were measured on the whole flock. During the first three years of measurement, coprocultures of pooled samples were also performed at each round of scoring in order to identify GIN genera, using the technique and the identification keys of [39, 40]. The results of the pooled faecal cultures (mean of 4 cultures and 200 to 400 larval identifications) indicated that H. contortus, T. circumcincta and T. colubriformis were the dominant worm species.
From 2000 to 2012, individual FEC were recorded one to three times per production year (considered from September to August), according to the level of infestation found in the periodic monitoring samplings, which depended on annual variations in rainfall and temperature. Thus, since the level of infestation was low, no individual measurements were carried out between July 2003 and September 2004 or between June 2006 and November 2007. The recording of FEC for the detection of QTL was closed in 2012. In 2015, FEC recording of the new generations of ewes born in the nucleus flock was started again in view of implementing marker-assisted or genomic selection in the Sarda breed. These data were added to the previous set to enhance the power of the QTL detection analysis presented here.
Finally, 17,594 FEC measurements were recorded on 25 separate dates and on 4477 animals (Table 1). The average number of records per ewe was 3.93 ± 2.2, ranging from 1 (13.4% of animals) to 8 (14.13% of animals); almost half of the ewes (46.7%) had from 3 to 5 records.
Table 1 Dates of sampling, number of animals sampled, and mean and standard deviation of FEC and LnFec [ln(FEC + 14)]
FEC measurements, which presented a skewed distribution, were log-transformed prior to further analysis as LnFec = ln(FEC + 14).
Variance components and pseudo-phenotypes for QTL detection
In order to calculate the pseudo-phenotypes for the detection of QTL and to estimate variance components, raw data were analysed with a repeatability model including the permanent environment and additive genetic random effects of individual animals, using the ASReml-R 4.1 software [41]. Environmental fixed effects were the date of sampling, the age of the animal (from 1 to 4 years) and its physiological status at the date of sampling. The levels of physiological status were defined considering the days from parturition and the number of lambs carried or born by the measured ewe in the considered production year. Five classes were considered: ewes with neither pregnancy nor lactation, and thus with no lambs, in the considered production year; ewes sampled within 30 days before or after lambing with one lamb; ewes sampled within 30 days before or after lambing with two or more lambs; lactating ewes with one lamb; and lactating ewes with two or more lambs.
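The layout of this repeatability model can be illustrated with the sketch below. The published analysis used ASReml-R 4.1 with two random terms (additive genetic effects via the GRM, plus permanent environment); here lme4 with a single i.i.d. ewe effect is shown only to convey the model structure on toy data with hypothetical column names, and it confounds those two random terms.

```r
library(lme4)

# Toy data standing in for the FEC records (all names are illustrative)
set.seed(2)
d <- data.frame(
  ewe  = factor(rep(1:40, each = 4)),
  date = factor(rep(1:4, times = 40)),
  age  = factor(sample(1:4, 160, replace = TRUE)),
  phys = factor(sample(1:5, 160, replace = TRUE))
)
d$lnfec <- 5 + rnorm(40)[as.integer(d$ewe)] + rnorm(160, sd = 0.8)

# Fixed effects of date, age and physiological status; random ewe effect
fit <- lmer(lnfec ~ date + age + phys + (1 | ewe), data = d)
summary(fit)
ranef(fit)$ewe   # per-ewe predictions, in the spirit of the APD described below
```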
Individual FEC recorded from September to the following dry-off (July) were assigned to the same year of age. Data from animals younger than ten months (570 records), which can be considered to lack acquired immunity, were not included, so that the analysis measured the parasite resistance expressed by immunized animals. However, 90% of those animals had measurements at older ages, which were included in the analysis. Only records from genotyped animals, i.e. born before 2017, were included. The final dataset included 16,530 records from 4097 animals recorded on 24 separate dates. Genetic relationships between the 4547 animals, including the recorded ewes and their sires and genotyped ancestors, were taken into account by calculating the genomic relationship matrix (GRM) based on 50k genotypes, following [42] and using the GCTA software [43]. The GRM was then inverted using the ginv function of the MASS R package (version 7.3-51.6) [44], which computes a generalized inverse matrix. Pseudo-phenotypes for QTL detection were then calculated as the average performance deviation (APD) of each ewe, as proposed by Usai et al. [27], i.e. by summing the genetic and permanent environment random predictions and the average of the individual residuals.
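As a concrete illustration, the following R sketch builds a GRM from centered genotypes scaled by 2Σp(1−p) (VanRaden's first method, shown here as one common formulation; the original analysis followed [42] using GCTA) and inverts it with MASS::ginv on simulated genotypes.

```r
library(MASS)

set.seed(3)
M <- matrix(rbinom(200 * 500, 2, 0.3), nrow = 200)  # 200 animals x 500 SNPs

p <- colMeans(M) / 2                    # observed allele frequencies
Z <- sweep(M, 2, 2 * p)                 # center each SNP column by 2p
G <- tcrossprod(Z) / (2 * sum(p * (1 - p)))

G_inv <- ginv(G)                        # generalized inverse of the GRM
```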
QTL detection analysis
The model used for QTL detection based on 50k SNP data was the same as that applied to the experimental population for milk traits by Usai et al. [27]. It is based on the combined use of LD and linkage analysis (LA) information (LDLA) to estimate the probability of identity-by-descent (IBD) between pairs of gametes of the genotyped individuals at the investigated position. First, the paternally and maternally inherited gametes of the genotyped individuals were reconstructed by the LD multilocus iterative peeling method [27, 45], exploiting the genotypes and the familial structure of the population. Then, the base gametes of the population were identified as the gametes inherited from an ungenotyped parent, corresponding to: the gametes of the 10 F1 rams and of the 74 Sarda (grand)sires; the maternal or paternal gametes of the 43 ewes with an unknown sire or dam, respectively; and the maternal gametes of the 928 backcross ewes and of the 108 Sarda (grand)sires for which only the sire was genotyped. The 1247 base haplotypes (BH) were further divided according to their breed of origin into BHL (the 10 Lacaune paternal gametes carried by the F1 rams) and BHS (the remaining 1237 Sarda gametes). Finally, the remaining parental gametes of the genotyped animals, which carried at each position an allele inherited from one of the 1247 original BH, were considered as replicates of BH (RH).
The IBD probabilities between pairs of BH were estimated by LD analysis (\(IBD_{LD}\)) based on the extent of identity-by-state (IBS) around the investigated position [46]. The \(IBD_{LD}\) between \({\text{BH}}^{{\text{S}}}\) and \({\text{BH}}^{{\text{L}}}\) were assumed to be null. The IBD probabilities between BH and RH were estimated by linkage analysis (\(IBD_{LA}\)) given the known gametic phases and the pedigree information [27, 46,47,48]. The IBD probabilities between pairs of RH were thus calculated as the combination of \(IBD_{LD}\) and \(IBD_{LA}\) (\(IBD_{LDLA}\)). This allowed the construction, at each 50k SNP position l, of a matrix (\({\mathbf{G}}_{l}^{IBD}\)) containing the IBD probabilities between the RH carried by the phenotyped ewes. Moreover, in order to account for the polygenic effects, a matrix of genome-wide IBD probabilities between gametes (\({\mathbf{G}}_{g}^{IBD}\)) was constructed by averaging the elements of \({\mathbf{G}}_{l}^{IBD}\) across all the investigated SNP positions. At this stage, Usai et al. [27] proposed the use of principal component analysis (PCA) to summarize the information of \({\mathbf{G}}_{l}^{IBD}\) and \({\mathbf{G}}_{g}^{IBD}\). The aim of using PCA was to overcome issues related to the non-positive definite status of \({\mathbf{G}}_{l}^{IBD}\) and to limit the computational burden of handling both IBD matrices. In fact, PCA led to a dramatic reduction in the number of effects to be estimated, so that the principal components from \({\mathbf{G}}_{l}^{IBD}\) and \({\mathbf{G}}_{g}^{IBD}\) could be included in the model as fixed effects. The final model does not include random effects other than the residuals and is solved by a weighted least squares method.
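As an illustration of this step, a minimal sketch in R, assuming the IBD probability matrix is available as a symmetric plain matrix `G`, could be:

```r
# Hedged sketch: PC scores summarising an IBD probability matrix G (n_RH x n_RH).
# The scores of the retained PCs are used as fixed-effect covariates (V_l or V_g).
pc_scores_99 <- function(G, threshold = 0.99) {
  ed <- eigen(G, symmetric = TRUE)
  lambda <- pmax(ed$values, 0)               # G may not be positive definite
  n_keep <- which(cumsum(lambda) / sum(lambda) >= threshold)[1]
  ed$vectors[, 1:n_keep, drop = FALSE] %*%
    diag(sqrt(lambda[1:n_keep]), nrow = n_keep)   # n_RH x n_PC score matrix
}
```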
At each 50k SNP position l the model is the following:
$$ {\mathbf{y}} = \bf{1}{\upmu } + {\mathbf{ZV}}_{{\mathbf{l}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} + {{\boldsymbol{\upvarepsilon}}}, $$
where \({\mathbf{y}}\) is a vector of APD of \({\text{n}}_{{\text{p}}}\) phenotyped ewes for lnFec; \(\mu\) is the overall mean; \({{\varvec{\upbeta}}}_{l}\) is a vector of the fixed effects of the \({\text{n}}_{{{\text{PC}}_{l} }}\) principal components that explain more than 99% of the within-breed variation (\({\text{PC}}_{l}\)) of the IBD probability matrix \({\mathbf{G}}_{l}^{IBD}\), i.e. \({{\varvec{\upbeta}}}_{l}\) summarizes the effects of the gametes at the QTL position \(l\); \({{\varvec{\upalpha}}}_{l}\) is a vector of the fixed effects of the \({\text{n}}_{{{\text{PC}}_{g} }}\) principal components that explain more than 99% of the variation (\({\text{PC}}_{g}\)) of the genome-wide IBD probability matrix \({\mathbf{G}}_{g}^{IBD}\), i.e. \({{\varvec{\upalpha}}}_{l}\) summarizes the polygenic effects of the gametes; \(\bf{1}\) is a vector of \({\text{n}}_{{\text{p}}}\) ones; \({\mathbf{Z}}\) is a \({\text{n}}_{{\text{p}}} \times {\text{n}}_{{{\text{RH}}}}\) incidence matrix relating phenotypes with RH; \({\mathbf{V}}_{l}\) is a \({\text{n}}_{{{\text{RH}}}} \times {\text{n}}_{{{\text{PC}}_{l} }}\) matrix including the \({\text{PC}}_{l}\) scores of RH that summarize the IBD probabilities between the gametes at the considered position; \({\mathbf{V}}_{g}\) is a \({\text{n}}_{{{\text{RH}}}} \times {\text{n}}_{{{\text{PC}}_{g} }}\) matrix including the \({\text{PC}}_{g}\) scores of RH; and \({{\varvec{\upvarepsilon}}}\) is a vector of \({\text{n}}_{{\text{p}}}\) residuals, assuming that \({{\varvec{\upvarepsilon}}}\sim {\text{N}}\left( {\mathbf{0},\sigma_{{\upvarepsilon }}^{2} {\mathbf{R}}^{ - 1} } \right)\) with \({\mathbf{R}}\) being a diagonal matrix with the APD reliabilities (\(r\)) as diagonal elements. Reliabilities were calculated as \(r_{{\text{i}}} = 1 - {\text{se}}\left( {{\hat{\text{a}}}_{{\text{i}}} } \right)^{2} /\sigma_{{\text{a}}}^{2}\) from a repeatability linear model \({\text{y}}_{{{\text{ij}}}} = {\text{a}}_{{\text{i}}} + {\text{e}}_{{{\text{ij}}}}\), where \({\text{y}}_{{{\text{ij}}}}\) is the performance deviation \({\text{j}}\) of ewe \({\text{i}}\), adjusted for the fixed effects estimated with the full animal model; \({\text{a}}_{{\text{i}}}\) is the random ewe effect, assuming that \({\mathbf{a}}\sim {\text{N}}\left( {\mathbf{0}, \sigma_{{\text{a}}}^{2} {\mathbf{I}}} \right)\); and \({\text{e}}_{{{\text{ij}}}}\) is the corresponding error, assuming that \({\mathbf{e}}\sim {\text{N}}\left( {\mathbf{0}, \sigma_{{\text{e}}}^{2} {\mathbf{I}}} \right)\). Details on how the PC scores of the \({\mathbf{V}}_{l}\) and \({\mathbf{V}}_{g}\) matrices were calculated are given in [27].
Since the IBD between segments of different breed origin (i.e. replicates of \({\text{BH}}^{{\text{S}}}\) and \({\text{BH}}^{{\text{L}}}\)) was set to 0, the PCA generated two sets of breed-specific \({\text{PC}}_{l}\). Thus, the matrix \({\mathbf{V}}_{l}\) can be written as \(\left[ {{\mathbf{V}}_{l}^{{\text{S}}} {\mathbf{V}}_{l}^{{\text{L}}} } \right]\) and the vector \({{\varvec{\upbeta}}}_{l}^{^{\prime}}\) as \(\left[ {{{\varvec{\upbeta}}}_{l}^{{^{\prime}{\text{S}}}} {{\varvec{\upbeta}}}_{l}^{{^{\prime}{\text{L}}}} } \right]\), where \({\mathbf{V}}_{l}^{{\text{S}}}\) and \({\mathbf{V}}_{l}^{{\text{L}}}\) are the \({\text{PC}}_{l}\) summarising IBD probabilities between the gametes of Sarda and Lacaune origin and \({{\varvec{\upbeta}}}_{l}^{{\text{S}}}\) and \({{\varvec{\upbeta}}}_{l}^{{\text{L}}}\) the corresponding effects.
The final aim of this work was to identify QTL segregating in the Sarda breed and to search for candidate genes and causative mutations by functional annotation and association analysis of the imputed Sarda sequence data in the identified regions. Thus, at each SNP position, we tested the null hypothesis that the effects of the principal components that explain 99% of the variability due to the Sarda gametes are zero (\(H_{0}\): \({{\varvec{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}}\) = 0) by an F-test that compares the sums of squared residuals of the full model in Eq. (1) and of the following reduced model including all the other effects:
$$ {\mathbf{y}} = \bf{1}{\upmu } + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} + {{\boldsymbol{\upvarepsilon}}}^{*} . $$
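For illustration, the full versus reduced comparison can be fitted by weighted least squares with base R; `Xl_S`, `Xl_L` and `Xg` stand for the matrices \({\mathbf{ZV}}_{l}^{{\text{S}}}\), \({\mathbf{ZV}}_{l}^{{\text{L}}}\) and \({\mathbf{ZV}}_{g}\), and `r` for the APD reliabilities (names assumed for this sketch).

```r
# Hedged sketch: weighted least squares fit at one 50k SNP position.
# With eps ~ N(0, sigma2 * R^-1) and R = diag(r), the lm() weights are the reliabilities r.
fit_full    <- lm(y ~ Xl_S + Xl_L + Xg, weights = r)  # full model, Eq. (1)
fit_reduced <- lm(y ~ Xl_L + Xg, weights = r)         # reduced model, H0: beta_l^S = 0
anova(fit_reduced, fit_full)                          # F-test comparing the two SSE
```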
The Bonferroni correction for multiple testing was used to set the threshold corresponding to the genome-wise (GW) significance level. To be conservative, we ignored the LD between SNPs and calculated the nominal P-value for each tested position as \(P_{nominal} = \frac{{P_{GW} }}{nTest}\), where \(P_{GW}\) is the genome-wise significance level chosen for the analysis (0.05) and \(nTest\) is the number of tested positions (43,390). The negative base-10 logarithm of \(P_{nominal}\) resulted in a threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) equal to 5.938, which was rounded to 6.
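For concreteness, the threshold can be checked with a one-line computation:

```r
-log10(0.05 / 43390)   # Bonferroni-corrected threshold: 5.938, rounded to 6
```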
Significant positions identified on the same chromosome were clustered into QTL regions (QTLR) in order to account for linkage between SNPs. As proposed by Usai et al. [27], the correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }} = {\mathbf{ZV}}_{l} {\widehat{\varvec{\upbeta }}}_{l}\) (the portion of the phenotypes predicted by the QTL effect in the model) were calculated for all pairs of significant SNPs on the same chromosome. The most significant SNP on the chromosome was taken as the peak of the first QTLR. Peaks of further QTLR on the same chromosome were iteratively detected as the significant SNPs showing correlations lower than 0.15 with the already defined QTLR peaks. The remaining significant positions were assigned to the QTLR with which they had the highest correlation. Moreover, with the aim of assessing the relative potential impact of a marker-assisted selection approach, we calculated the ANOVA estimator of the QTL variance for the most significant position of each QTLR as:
$$ \widehat{{\sigma_{qtlS}^{2} }} = \frac{{\frac{{SSE_{R} - SSE_{F} }}{{nPC_{S} }} - \frac{{SSE_{F} }}{{np - nPC_{g} - nPC_{L} - nPC_{S} - 1}}}}{{\frac{np}{{nPC_{S} }}}}, $$
where \( SSE_{F} = \left[ {\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{S}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]^{^{\prime}} \left[ {\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{S}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{S}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]\) is the sum of squared residuals of the full model including the Sarda PC at the peak position; \(SSE_{R} = \left[ {\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]^{^{\prime}} \left[ {\mathbf{y}} - \left( {\bf{1}{{\boldsymbol{\upmu}}} + {\mathbf{ZV}}_{{\mathbf{l}}}^{{\mathbf{L}}} {{\boldsymbol{\upbeta}}}_{{\mathbf{l}}}^{{\mathbf{L}}} + {\mathbf{ZV}}_{{\mathbf{g}}} {{\boldsymbol{\upalpha}}}_{{\mathbf{l}}} } \right) \right]\) is the sum of squared residuals of the reduced model (without the Sarda PC); \(nPC_{S}\) and \(nPC_{L}\) are the numbers of PC summarising the IBD probabilities between the gametes of Sarda and Lacaune origin, respectively; and \(nPC_{g}\) is the number of PC extracted from the genome-wide IBD probability matrix.
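A direct transcription of this estimator into R, with the sums of squared residuals and the numbers of PC passed as plain arguments (names assumed for this sketch), is:

```r
# Hedged sketch of the ANOVA estimator of the Sarda QTL variance at a peak position.
qtl_var_anova <- function(sse_r, sse_f, np, nPC_S, nPC_L, nPC_g) {
  ms_qtl <- (sse_r - sse_f) / nPC_S                   # QTL (between) mean square
  ms_err <- sse_f / (np - nPC_g - nPC_L - nPC_S - 1)  # residual mean square
  (ms_qtl - ms_err) / (np / nPC_S)
}
```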
The ratio between the ANOVA estimators of the QTL variances (\(\widehat{{\sigma_{qtlS}^{2} }}\)) and the total phenotypic variance of the pseudo-phenotypes was calculated for the peak of each QTLR.
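To close this subsection, the iterative clustering rule described above can be summarised by the following sketch, assuming `yQ` is a matrix whose columns hold \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) for the significant SNPs of one chromosome, ordered by decreasing significance:

```r
# Hedged sketch of the iterative QTLR peak detection described above.
cluster_qtlr <- function(yQ, r_min = 0.15) {
  C <- cor(yQ)                        # correlations between y_Q of significant SNPs
  peaks <- 1                          # most significant SNP = peak of the first QTLR
  for (j in seq_len(ncol(yQ))[-1]) {
    if (all(C[j, peaks] < r_min)) peaks <- c(peaks, j)   # starts a new QTLR
  }
  # remaining SNPs go to the QTLR whose peak they are most correlated with
  region <- apply(C[, peaks, drop = FALSE], 1, which.max)
  list(peaks = peaks, region = region)
}
```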
Analysis of sequence data
The QTLR as defined above, or the 2-Mb intervals surrounding the most significant locations when only one 50k SNP was significant, were further investigated using information from whole-genome sequence (WGS) data. Biallelic SNPs falling in these target QTLR were extracted from the assembled sequences of the re-sequenced animals as vcf files. First, a functional annotation of the SNPs identified by WGS was performed using the NCBI 4.0 sheep genome annotation release 102 and the SnpEff software v4.3t [49]. Then, the parental gametes of the phenotyped ewes were imputed from 50k data to WGS. The first step of the imputation procedure was to reconstruct the phase of each gamete \(i\) carried by the sequenced animals (\(h_{i}^{Q}\)), which consisted of estimating the probability of carrying the reference \(P\left( {h_{il}^{Q} = R} \right) \) and the alternative \(P\left( {h_{il}^{Q} = A} \right)\) allele at each WGS SNP position l, based on the genotype information from sequencing and the IBD between gametes at the neighbouring 50k SNPs. Then, at each WGS SNP position l of the parental gamete \(j\) carried by each of the non-sequenced phenotyped ewes, we inferred the probabilities of carrying the reference \(P\left( {h_{jl}^{p} = R} \right)\) and the alternative allele \(P\left( {h_{jl}^{p} = A} \right)\), based on the gametic phases of the sequenced animals and the IBD between gametes of the sequenced animals and gametes of the phenotyped ewes [50]. The accuracy of the imputation was calculated as the correlation between the probability of an imputed WGS SNP allele at each 50k SNP position and the actual occurrence of the same allele, given the 50k genotyping information and the gametic phase reconstructed in the previous analysis. Moreover, to verify that the imputed data could be used for the association analysis, the information content of each WGS SNP for all imputed gametes was calculated as the squared difference of the allele probabilities \(\left[ {P\left( {h_{jl}^{p} = R} \right) - P\left( {h_{jl}^{p} = A} \right)} \right]^{2}\). These statistics were averaged across positions and gametes.
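A minimal sketch of the information content computation, assuming `p_ref` is a gametes × SNPs matrix of imputed reference-allele probabilities, is:

```r
# Information content per WGS SNP: (P(R) - P(A))^2 with P(A) = 1 - P(R);
# 1 = fully informative imputation, 0 = completely uncertain.
info_content <- function(p_ref) colMeans((2 * p_ref - 1)^2)
```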
Finally, an association analysis was run in the target regions by regressing the pseudo-phenotypes on the allele dosage, calculated as the sum of the probabilities of carrying the reference allele in the paternal and maternal gametes predicted by imputation. The allele dosage was used instead of the genotype probabilities since it allows the additive substitution effect of the reference allele to be estimated directly with just one regressor in the model, whereas the genotype probabilities imply a multiple regression model and are better suited to estimating non-additive effects. As in Eq. (1), the model included the PC extracted from the genome-wide IBD probability matrix to adjust for the polygenic background.
An F-test was performed to calculate the P-value of each tested WGS SNP. The aim of this analysis was to identify the most relevant WGS SNPs, which were selected by setting the threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) to the maximum per region minus 2.
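A compact sketch of this association step, with `p_pat` and `p_mat` the imputed reference-allele probabilities of the two parental gametes of each ewe and `Xg` the genome-wide PCs (names assumed), is:

```r
# Hedged sketch of the WGS association test at one imputed position.
dosage <- p_pat + p_mat                   # expected reference-allele count (0..2)
fit0 <- lm(y ~ Xg, weights = r)           # polygenic background only
fit1 <- lm(y ~ dosage + Xg, weights = r)  # plus the additive substitution effect
anova(fit0, fit1)                         # F-test; -log10(P) compared per region
```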
Searching for candidate genes
Genes that harboured variants with a potential functional impact, or variants that showed the highest P-values in the previous analyses, were compared with functional candidate genes selected from QTL or gene expression studies related to GIN resistance. In particular, we took advantage of the summary provided by [51], which thoroughly reviewed the recent literature on the subject. They identified 11 SNP chip-based QTL detection analyses (based on GWAS, LA, LDLA, selection sweep mapping or regional heritability mapping methods) from which they extracted 230 significantly associated genomic regions. Moreover, they proposed a list of 1892 genes reported as highly expressed or differentially expressed after GIN infection in sheep by 12 different experiments in the field. The QTL regions and GIN-activated genes proposed by [51] were remapped from the Ovis aries genome 3.1 assembly to the Oar4.0 version by using the Biomart and NCBI remapping services for comparison with our results.
Finally, we performed an over-representation analysis (ORA) of gene ontology (GO) biological process terms for the genes harboring significant mutations or mutations with functional consequences on the transcripts. We performed the ORA with the web-based software WebGestalt [52]. Gene symbols of human gene orthologues were retrieved from the OrthoDB v10 database [53], starting from the NCBI IDs of sheep genes from the Ovis aries annotation release 102. The human genome protein coding database was taken as reference and the following parameters were used for the analysis: default statistical method (hypergeometric); minimum number of genes included in a term = 5; multiple test adjustment = BH method (Benjamini–Hochberg FDR). The top ten categories were retained based on FDR rank.
Variance components
Table 2 shows the variance component estimates obtained with the repeatability animal model. The heritability and repeatability estimates of lnFec were 0.21 ± 0.015 and 0.27 ± 0.012, respectively.
Table 2 Estimates and standard errors of genetic, permanent environment (Pe) and residual variances, and repeatability (Rp) and heritability (h2) estimates for lnFec
Figure 1 presents the Manhattan plot of the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) corresponding to the null hypothesis that the effects of the PC that explain 99% of the variability due to the Sarda base gametes at each locus (43,390 SNPs) are zero. Two hundred and two SNPs exceeded the 5% genome-wide significance threshold. With the exception of Ovis aries chromosome (OAR) 1, on which only one significant location was found, each chromosome with significant SNPs carried several of them. After clustering the significant locations on the same chromosome, ten QTLR were defined on nine chromosomes (Table 3). The ratio between the ANOVA estimator of the QTL variance and the total phenotypic variance ranged from 0.0087 to 0.0176.
Manhattan plot of the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) corresponding to the null hypothesis that the effects of principal components that explain 99% of the variability due to the Sarda base gametes at each locus are zero. The grey line indicates the 0.05 genome-wide significance threshold determined by Bonferroni correction for 43,390 tests
Table 3 QTL regions from the LDLA analysis
The most significant location (\(- {\text{log}}_{10} \left( {Pvalue} \right)\) = 12.861) was in a large region on OAR20, which covered almost 20 Mb and included 154 significant SNPs. Correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) at the peak position and the other 153 significant locations were always higher than 0.25. The second most significant peak was on OAR12, in a QTLR spanning 5.18 Mb and including another 18 significant SNPs, with correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) greater than 0.46. The third QTLR in order of significance was at the beginning of OAR4, spanned 4.6 Mb and included six SNPs. Eleven SNPs on OAR19 exceeded the 5% genome-wide significance threshold. Although the two most distant SNPs defined an interval of about 12.5 Mb, all the SNPs clustered in the same QTLR, since the correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\) were always higher than 0.48. Other QTLR (approximately 500 to 700 kb long and including from one to three significant SNPs) were identified on OAR15, 6, 7 and 2. An additional significant SNP, ~100 Mb away from the previous one, was also identified on OAR2. The last QTLR was defined as the 2-Mb interval surrounding the single significant SNP on OAR1.
The QTLR, rounded to the nearest Mb, were further investigated with WGS data. Overall, 712,987 biallelic SNPs were extracted from the target regions. Among these, 649,054 were already known in the European Variation Archive (EVA, ftp://ftp.ebi.ac.uk/pub/databases/eva/rs_releases/release_1/by_species/Sheep_9940/GCA_000298735.2), while 63,933 (8.96%) were novel variants without an associated rs identifier.
The SNP density ranged from 7711 to 14,428 SNPs per Mb across regions. The accuracy of imputation at the 50k SNP positions ranged from 0.979 on OAR7 to 0.990 on OAR6 (Table 4). The imputation process resulted in an average information content across gametes and QTLR of 0.976 ± 0.17, which ranged from 0.967 ± 0.02 for OAR4 to 0.985 ± 0.14 for OAR12. Based on such informativeness, we performed an association analysis at each polymorphic site from WGS (Table 5). Graphical comparisons between the Manhattan plots of the LDLA and WGS-based association analyses are reported in Additional file 1: Figs. S1–S10.
Table 4 Description of the QTL regions from whole-genome sequences and results of the imputation procedure
Table 5 Results of the association analysis based on imputed alleles at the polymorphic sites from WGS
QTL on OAR4, 12, 19 and 20 remained the most significant. As in the LDLA analysis, the test statistic profile in the WGS analysis was not unimodal and, in some cases, the most significant positions were at different locations compared to the previous analysis. Thus, on OAR4, the peak from the WGS association analysis mapped at 8,686,421 bp, closer to the second peak and almost 3.3 Mb away from the most significant position identified with LDLA. Similarly, on OAR12, the WGS peak position was at 41,043,088 bp, 1.6 Mb from the LDLA peak and close to a SNP from the OvineSNP50 Beadchip which did not reach genome-wide significance with LDLA (\(- {\text{log}}_{10} \left( {Pvalue} \right)\) = 5.79). On OAR19, the most significant positions in the LDLA and WGS analyses were only 467 kb apart, although the explored region was 14 Mb long and showed several peaks in both analyses. As far as the QTLR on OAR20 was concerned, the most significant position in the WGS association analysis was almost 5 Mb away from the LDLA peak. However, the other significant WGS SNPs were close to the LDLA peak. Indeed, the second peak from WGS was only 68 kb away from the LDLA peak. Moreover, the SNPs from the OvineSNP50 Beadchip that were closest to the second (rs416381272) and third (rs411905117) significant WGS peaks also ranked third and second in the LDLA analysis. In the other analysed QTLR, which had lower significance levels and smaller numbers of significant SNPs, peak positions from WGS data were within a distance of 500 kb from the LDLA peaks. Finally, while nominal P-values remained similar in the two analyses for most of the investigated regions, an evident drop in significance was observed on OAR15, where the \(- {\text{log}}_{10} \left( {Pvalue} \right)\) dropped from 7.36 in the LDLA analysis to 4.97 in the WGS-based association analysis.
As far as the functional annotation was concerned, SnpEff provided 2,250,514 effects for the 712,987 analysed SNPs in the explored 60 Mb, since a variant can affect more than one gene and a gene can have multiple transcripts (Table 6).
Table 6 Summary of the genomic features in the investigated regions
The number of effects by impact (high, moderate, modifier and low), type and region according to the SnpEff classification is reported in Additional file 2: Tables S1–S10. Among the SNPs that affected transcripts, 0.8 to 1% per region concerned pseudogenes and were not considered. In addition, variants that were located in intergenic regions (from 4.2 to 27.4% of the predicted effects per QTLR) were not further investigated.
Finally, we focused on variants that were classified as having a high impact on the transcripts of protein coding genes (classified by SnpEff as: splice_acceptor_variant; splice_donor_variant; start_lost; stop_gained; stop_lost) or a moderate impact (all of which were predicted as missense in our case, i.e. variants that change one or more bases, resulting in a different amino acid sequence while the length of the protein is preserved). On the whole, 3538 polymorphisms were predicted to cause high-impact or missense effects (340 and 9105 effects, respectively) on the multiple transcripts of 530 protein coding genes. A detailed description of the classification of the retained variants is in Additional file 3: Table S11.
The ten most significant SNPs from the WGS analysis were all classified as modifiers, since they were either intergenic or intronic (see Additional file 4: Table S12), and thus had no predicted effect on transcripts. None of the high-impact variants showed high significance levels. Indeed, only four missense variants exceeded the empirical threshold of \(- {\text{log}}_{10} \left( {Pvalue} \right)\) equal to the maximum per region minus 2: one affected three transcripts of the CIART (circadian associated repressor of transcription) gene on OAR1 (rs159646335) and three affected the transcript of the OTOG (otogelin) gene on OAR15 (rs420057627, rs401738285 and rs422155776).
The 530 genes that harbored high-impact or moderate-impact (missense) variants and another 13 genes with polymorphisms exceeding the empirical threshold of \(max\left( { - {\text{log}}_{10} \left( {Pvalue} \right)} \right) - 2\) were submitted to an enrichment analysis of GO biological process terms. Of the 543 genes considered, 50 did not have a human ortholog in the OrthoDB database [53] and 493 mapped to 442 human genes, since 53 shared the same human ortholog. Finally, 376 genes were annotated to the selected functional categories (GO biological process) and were used for the enrichment analysis.
None of the GO terms identified by the enrichment analysis of the biological process database was significantly enriched. The ten top-ranked terms identified (see Additional file 5: Table S13) (interferon-gamma-mediated signaling pathway; sialic acid transport; T cell receptor signaling pathway; activation of immune response; positive regulation of immune system process; regulation of immune system process; immune response-activating cell surface receptor signaling pathway; immune response-regulating signaling pathway; innate immune response; and defense response) were further clustered into three superior categories according to the weighted set cover method for redundancy reduction available in WebGestalt [52]: sialic acid transport; regulation of immune system process; and defense response. The last two categories, which clearly relate to resistance to diseases, included 53 and 56 genes, respectively, 36 of which enriched both terms. Among the genes in one of these two higher GO categories, 12 were also in the list of GIN-activated genes provided by Chitneedi et al. [51]: CTSS on OAR1, TNFRSF1B and SELE on OAR12, IL5RA on OAR19, and IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF on OAR20.
The heritability estimate of lnFec in this study was low to moderate and consistent with previous studies in adult ewes, which reported heritabilities of FEC, after appropriate logarithmic or square root transformation, ranging from 0.09 [54] to 0.21 [12] and 0.35 [14]. In contrast, the repeatability estimate was higher, with the permanent environmental variance equal to 6% of the total phenotypic variance. Aguerre et al. [14] did not find significant differences between heritability and repeatability estimates in naturally-infected ewes and suggested that individual variability was mainly due to differences in the genetic background rather than in the immune history of the animals. Although the characterisation of worm species in individual samples was not systematically performed in our experiment, it has been demonstrated that resistance to different species of nematodes tends to be interrelated, with genetic correlations between FEC values from different species or genera of parasites being generally close to 0.5, or higher in some cases [55, 56]. Moreover, it has been shown that sheep that are selected on the basis of their response to artificial challenges respond similarly when exposed to natural infection, and a high positive genetic correlation was estimated between FEC recorded under artificial and natural infection [14, 57]. Such evidence and the heritability estimate found in our study suggest that genetic selection for resistance to parasites could be considered in the Sarda breed.
The LDLA analysis identified 202 genomic positions that were significantly associated with FEC. We grouped these positions into regions based on the correlations between the predicted effects of the QTL. Five of the ten identified QTLR (on OAR4, 7, 12, 19 and 20) overlapped with regions that were shown to be associated with traits related to GIN resistance in previous SNP-based studies. In particular, the QTLR on OAR4, 12, 19 and 20 overlap with significant windows identified by [21] in a meta-analysis based on the regional heritability mapping method on data including the first two generations of our experimental population. The QTLR on OAR19 has also been found to be significantly associated with FEC measured in lambs [58], while several positions on OAR20 have been reported as associated with susceptibility to parasites in other studies [17, 19, 20]. The QTLR on OAR7 falls in a region that was identified in a breed of sheep adapted to a tropical climate [59] and is close to a signature of selection detected by comparing two breeds selectively bred for high and low FEC [22]. Regions associated with resistance to nematode infection on OAR2 [20, 58, 59], OAR6 [20, 23, 59, 61] and OAR15 [58, 61] were found in several studies, but only our first QTLR on OAR2 (Q_02_1) was close to previously reported significant positions [20, 58, 59].
QTL associated with nematode resistance have been identified on almost all the ovine chromosomes (see [10, 62] and, for a recent summary, [51]). However, the comparison of results between studies is complex due to the variability of the breeds and nematode species analyzed, and to the use of different statistical approaches. It is likely that resistance to GIN is a complex trait that is determined by a large number of genes [63], and, to date, no major gene has been identified.
In this study, we examined whether combining the significant results obtained from an association analysis of accurately imputed data with the functional annotation of SNPs within target regions was advantageous. The original idea was to verify whether the significance levels of SNPs could help to pinpoint functional variants with a potential impact on candidate genes, i.e. genes identified based on their ontological classification or reported as differentially expressed in studies that analyze differences in the susceptibility of sheep to nematodes. All these results are summarized in Additional file 3: Table S11.
The WGS association analysis was not able to provide a definitive significance profile within the QTLR. In all the QTLR, the number of peaks remained large, and the distances between them were often considerable. This is likely a consequence of the large size of the chromosomal segments with high correlations between \(\widehat{{{\varvec{y}}_{{{\varvec{Q}}_{{\varvec{l}}} }} }}\), which reveals high levels of LD within the QTLR. Moreover, none of the most significant SNPs showed a functional effect on the genes' transcripts. This result may be partly due to the fact that we focused on intragenic regions of protein coding genes, whereas it has been suggested that a large part of the genetic variability of quantitative traits lies in regulatory or non-protein coding regions, which are still very poorly annotated in the ovine genome.
However, our results indicate that the QTLR located on OAR12, 19 and 20 are strongly involved in the complex mechanism of resistance of sheep to GIN. Not only do these regions harbor the most significant SNPs in both the LDLA and WGS analyses, but they have also been reported in the literature, both in other QTL detection analyses and in studies on GIN resistance based on differential gene expression. In particular, in these regions, we found genes that: (i) contain polymorphisms with a high-impact or missense effect, (ii) are included in the list of GIN-activated genes, and (iii) contribute to the most represented GO processes in our enrichment analysis. Among these genes, two contributed to enrich the GO terms regulation of immune system process and defense response and mapped to the QTLR on OAR12: the TNFRSF1B (TNF receptor superfamily member 1B) gene, which harbors a missense mutation (c.103G > A) in exon 2 at position 39,567,687 bp and is very close to the peak of the LDLA analysis (39,430,517 bp), and the SELE (selectin E) gene, which contains four missense variants. According to the Entrez summary for the human ortholog, SELE encodes a protein that is found in cytokine-stimulated endothelial cells and is thought to be responsible for the accumulation of blood leukocytes at sites of inflammation by mediating the adhesion of cells to the vascular lining. In sheep, Gossner et al. [64] found that the SELE gene is down-regulated in the abomasal lymph nodes of resistant lambs infected with T. circumcincta, which suggests that a possible component of the response of resistant animals to GIN infection could be the repression of acute inflammation and tissue healing.
On OAR19, the most significant peak of the WGS association analysis falls in the first intron of the GRM7 (glutamate metabotropic receptor 7) gene, which is neither included in the list of GIN-activated genes nor contributes to the selected GO terms. However, in the explored QTLR on this chromosome, we found 13 missense variants in the IL5RA (interleukin 5 receptor subunit alpha) gene, which supports the enriched GO term "defense response" in our GO enrichment analysis and appears in the list of GIN-activated genes. Indeed, the IL5RA gene was found to have an increased expression in resistant animals in several studies (Scottish Blackface lambs resistant to T. circumcincta [64]; resistant Churra sheep infected by the same species [65]; resistant lambs of two different selection flocks of Merino sheep [66]).
The QTLR identified on OAR20 is indeed very large and encompasses the MHC region, although the genes of the MHC are located 4 to 6 Mb away from the most significant LDLA location. The MHC complex plays an important role in presenting processed antigens to host T lymphocytes, causing T cell activation and an immunological cascade of events that builds the host immunity. Due to the highly polymorphic nature of the MHC region, it is difficult to identify causative mutations useful for selection for GIN resistance [62]. The most significant SNP in the WGS analysis (rs404860665) mapped to the fourth intron of the LOC101111058 (butyrophilin-like protein 1) gene, for which no function is defined in NCBI for sheep. Since no human orthologue of this gene was found in the OrthoDB database [53], it was not included in the enrichment analysis. However, it is highly expressed in the gastrointestinal tract of sheep (caecum, duodenum, colon, and rectum). Moreover, there is accumulating evidence that butyrophilin-like proteins may act as local regulators of intestinal inflammation in other species [67].
In the target region on OAR20, another 20 missense mutations were detected in eight genes (IL17A, IL17F, TRIM26, TRIM38, TNFRSF21, LOC101118999, VEGFA, and TNF), which are present in the list of GIN-activated genes and contributed to enrich the main GO terms "regulation of immune system process" and "defense response". Among these, the genes encoding interleukin 17 (IL17A and IL17F) have been mentioned as positional candidates for GIN resistance [68], but, to date, they have not been described in studies on sheep resistance to GIN. However, Gadahi et al. [69] found that the IL-17 level was significantly increased in peripheral blood mononuclear cells (PBMC) of goats incubated with Haemonchus contortus excretory and secretory proteins (HcESP) and suggested that such an enhanced IL-17 level might favor the survival of the worm in the host. Moreover, it has been reported that the IL17F gene showed the most significant expression difference in the response of the abomasal mucosa of Creole goat kids infected with Haemonchus contortus, i.e. its expression was three times higher in resistant compared to susceptible animals [70]. Missense mutations were also detected in the TNF (tumor necrosis factor) and TNFRSF21 (TNF receptor superfamily member 21) genes. Tumor necrosis factor (TNF) is a cytokine involved in systemic inflammation. The interactions between TNF family ligands and their receptors are involved in the modulation of a number of signaling pathways in the immune system, such as cell proliferation, differentiation, apoptosis and survival [71]. Artis et al. [72] suggested a role for TNF-α in regulating Th2 cytokine responses in the intestine, which has a significant effect on protective immunity to helminth infection. Moreover, the TNFα gene was relatively highly expressed in intestinal lymph cells of sheep selected for resistance to nematodes during infection with Trichostrongylus colubriformis [73]. In mice, TNFRSF21-knockout studies suggest that this gene plays a role in T-helper cell activation and may be involved in inflammation and immune regulation [71]. A missense mutation was found in the VEGFA (vascular endothelial growth factor A) gene, which was differentially expressed in the abomasal lymph nodes of lambs with different susceptibilities to GIN [64] and in the abomasal mucosa of sheep infected with Haemonchus contortus [74]. Finally, nine already known missense mutations were detected in the TRIM26 and TRIM38 genes. The products of these genes belong to the tripartite motif (TRIM) protein family, which comprises more than 70 members in humans. Accumulating evidence indicates that TRIM proteins play crucial roles in the regulation of the pathogenesis of autoimmune diseases and in the host defense against pathogens, especially viruses [75]. Both genes were among the GIN-activated genes and contributed to enrich the terms "defense response" (TRIM38) and "interferon-gamma-mediated signaling pathway", "innate immune response" and "defense response" (TRIM26). Lyu et al. [76], who investigated the risk of nasopharyngeal carcinoma in humans, detected a regulatory variant in TRIM26 and suggested that the downregulation of TRIM26, which depends on the allele at this variant, contributed to the downregulation of several immune genes and thus was associated with a low immune response.
Our results show that selective breeding may be an option to limit the problems related to gastrointestinal nematode infection in sheep. On the one hand, the heritability estimate and the QTL detection results confirm that both traditional progeny testing and marker-assisted selection are realistic options, although the laboriousness of fecal egg counting on a large scale makes marker-assisted selection potentially more cost-effective. Indeed, the ten significant markers identified in our study, which are already available on commercial Illumina arrays, explain an important portion of the genetic variation in our large population. On the other hand, the combined use of whole-genome data and functional annotation did not provide any marker or causative mutation that could improve the efficiency of a marker-assisted selection program in the short term. However, our study, which was carried out on a large experimental population, provides a first list of candidate genes and SNPs that could be used to guide further validation studies on independent populations. In the mid-term, the expected advancements in the quality of the annotation of the ovine genome, together with experimental designs based on sequence data and phenotypes from multiple breeds that show different LD extents and gametic phases, may help to identify causative mutations. As far as the Sarda breed is concerned, the Breeders Association is assessing the feasibility of a selection program for nematode resistance based on fecal egg counting and on the genotypes described in this study for the nucleus flock, combined with the genotyping of selection candidate males that are bred in Herd Book farms and are genetically connected with the experimental flock.
The data that support the findings of this study are available from Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna but restrictions apply to the availability of these data, which were used under license for the current study, and thus are not publicly available. However, data are available from the authors upon reasonable request and with permission of Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna.
Kaplan RM, Vidyashankar AN. An inconvenient truth: Global worming and anthelmintic resistance. Vet Parasitol. 2012;186:70–8.
Mavrot F, Hertzberg H, Torgerson P. Effect of gastro-intestinal nematode infection on sheep performance: a systematic review and meta-analysis. Parasit Vectors. 2015;8:557.
Geurden T, Hoste H, Jacquiet P, Traversa D, Sotiraki S, Frangipane di Regalbono A, et al. Anthelmintic resistance and multidrug resistance in sheep gastro-intestinal nematodes in France, Greece and Italy. Vet Parasitol. 2014;201:59–66.
Aguiar de Oliveira P, Riet-Correa B, Estima-Silva P, Coelho ACB, dos Santos BL, Costa MAP, et al. Multiple anthelmintic resistance in Southern Brazil sheep flocks. Rev Bras Parasitol Vet. 2017;26:427–32.
Sargison ND, Jackson F, Bartley DJ, Wilson DJ, Stenhouse LJ, Penny CD. Observations on the emergence of multiple anthelmintic resistance in sheep flocks in the south-east of Scotland. Vet Parasitol. 2007;145:65–76.
McMahon C, Bartley DJ, Edgar HWJ, Ellison SE, Barley JP, Malone FE, et al. Anthelmintic resistance in Northern Ireland (I): Prevalence of resistance in ovine gastrointestinal nematodes, as determined through faecal egg count reduction testing. Vet Parasitol. 2013;195:122–30.
Jackson F, Miller J. Alternative approaches to control-Quo vadit? Vet Parasitol. 2006;139:371–84.
Brito DL, Dallago BSL, Louvandini H, dos Santos VRV, de Araújo Torres SEF, Gomes EF, et al. Effect of alternate and simultaneous grazing on endoparasite infection in sheep and cattle. Rev Bras Parasitol Vet. 2013;22:485–94.
Houdijk JGM, Kyriazakis I, Kidane A, Athanasiadou S. Manipulating small ruminant parasite epidemiology through the combination of nutritional strategies. Vet Parasitol. 2012;186:38–50.
Zvinorova PI, Halimani TE, Muchadeyi FC, Matika O, Riggio V, Dzama K. Breeding for resistance to gastrointestinal nematodes - the potential in low-input/output small ruminant production systems. Vet Parasitol. 2016;225:19–28.
Bouix J, Krupinski J, Rzepecki R, Nowosad B, Skrzyzala I, Roborzynski M, et al. Genetic resistance to gastrointestinal nematode parasites in Polish long-wool sheep. Int J Parasitol. 1998;28:1797–804.
Sechi S, Salaris S, Scala A, Rupp R, Moreno C, Bishop SC, et al. Estimation of ( co ) variance components of nematode parasites resistance and somatic cell count in dairy sheep. Ital J Anim Sci. 2009;8:156–8.
Assenza F, Elsen J-M, Legarra A, Carré C, Sallé G, Robert-Granié C, et al. Genetic parameters for growth and faecal worm egg count following Haemonchus contortus experimental infestations using pedigree and molecular information. Genet Sel Evol. 2014;46:13.
Aguerre S, Jacquiet P, Brodier H, Bournazel JP, Grisez C, Prévot F, et al. Resistance to gastrointestinal nematodes in dairy sheep: genetic variability and relevance of artificial infection of nucleus rams to select for resistant ewes on farms. Vet Parasitol. 2018;256:16–23.
Beh KJ, Hulme DJ, Callaghan MJ, Leish Z, Lenane I, Windon RG, et al. A genome scan for quantitative trait loci affecting resistance to Trichostrongylus colubriformis in sheep. Anim Genet. 2002;33:97–106.
Crawford AM, Paterson KA, Dodds KG, Diez Tascon C, Williamson PA, Roberts Thomson M, et al. Discovery of quantitative trait loci for resistance to parasitic nematode infection in sheep: I. Analysis of outcross pedigrees. BMC Genomics. 2006;7:178.
Davies G, Stear MJ, Benothman M, Abuagob O, Kerr A, Mitchell S, et al. Quantitative trait loci associated with parasitic infection in Scottish blackface sheep. Heredity. 2006;96:252–8.
Gutiérrez-Gil B, Pérez J, Álvarez L, Martínez-Valladares M, De La Fuente LF, Bayán Y, et al. Quantitative trait loci for resistance to trichostrongylid infection in Spanish Churra sheep. Genet Sel Evol. 2009;41:46.
Sallé G, Jacquiet P, Gruner L, Cortet J, Sauvé C, Prévot F, et al. A genome scan for QTL affecting resistance to Haemonchus contortus in sheep. J Anim Sci. 2012;90:4690–705.
Riggio V, Matika O, Pong-Wong R, Stear MJ, Bishop SC. Genome-wide association and regional heritability mapping to identify loci underlying variation in nematode resistance and body weight in Scottish Blackface lambs. Heredity. 2013;110:420–9.
Riggio V, Pong-Wong R, Sallé G, Usai MG, Casu S, Moreno CR, et al. A joint analysis to identify loci underlying variation in nematode resistance in three European sheep populations. J Anim Breed Genet. 2014;131:426–36.
McRae KM, McEwan JC, Dodds KG, Gemmell NJ. Signatures of selection in sheep bred for resistance or susceptibility to gastrointestinal nematodes. BMC Genomics. 2014;15:637.
Atlija M, Arranz J-J, Martinez-Valladares M, Gutiérrez-Gil B. Detection and replication of QTL underlying resistance to gastrointestinal nematodes in adult sheep using the ovine 50K SNP array. Genet Sel Evol. 2016;48:4.
Sechi S, Giobbe M, Sanna G, Casu S, Carta A, Scala A. Effects of anthelmintic treatment on milk production in Sarda dairy ewes naturally infected by gastrointestinal nematodes. Small Rumin Res. 2010;88:145–50.
Scala A, Bitti PL, Fadda M, Pilia A, Varcasia A. I trattamenti antiparassitari negli allevamenti ovini della Sardegna. In: Proceedings of the 7th Congress of Mediterranean Federation for Health and Production of Ruminants: 22–24 April 1999; Santarem. 1999. p. 267–72.
Salaris S, Usai MG, Casu S, Sechi T, Manunta A, Bitti M, et al. Perspectives of the selection scheme of the Sarda dairy sheep breed in the era of genomics. ICAR Tech Ser. 2018;23:79–88.
Usai MG, Casu S, Sechi T, Salaris SL, Miari S, Sechi S, et al. Mapping genomic regions affecting milk traits in Sarda sheep by using the OvineSNP50 Beadchip and principal components to perform combined linkage and linkage disequilibrium analysis. Genet Sel Evol. 2019;51:65.
Nicolazzi EL, Caprera A, Nazzicari N, Cozzi P, Strozzi F, Lawley C, et al. SNPchiMp vol 3: integrating and standardizing single nucleotide polymorphism data for livestock species. BMC Genomics. 2015;16:283.
Casu S, Sechi T, Usai MG, Miari S, Casula M, Mulas G, et al. Investigating a highly significant QTL for milk protein content segregating in Sarda sheep breed close to the caseins cluster region by whole genome re-sequencing of target animals. In: Proceedings of the 10th World Congress of Genetics Applied to Livestock Production; 2014. Accessed 24 Jan 2018.
Köster J, Rahmann S. Snakemake—a scalable bioinformatics workflow engine. Bioinformatics. 2018;34:3600.
Krueger F, James F, Ewels P, Afyounian E, Schuster-Boeckler B. FelixKrueger/TrimGalore: v0.6.7. https://doi.org/10.5281/zenodo.5127898 Accessed 3 Dec 2021.
Andrews S. FastQC: a quality control tool for high throughput sequence data. 2010. https://www.bioinformatics.babraham.ac.uk/projects/fastqc Accessed 3 Dec 2021.
Li H, Durbin R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics. 2009;25:1754–60.
Danecek P, Bonfield JK, Liddle J, Marshall J, Ohan V, Pollard MO, et al. Twelve years of SAMtools and BCFtools. Gigascience. 2021;10:1–4.
"Picard Toolkit." 2019. Broad Institute, GitHub Repository. https://broadinstitute.github.io/picard/; Accessed 3 Dec 2021.
McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010;20:1297–303.
Van der Auwera GA, Carneiro MO, Hartl C, Poplin R, del Angel G, Levy-Moonshine A, et al. From fastQ data to high-confidence variant calls: the genome analysis toolkit best practices pipeline. Curr Protoc Bioinform. 2013;43:11.
Raynaud J-P, William G, Brunault G. Etude de l'efficacité d'une technique de coproscopie quantitative pour le diagnostic de routine et le contrôle des infestations parasitaires des bovins, ovins, équins et porcins. Ann Parasitol Hum Comp. 1970;45:321–42.
Euzéby J. Diagnostic expérimental des helminthoses animales. Paris: Edition Vigot Frères; 1958.
van Wyk JA, Mayhew E. Morphological identification of parasitic nematode infective larvae of small ruminants and cattle: a practical lab guide. Onderstepoort J Vet Res. 2013;80:539.
Butler DG, Cullis BR, Gilmour AR, Gogel BJ, Thompson R. ASReml-R reference manual version 4. Hemel Hempstead: VSN International Ltd; 2018.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, et al. Common SNPs explain a large proportion of the heritability for human height. Nat Genet. 2010;42:565–9.
Yang J, Lee SH, Goddard ME, Visscher PM. GCTA: a tool for genome-wide complex trait analysis. Am J Hum Genet. 2011;88:76–82.
Ripley B, Venables B, Bates DM, Firth D, Hornik K, Gebhardt A. Package "MASS". Support functions and datasets for Venables and Ripley's MASS. 2018. http://www.r-project.org. Accessed 3 Dec 2021.
Meuwissen T, Goddard M. The use of family relationships and linkage disequilibrium to impute phase and missing genotypes in up to whole-genome sequence density genotypic data. Genetics. 2010;185:1441–9.
Meuwissen TH, Goddard ME. Prediction of identity by descent probabilities from marker-haplotypes. Genet Sel Evol. 2001;33:605–34.
Elsen J-M, Mangin B, Goffinet B, Boichard D, Le Roy P. Alternative models for QTL detection in livestock I. General introduction. Genet Sel Evol. 1999;31:213–24.
Pong-Wong R, George AW, Woolliams JA, Haley CS. A simple and rapid method for calculating identity-by-descent matrices using multiple markers. Genet Sel Evol. 2001;33:453–71.
Cingolani P, Platts A, Wang LL, Coon M, Nguyen T, Wang L, et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly. 2012;6:80–92.
Usai MG, Casu S, Ziccheddu B, Sechi T, Miari S, Carta P, et al. Using identity-by-descent probability to impute whole genome sequence variants in a nucleus flock. Ital J Anim Sci. 2019;18:S52.
Chitneedi PK, Arranz JJ, Suárez-Vega A, Martínez-Valladares M, Gutiérrez-Gil B. Identification of potential functional variants underlying ovine resistance to gastrointestinal nematode infection by using RNA-Seq. Anim Genet. 2020;51:266–77.
Liao Y, Wang J, Jaehnig EJ, Shi Z, Zhang B. WebGestalt 2019: gene set analysis toolkit with revamped UIs and APIs. Nucleic Acids Res. 2019;47:W199-205.
Kriventseva EV, Kuznetsov D, Tegenfeldt F, Manni M, Dias R, Simão FA, et al. OrthoDB v10: sampling the diversity of animal, plant, fungal, protist, bacterial and viral genomes for evolutionary and functional annotations of orthologs. Nucleic Acids Res. 2019;47:D807–11.
Gutiérrez-Gil B, Pérez J, De La Fuente LF, Meana A, Martínez-Valladares M, San Primitivo F, et al. Genetic parameters for resistance to trichostrongylid infection in dairy sheep. Animal. 2010;4:505–12.
Bishop SC, Jackson F, Coop RL, Stear MJ. Genetic parameters for resistance to nematode infections in Texel lambs and their utility in breeding programmes. Anim Sci. 2004;78:185–94.
Gruner L, Bouix J, Brunel JC. High genetic correlation between resistance to Haemonchus contortus and to Trichostrongylus colubriformis in INRA 401 sheep. Vet Parasitol. 2004;119:51–8.
Gruner L, Bouix J, Vu Tien Khang J, Mandonnet N, Eychenne F, Cortet J, et al. A short-term divergent selection for resistance to Teladorsagia circumcincta in Romanov sheep using natural or artificial challenge. Genet Sel Evol. 2004;36:217–42.
Pickering NK, Auvray B, Dodds KG, McEwan JC. Genomic prediction and genome-wide association study for dagginess and host internal parasite resistance in New Zealand sheep. BMC Genomics. 2015;16:958.
Berton MP, de Oliveira Silva RM, Peripolli E, Stafuzza NB, Martin JF, Álvarez MS, et al. Genomic regions and pathways associated with gastrointestinal parasites resistance in Santa Inês breed adapted to tropical climate. J Anim Sci Biotechnol. 2017;8:73.
Al Kalaldeh M, Gibson J, Lee SH, Gondro C, van der Werf JHJ. Detection of genomic regions underlying resistance to gastrointestinal parasites in Australian sheep. Genet Sel Evol. 2019;51:37.
Benavides MV, Sonstegard TS, Kemp S, Mugambi JM, Gibson JP, Baker RL, et al. Identification of novel loci associated with gastrointestinal parasite resistance in a red Maasai x Dorper backcross population. PLoS ONE. 2015;10:e0122797.
Sweeney T, Hanrahan JP, Ryan MT, Good B. Immunogenomics of gastrointestinal nematode infection in ruminants—breeding for resistance to produce food sustainably and safely. Parasite Immunol. 2016;38:569–86.
Kemper KE, Emery DL, Bishop SC, Oddy H, Hayes BJ, Dominik S, et al. The distribution of SNP marker effects for faecal worm egg count in sheep, and the feasibility of using these markers to predict genetic merit for resistance to worm infections. Genet Res. 2011;93:203–19.
Gossner A, Wilkie H, Joshi A, Hopkins J. Exploring the abomasal lymph node transcriptome for genes associated with resistance to the sheep nematode Teladorsagia circumcincta. Vet Res. 2013;44:68.
Chitneedi PK, Suárez-Vega A, Martínez-Valladares M, Arranz JJ, Gutiérrez-Gil B. Exploring the mechanisms of resistance to Teladorsagia circumcincta infection in sheep through transcriptome analysis of abomasal mucosa and abomasal lymph nodes. Vet Res. 2018;49:39.
Zhang R, Liu F, Hunt P, Li C, Zhang L, Ingham A, et al. Transcriptome analysis unraveled potential mechanisms of resistance to Haemonchus contortus infection in Merino sheep populations bred for parasite resistance. Vet Res. 2019;50:7.
Yamazaki T, Goya I, Graf D, Craig S, Martin-Orozco N, Dong C. A butyrophilin family member critically inhibits T cell activation. J Immunol. 2010;185:5907–14.
Benavides MV, Sonstegard TS, Van Tassell C. Genomic regions associated with sheep resistance to gastrointestinal nematodes. Trends Parasitol. 2016;32:470–80.
Gadahi JA, Yongqian B, Ehsan M, Zhang ZC, Wang S, Yan RF, et al. Haemonchus contortus excretory and secretory proteins (HcESPs) suppress functions of goat PBMCs in vitro. Oncotarget. 2016;7:35670–9.
Aboshady HM, Mandonnet N, Félicité Y, Hira J, Fourcot A, Barbier C, et al. Dynamic transcriptomic changes of goat abomasal mucosa in response to Haemonchus contortus infection. Vet Res. 2020;51:44.
Liu J, Na S, Glasebrook A, Fox N, Solenberg PJ, Zhang Q, et al. Enhanced CD4+ T cell proliferation and Th2 cytokine production in DR6-deficient mice. Immunity. 2001;15:23–34.
Artis D, Humphreys NE, Bancroft AJ, Rothwell NJ, Potten CS, Grencis RK. Tumor necrosis factor α is a critical component of interleukin 13- mediated protective T helper cell type 2 responses during helminth infection. J Exp Med. 1999;190:953–62.
Pernthaner A, Cole SA, Morrison L, Hein WR. Increased expression of interleukin-5 (IL-5), IL-13, and tumor necrosis factor alpha genes in intestinal lymph cells of sheep selected for enhanced resistance to nematodes during infection with Trichostrongylus colubriformis. Infect Immun. 2005;73:2175–83.
Guo Z, González JF, Hernandez JN, McNeilly TN, Corripio-Miyar Y, Frew D, et al. Possible mechanisms of host resistance to Haemonchus contortus infection in sheep breeds native to the Canary Islands. Sci Rep. 2016;6:26200.
Yang W, Gu Z, Zhang H, Hu H. To TRIM the immunity: From innate to adaptive immunity. Front Immunol. 2020;11:02157.
Lyu XM, Zhu XW, Zhao M, Zuo XB, Huang ZX, Liu X, et al. A regulatory mutant on TRIM26 conferring the risk of nasopharyngeal carcinoma by inducing low immune response. Cancer Med. 2018;7:3848–61.
The authors gratefully acknowledge Severino Tolu and the staff of the AGRIS experimental unit at Monastir for technical support in raising, monitoring and recording the animals; Giorgia Dessì for participating in the fecal egg counting; Stefania Sechi for her contribution in editing and archiving data collected in early stages of the experiment.
This study was part of the MIGLIOVIGENSAR project funded by Centro Regionale di Programmazione (CRP), Regione Autonoma della Sardegna (LR n.7/2007 R.A).
Genetics and Biotechnology – Agris Sardegna, Olmedo, Italy
Sara Casu, Mario Graziano Usai, Tiziana Sechi, Sotero L. Salaris, Sabrina Miari, Giuliana Mulas & Antonello Carta
Department of Veterinary Medicine, University of Sassari, Sassari, Italy
Claudia Tamponi, Antonio Varcasia & Antonio Scala
SC carried out the phenotypic and the functional annotation analyses, participated in data interpretation and drafted the manuscript. MGU developed the statistical methodology for QTL detection and imputation analyses, wrote the Fortran programs and performed the statistical analyses. TS, with the collaboration of GM and SM, performed the genotyping. SLS participated in the data analyses and interpretation of results. AV and CT performed the fecal egg count. AS planned the recording system and managed the fecal egg counting. AC conceived the overall design, undertook the project management, contributed to the interpretation of results and critically revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Mario Graziano Usai.
Ewes from the experimental farm were raised under breeding conditions that are similar to those of commercial sheep flocks. Blood sampling and anthelmintic treatments were performed by veterinarians or under veterinarian supervision following standard procedures and relevant national guidelines to ensure appropriate animal care.
Additional file 1: Figures S1–S10. Graphical comparison of the LDLA and WGS-based association analyses within each QTL region. Each figure shows the test statistic (\(- {\text{log}}_{10}\) of the nominal P-value) profile of the LDLA analysis (LDLA Mapping, red line) and the Manhattan plot of the association analysis based on genotypes imputed from the re-sequenced animals (WGS Mapping, blue dots), with positions on the Ovis aries genome assembly v4.0: Figure S1. Q_01_1 (OAR1, imputation from 99 to 100 Mb); Figure S2. Q_02_1 (OAR2, 135–137 Mb); Figure S3. Q_02_2 (OAR2, 212–214 Mb); Figure S4. Q_04_1 (OAR4, 4–10 Mb); Figure S5. Q_06_1 (OAR6, 12–14 Mb); Figure S6. Q_07_1 (OAR7, 87–89 Mb); Figure S7. Q_12_1 (OAR12, 35–42 Mb); Figure S8. Q_15_1 (OAR15, 33–35 Mb); Figure S9. Q_19_1 (OAR19, 18–32 Mb); Figure S10. Q_20_1 (OAR20, 16–37 Mb).
Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_01_1 on Ovis aries chromosome 1. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_01_1 on chromosome 1 (from 99000291 to 100998839 bp, Ovis aries genome assembly v4.0).
Table S2. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_02_1 on Ovis aries chromosome 2. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_02_1 on chromosome 2 (from 135000202 to 136999313 bp, Ovis aries genome assembly v4.0).
Table S3. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_02_2 on Ovis aries chromosome 2. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_02_2 on chromosome 2 (from 212000099 to 213999982 bp, Ovis aries genome assembly v4.0).
Table S4. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_04_1 on Ovis aries chromosome 4. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_04_1 on chromosome 4 (from 4000037 to 10000000 bp, Ovis aries genome assembly v4.0).
Table S5. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_06_1 on Ovis aries chromosome 6. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_06_1 on chromosome 6 (from 12000078 to 13999887 bp, Ovis aries genome assembly v4.0).
Table S6. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_07_1 on Ovis aries chromosome 7. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_07_1 on chromosome 7 (from 87000021 to 88999946 bp, Ovis aries genome assembly v4.0).
Table S7. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_12_1 on Ovis aries chromosome 12. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_12_1 on chromosome 12 (from 35000043 to 41999843 bp, Ovis aries genome assembly v4.0).
Table S8. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_15_1 on Ovis aries chromosome 15. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_15_1 on chromosome 15 (from 33000037 to 34999984 bp, Ovis aries genome assembly v4.0).
Table S9. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_19_1 on Ovis aries chromosome 19. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_19_1 on chromosome 19 (from 18000014 to 31999894 bp, Ovis aries genome assembly v4.0).
Table S10. Variant classification according to SnpEff 4.3t of biallelic SNPs identified within the QTL region Q_20_1 on Ovis aries chromosome 20. Summary table extracted from the additional SnpEff output file "snpEff_summary.html", reporting the number of effects by impact and the number of effects per type and region, for the QTL region Q_20_1 on chromosome 20 (from 16000304 to 36997864 bp, Ovis aries genome assembly v4.0).
Additional file 3: Table S11.
Full characterisation of the retained SNPs: high- or moderate-impact variants or most significant variants from the association analysis mapping within the QTL regions. Description of the retained SNPs that mapped within the QTL regions identified in the present work: functional annotation from SnpEff; nominal significance level (−log10(nominal p-value)) from the WGS-based association analysis; enriched GO biological process terms from the WebGestalt analysis; and the study from which the candidate GIN-activated gene listed by Chitneedi et al. 2020 [51] was identified. The SNP positions are from the Ovis aries genome assembly v4.0.
Functional characterization of the 10 most significant SNPs per QTLR from the WGS analysis. Characterization of the 10 most significant SNPs of the QTLRs considered in this work and their functional consequences according to the annotation performed with SnpEff. The SNP positions are from the Ovis aries genome assembly v4.0.
Top hierarchical terms identified by the Gene Ontology (GO) enrichment analysis (biological process database) performed with WebGestalt. Results of the over-representation analysis (ORA) of GO biological process terms of the genes harbouring significant mutations or mutations with functional consequences on the transcripts, performed with WebGestalt. Gene symbols and IDs of human gene orthologues are reported. They were retrieved from the OrthoDB v10 database starting from the NCBI IDs of ovine genes from the Ovis aries annotation release 102.
Casu, S., Usai, M.G., Sechi, T. et al. Association analysis and functional annotation of imputed sequence data within genomic regions influencing resistance to gastro-intestinal parasites detected by an LDLA approach in a nucleus flock of Sarda dairy sheep. Genet Sel Evol 54, 2 (2022). https://doi.org/10.1186/s12711-021-00690-7
A novel algorithm for finding top-k weighted overlapping densest connected subgraphs in dual networks
Riccardo Dondi1,
Mohammad Mehdi Hosseinzadeh1 &
Pietro H. Guzzi ORCID: orcid.org/0000-0001-5542-29972
The use of networks for modelling and analysing relations among data is currently growing. Recently, the use of a single network for capturing all the aspects of some complex scenarios has shown limitations. Consequently, it has been proposed to use Dual Networks (DNs), a pair of related networks, to analyse complex systems. The two graphs in a DN have the same set of vertices and different edge sets. Common subgraphs among these networks may convey insights about the modelled scenarios. For instance, the detection of the Top-k Densest Connected subgraphs, i.e. a set of k subgraphs having the largest density in the conceptual network which are also connected in the physical network, may reveal sets of highly related nodes. After formalising the problem, we propose a heuristic to find a solution, since the problem is computationally hard. A set of experiments on synthetic and real networks is also presented to support our approach.
In recent years, the use of networks to manage and analyse experimental data in many fields has grown (Cannataro et al. 2010; Barabási 2011). For instance, in computational biology, associations among biological molecules (such as genes, proteins, and small lipids) are usually modelled as graphs. Data collected from social networks are modelled using graph theory, and their analysis may shed light on association patterns among users (Sapountzi and Psannis 2018; Abatangelo et al. 2009; Clark and Kalita 2014; Faisal et al. 2015; Cannataro et al. 2010).
Usually, data are modelled using a single network whose nodes represent entities and whose edges represent their relations. Then, the topological analysis of the network, i.e. of its global or local structures (Cannataro et al. 2010), reveals context-specific properties such as groups of related genes in biology or groups of related users in social networks (Liu et al. 2018). More recently, some works demonstrated that a single network may not be able to capture all the relationships among the considered elements; therefore, more complex models have been introduced, such as heterogeneous networks (Milano et al. 2020) and dual networks (Wu et al. 2016). A dual network is a pair of related graphs sharing the same node set but with two different edge sets. One network has unweighted edges and is called the physical graph. The second one has weighted edges and is called the conceptual graph. For example, in biology dual networks have been used to model interactions among genetic variants (Phillips 2008), where genetic interactions are modelled using the physical network and the quantitative effects of these interactions are modelled with the conceptual one.
An interesting problem in dual networks is the Densest Connected Subgraph (DCS) problem, that is, finding a common subgraph between the two networks that has two properties: it is connected in the physical network and densest in the conceptual one. A DCS in a dual network may convey relevant information. For instance, Guzzi et al. (2020) showed that a DCS may suggest missing links in social networks and capture similar interests among authors in a co-authorship dual network, where the physical network represents co-authorship and the conceptual network models shared topics.
The relevance of the problem arises in many real-life scenarios. For instance, in Phillips (2008) the authors extracted a DCS from dual networks to analyse interactions between genetic variants and their strength. Given two input graphs \(G_{c}(V,E_{c})\) (undirected and edge-weighted) and \(G_{p}(V,E_{p})\) (undirected and unweighted), the problem consists in finding a subset of nodes \(I_{s}\) that induces a densest community in \(G_{c}\) and a connected subgraph in \(G_{p}\). As proved in Wu et al. (2016), the DCS problem is NP-hard, since the Set Cover problem (Karp 2009) can be reduced to it. Therefore, there is a need for novel heuristics and computational approaches to solve it. Here we focus on a generalisation of this problem, since we search for a set of (overlapping) common subgraphs that are connected in the physical network and densest in the conceptual network, i.e. the top-k weighted overlapping densest connected subgraphs. The identification of the top-k densest overlapping subgraphs in a network has been considered in Galbrun et al. (2016); Dondi et al. (2019); Hosseinzadeh (2020).
Our approach is based on a two-step strategy: first, a single alignment graph is built from the dual network (Guzzi and Milenković 2017; Milano et al. 2020); then, we look for dense subgraphs in this network with an ad-hoc heuristic. Notice that these subgraphs correspond to dense subgraphs in the conceptual network and connected subgraphs in the physical one; therefore, they are solutions of the initial problem. Figure 1 depicts the workflow of our approach.
Workflow of the proposed approach. In the first step, the input conceptual and physical networks are merged using a network alignment approach; then, Weighted-Top-k-Overlapping DCS is applied on the alignment graph. Each extracted subgraph induces a connected subgraph in the physical network and one of the top-k overlapping weighted densest subgraphs in the conceptual one
Considering the state of the art, we should note that we allow more flexibility than other works such as Wu et al. (2016). In that work, the authors do not consider overlapping subgraphs, and their approach is limited to an exact correspondence of nodes between the networks. On the other hand, with respect to other approaches for finding densest subgraphs in a network (Balalau 2015; Galbrun et al. 2016; Dondi et al. 2019; Guzzi and Cannataro 2010), we consider weighted networks, an extension that can be useful in many contexts, in particular for biological and social networks.
We provide an implementation of our heuristic, and we evaluate our approach on synthetic datasets and on four real networks (a social network, two biological networks, and a co-authorship network). The experimental results confirm the effectiveness of our approach.
The paper is structured as follows: "Related work" section discusses related works, "Definitions" section gives definitions and formally introduces the problem we are interested into. "The proposed algorithm" section presents our heuristic; "Experiments" section discusses the case studies; finally "Conclusion" section concludes the paper.
Many complex systems cannot be efficiently modelled using a single network without loss of information. Therefore, the use of dual networks is growing (Wu et al. 2016; Sun and Kardia 2010). As introduced before, their applications span a large number of fields, from bioinformatics to social networks. In genetics, dual networks are used to describe and analyse interactions among genetic variants. They can reveal the common effects of multiple genetic variants (Sun and Kardia 2010), using a protein–protein interaction network that represents physical interactions and a weighted network that represents the relations between two genetic variants, usually measured by statistical tests.
A relevant problem in network analysis is that of discovering dense communities, as they represent strongly related nodes. The problem of finding communities in a network or a dual network depends on the specific model of dense or cohesive subgraph considered. Several models of cohesive subgraphs have been considered in the literature and applied in different contexts. One of the first definitions of a cohesive subgraph is a fully connected subgraph, i.e. a clique. However, finding a clique of maximum size, also referred to as the Maximum Clique Problem, is NP-hard (Hastad 1996) and difficult to approximate (Zuckerman 2006). Moreover, in real networks communities may have missing edges; therefore, the clique model is often too strict and may fail to find some important subgraphs. Consequently, many alternative definitions of cohesive subgraphs that are not fully interconnected have been introduced, including s-club, s-plex, and densest subgraph (Komusiewicz 2016; Dondi et al. 2019).
A densest subgraph is a subgraph with maximum density (where the density is the ratio between the number of edges and the number of nodes of the subgraph), and the Densest-Subgraph problem asks for a subgraph of maximum density in a given graph. The problem can be solved in polynomial time (Goldberg 1984; Kawase and Miyauchi 2018) and approximated within factor \(\frac{1}{2}\) (Asahiro et al. 2000; Charikar 2000). Notice that the Densest-Subgraph problem can also be extended to edge-weighted networks.
Recently, Wu et al. (2016), proposed an algorithm for finding a densest connected subgraph in a dual network. The approach is based on a two-step strategy. In the first step, the algorithm prunes the dual network without eliminating the optimal solution. In the second step, two greedy approaches are developed to build a search strategy for finding a densest connected subgraph. Briefly, the first step finds the densest subgraph in the conceptual network. The second step refines this subgraph to guarantee that it is connected in the physical network.
In this contribution, we use an approach based on local network alignment (LNA), which aims to find (relatively) small regions of similarity among two or more input networks. Such regions may be overlapping or not, and they represent conserved topological regions among the networks. For instance, in protein interaction networks these regions are related to conserved motifs or patterns of interactions (Guzzi and Milenković 2017). LNA algorithms are usually based on building an intermediate structure, called an alignment graph, and on its subsequent mining (Milano et al. 2020). For instance, AlignNemo (Ciriello et al. 2012) and its successor AlignMCL (Mina and Guzzi 2014) are based on the construction of alignment graphs (see the related papers for complete details about the construction of the alignment graph). GLAlign (Global Local Aligner) is a local network alignment methodology (Milano et al. 2018) that mixes topology information from global alignment and biological information according to a linear combination schema, while the more recent L-HetNetAligner (Milano et al. 2020) extends local alignment to heterogeneous networks.
While the literature on network mining has mainly focused on the problem of finding a single subgraph, interest in finding more than one subgraph has recently emerged (Balalau 2015; Galbrun et al. 2016; Dondi et al. 2019; Hosseinzadeh 2020; Cho et al. 2013). The proposed approaches usually allow overlap between the computed dense subgraphs. Indeed, there can be nodes that are shared between interesting dense subgraphs, for example hubs. The proposed approaches differ in the way they deal with overlap. The problem defined in Balalau (2015) controls the overlap by limiting the Jaccard coefficient between each pair of subgraphs of the solution. The Top-k-Overlapping problem, introduced in Galbrun et al. (2016), includes a distance function in the objective function. In this paper, we follow this last approach and extend it to weighted networks.
This section introduces the main concepts related to our problem.
Definition 1
Dual Network.
A Dual Network (DN) \(G(V,E_c,E_p)\) is a pair of networks: a conceptual weighted network \(G_c(V,E_c)\) and a physical unweighted one \(G_p(V,E_p)\).
Now, we introduce the definition of weighted density of a graph.
Density.
Given a weighted graph G(V, E, weight), let \(v \in V\) be a node of G, and let
$$\begin{aligned}vol(v)=\sum _{w:(v,w)\in E}weight(v,w)\end{aligned}$$
be the sum of the weights of the edges incident to v. The density of the weighted graph G is defined as
$$\begin{aligned} \rho (G)=\frac{\sum _{v \in V}vol(v)}{|V|}. \end{aligned}$$
Given a graph (weighted or unweighted) G with a set V of nodes and a subset \(Z \subseteq V\), we denote by G[Z] the subgraph of G induced by Z. Given \(E' \subseteq E\), we denote by \(weight(E')\) the sum of weights of edges in \(E'\). Given a dual network we denote by \(G_{p}[I]\), \(G_{c}[I]\), respectively, the subgraphs induced in the physical and conceptual network, respectively, by the set \(I \subseteq V\).
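For concreteness, the two definitions above can be computed directly with NetworkX, the package used in our implementation; the following is a minimal sketch, and the function names are illustrative, not taken from the paper's code.

```python
import networkx as nx

def weighted_density(G):
    """Weighted density rho(G): the sum of vol(v) over all nodes divided
    by |V|, where vol(v) is the sum of the weights of the edges incident
    to v. Each edge weight is therefore counted twice, once per endpoint."""
    n = G.number_of_nodes()
    if n == 0:
        return 0.0
    total_vol = sum(deg for _, deg in G.degree(weight="weight"))
    return total_vol / n

def induced_density(G, Z):
    """rho(G[Z]): density of the subgraph of G induced by the node set Z."""
    return weighted_density(G.subgraph(Z))
```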
A densest common subgraph (DCS), formally defined in the following, is a subset of nodes I that induces a densest subgraph in the conceptual network and a connected subgraph in the physical network.
Densest Common Subgraph.
Given a dual network \(G(V,E_c,E_p)\), a densest common subgraph in \(G(V,E_c,E_p)\) is a subset of nodes \(I \subseteq V\) such that \(G_p[I]\) is connected and the density of \(G_c[I]\) is maximum.
In this paper, we are interested in finding \(k \ge 1\) densest connected subgraphs. However, to avoid returning multiple copies of the same subgraph or subgraphs that are very similar, we consider the following distance function introduced in Galbrun et al. (2016).
Let \(G(V,E_c,E_p)\) be a dual network and let G[A], G[B], with \(A, B \subseteq V\), be two induced subgraphs of G. The distance between G[A] and G[B], denoted by \(d: 2^{V} \times 2^{V} \rightarrow \mathbb {R_{+}}\), is defined as \(d(G[A],G[B]) = 2-\frac{|A \cap B|^2}{|A||B|}\) if \(A \ne B\), and \(d(G[A],G[B]) = 0\) otherwise.
Notice that \(2-\frac{|A \cap B|^2}{|A||B|}\) decreases as the overlap between A and B increases.
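This distance admits a direct transcription in code; below is a minimal sketch, where A and B are plain node sets.

```python
def subgraph_distance(A, B):
    """d(G[A], G[B]) = 2 - |A ∩ B|^2 / (|A| |B|) if A != B, 0 otherwise.
    The value is 0 for identical node sets, 2 for disjoint ones, and
    decreases towards 1 as the overlap between A and B grows."""
    A, B = set(A), set(B)
    if A == B:
        return 0.0
    return 2.0 - len(A & B) ** 2 / (len(A) * len(B))
```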
Now, we are able to introduce the problem we are interested into.
Weighted-Top-k-Overlapping DCS
Input: A dual network \(G(V,E_c,E_p)\), a parameter \(\lambda > 0\).
Output: a set \({\mathcal {X}} = \{ G[X_1], \ldots , G[X_k] \}\) of k connected subgraphs of G, with \(k \ge 1\), such that the following objective function is maximised:
$$\begin{aligned} \sum _{i=1}^{k} \rho (G_c[X_i])+ \lambda \sum _{i=1}^{k-1} \sum _{j=i+1}^k d(G[X_i],G[X_j]) \end{aligned}$$
Weighted-Top-k-Overlapping DCS, for \(k \ge 3\), is NP-hard, as it is NP-hard already on unweighted graphs (Dondi et al. 2019). Notice that for \(k=1\), Weighted-Top-k-Overlapping DCS is exactly the problem of finding a single weighted densest connected subgraph, hence it can be solved in polynomial time (Goldberg 1984).
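For illustration, the objective function above can be scored by reusing the helper functions sketched earlier; the following is a minimal sketch, with `subgraphs` a list of node sets and `lam` standing for the parameter \(\lambda\).

```python
def objective(G_c, subgraphs, lam):
    """Objective of Weighted-Top-k-Overlapping DCS: the total weighted
    density of the subgraphs in the conceptual graph G_c plus lambda
    times the sum of pairwise distances between the subgraphs."""
    k = len(subgraphs)
    density_term = sum(induced_density(G_c, X) for X in subgraphs)
    distance_term = sum(subgraph_distance(subgraphs[i], subgraphs[j])
                        for i in range(k) for j in range(i + 1, k))
    return density_term + lam * distance_term
```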
Greedy algorithms for DCS
One of the ingredients of our method is a variant of a greedy algorithm for DCS, denoted by Greedy, which is an approximation algorithm for the problem of computing a connected densest subgraph of a given graph. Given a weighted graph G, Greedy (Asahiro et al. 2000; Charikar 2000) iteratively removes from G a vertex v having lowest vol(v) and stops when all the vertices of the graph have been removed. It follows that at each iteration i, with \(1 \le i \le |V|\), Greedy computes a subgraph \(G_i\) of G. The output of the algorithm is a densest subgraph among \(G_1, \ldots , G_{|V|}\). The algorithm has a time complexity of \(O(|E| + |V| \log |V|)\) on weighted graphs and achieves an approximation factor of \(\frac{1}{2}\) (Asahiro et al. 2000; Charikar 2000).
We introduce here a variant of the Greedy algorithm, called V-Greedy. Given an input weighted graph G, V-Greedy, similarly to Greedy, at each iteration i, with \(1 \le i \le |V|\), removes a vertex v having lowest vol(v) and computes a subgraph \(G_i\), with \(1 \le i \le |V|\). Then, among subgraphs \(G_1, \ldots , G_{|V|}\), V-Greedy returns a subgraph \(G_i\) that maximises the value:
$$\begin{aligned} \rho (G_i) + 2\left( \frac{\rho (G_i)}{|V_i|}\right) . \end{aligned}$$
Essentially, when selecting the subgraph to return among \(G_1, \ldots , G_{|V|}\), we add to the density the correction factor \(2(\frac{\rho (G_i)}{|V_i|})\). This factor is added to avoid returning a subgraph that is not well connected in terms of edge connectivity, that is, one that contains a small cut. For example, consider a graph with two equal-size cliques \(K_1\) and \(K_2\) having the same (large) weighted density and a single edge of large weight connecting them. Then the union of \(K_1\) and \(K_2\) is denser than both \(K_1\) and \(K_2\), hence Greedy returns the union of \(K_1\) and \(K_2\). This may prevent us from finding \(K_1\), \(K_2\) as a solution of Weighted-Top-k-Overlapping DCS. In this example, when the density of \(K_1\) and \(K_2\) is close enough to the density of their union, V-Greedy will return one of \(K_1\), \(K_2\).
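A straightforward, quadratic-time sketch of V-Greedy follows; the heap-based bookkeeping needed to match the \(O(|E| + |V| \log |V|)\) bound stated for Greedy is omitted for brevity.

```python
def v_greedy(G):
    """Iteratively peel a node of minimum weighted volume vol(v) and
    return the node set of the intermediate subgraph G_i maximising
    rho(G_i) + 2 * rho(G_i) / |V_i| (density plus correction factor)."""
    H = G.copy()
    # total volume of H: sum of vol(v), i.e. twice the total edge weight
    total_vol = sum(deg for _, deg in H.degree(weight="weight"))
    best_score, best_nodes = float("-inf"), set(H.nodes())
    while H.number_of_nodes() > 0:
        n = H.number_of_nodes()
        rho = total_vol / n
        score = rho + 2.0 * rho / n
        if score > best_score:
            best_score, best_nodes = score, set(H.nodes())
        # remove a node of minimum weighted volume
        v = min(H.nodes(), key=lambda u: H.degree(u, weight="weight"))
        total_vol -= 2.0 * H.degree(v, weight="weight")
        H.remove_node(v)
    return best_nodes
```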
The proposed algorithm
In this section we present our heuristic for Weighted-Top-k-Overlapping DCS in dual networks. The approach is based on two main steps:
First, the input networks are integrated into a single weighted alignment graph preserving the connectivity properties of the physical network
Second, the obtained alignment graph is mined by using an ad-hoc heuristic for Weighted-Top-k-Overlapping DCS based on the V-Greedy algorithm
Building of the alignment graph
In the first step, the algorithm receives as input: a weighted graph \(G_c(V,E_c)\) (the conceptual graph); an unweighted graph \(G_p(V,E_p)\) (the physical graph); an initial set (seed nodes) of node pairs P, where each pair defines a correspondence between a node of \(G_c\) and a node of \(G_p\); and a distance threshold \(\delta\), the maximum distance that two nodes may have in the physical network. For example, when \(\delta\) is set to one, only adjacent nodes in both networks are considered.
Given the input data, the algorithm starts by building the nodes of the alignment graph. The alignment graph contains a node for each pair in P. The edges and weights of the alignment graph are defined by the two rules below (a code sketch follows the list):
An edge \(\{u,v\}\) is defined in the alignment graph when the nodes corresponding to u and v are adjacent in \(G_p\) and in \(G_c\); the weight of \(\{u,v\}\) is equal to the weight of the edge connecting the nodes corresponding to u and v in \(G_c\)
An edge \(\{u,v\}\) is defined in the alignment graph when u and v are adjacent in \(G_p\) and have distance lower than \(\delta\) in \(G_c\); the weight of \(\{u,v\}\) is equal to the average of the weights on a shortest path connecting the nodes corresponding to u and v in \(G_c\).
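A simplified sketch of this construction is given below. It assumes the identity correspondence between the two node sets (so a seed pair is represented by a single shared node label), hop distance in \(G_c\), and sortable node labels; these simplifications are ours, not part of the original method.

```python
import networkx as nx

def build_alignment_graph(G_c, G_p, seeds, delta):
    """Build the alignment graph from the conceptual graph G_c (weighted),
    the physical graph G_p (unweighted), the seed nodes, and the distance
    threshold delta, following the two edge rules described above."""
    A = nx.Graph()
    A.add_nodes_from(seeds)
    for u in seeds:
        for v in seeds:
            if u >= v or not G_p.has_edge(u, v):
                continue  # both rules require adjacency in G_p
            if G_c.has_edge(u, v):
                # rule 1: adjacent in both networks; copy the G_c weight
                A.add_edge(u, v, weight=G_c[u][v]["weight"])
            else:
                try:
                    path = nx.shortest_path(G_c, u, v)  # hop distance
                except nx.NetworkXNoPath:
                    continue
                if len(path) - 1 < delta:
                    # rule 2: average weight along a shortest path in G_c
                    w = [G_c[a][b]["weight"] for a, b in zip(path, path[1:])]
                    A.add_edge(u, v, weight=sum(w) / len(w))
    return A
```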
A heuristic for Weighted-top-k-overlapping DCS
In the second phase of our algorithm, we solve Weighted-Top-k-Overlapping DCS on the alignment graph G computed in phase 1 via a heuristic. We present here our heuristic for Weighted-Top-k-Overlapping DCS, called Iterative Weighted Dense Subgraphs (IWDS).
The heuristic starts with a set \({\mathcal {X}}= \emptyset\) and consists of k iterations. At each iteration i, with \(1 \le i \le k\), given a set \({\mathcal {X}}= \{G[X_1],\ldots ,G[X_{i-1}]\}\) of subgraphs of G, IWDS computes a subgraph \(G[X_i]\) and adds it to \({\mathcal {X}}\).
The first iteration of IWDS applies the V-Greedy algorithm (see "Greedy algorithms for DCS" section) on G and computes \(G[X_1]\). In iteration i, with \(2 \le i \le k\), IWDS applies one of the two following cases, depending on a parameter f, \(0 < f \le 1\), and on the size of the set \(C_{i-1} = \bigcup _{j=1}^{i-1} X_j\) (the set of nodes already covered by the subgraphs in \({\mathcal {X}}\)).
Case 1. If \(|C_{i-1}| \le f |V|\) (that is, at most f|V| nodes of G are covered by the subgraphs in \({\mathcal {X}}\)), IWDS applies the V-Greedy algorithm on a subgraph \(G'\) of G obtained by retaining the fraction \(\alpha\) (\(\alpha\) is a parameter) of the nodes of \(C_{i-1}\) having highest weighted degree in G and removing the other nodes of \(C_{i-1}\). \(G'[X_i]\) is a weighted connected dense subgraph in \(G'\), distinct from those in \({\mathcal {X}}\).
Case 2. If \(|C_{i-1}| > f |V|\) (more than f|V| nodes of G are covered by the subgraphs in \({\mathcal {X}}\)), IWDS applies the V-Greedy algorithm on a subgraph \(G''\) of G obtained by removing the fraction \(1-\alpha\) (recall that \(\alpha\) is a parameter of IWDS) of the nodes of \(C_{i-1}\) having lowest weighted degree in G. IWDS computes \(G''[X_i]\) as a weighted connected dense subgraph in \(G''\), distinct from those in \({\mathcal {X}}\).
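A condensed sketch of the IWDS loop is given below. As described above, both cases prune already-covered nodes by weighted degree (Case 1 retains the fraction \(\alpha\) with highest degree, Case 2 removes the fraction \(1-\alpha\) with lowest degree), so the sketch uses a single pruning step; the distinctness check is deliberately simplified.

```python
def iwds(G, k, alpha):
    """Compute k dense connected subgraphs of the alignment graph G by
    repeatedly pruning low-degree covered nodes and running V-Greedy."""
    solution, covered = [], set()
    for _ in range(k):
        # keep the alpha fraction of covered nodes with highest weighted
        # degree in G; drop the remaining covered nodes before V-Greedy
        ranked = sorted(covered,
                        key=lambda u: G.degree(u, weight="weight"),
                        reverse=True)
        keep = set(ranked[:int(alpha * len(ranked))])
        H = G.subgraph((set(G.nodes()) - covered) | keep)
        X = v_greedy(H)  # defined in the sketch above
        if X and X not in solution:
            solution.append(X)
            covered |= X
    return solution
```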
Complexity evaluation.
We denote by n (by m, respectively) the number of nodes (of edges, respectively) of the dual network. The first step requires the analysis of both the physical and the conceptual graph and the construction of the alignment graph, which takes \({\mathcal {O}}(n^2)\) time plus the time needed to compute the edge weights. The calculation of the edge weights requires computing shortest paths among all node pairs in the physical graph using Chan's implementation (Chan 2012); therefore, it requires \({\mathcal {O}}(n m_p)\) time, where \(m_p\) is the number of edges of the physical graph.
As for Step 2, IWDS makes k iterations. Each iteration applies V-Greedy on G and requires \(O(m n \log n)\) time, as the Greedy algorithm (Charikar 2000). Iteration i, with \(2 \le i \le k\), first computes the set of covered nodes in order to find those nodes that have to be removed (or retained). For this purpose, we sort the nodes in \(C_{i-1}\) based on their weighted degree in \(O(n \log n)\) time. Thus, the overall time complexity of IWDS is \(O(k m n \log n)\).
In this section, we provide an experimental evaluation of IWDS on synthetic and real networks. The design of a strong evaluation scheme for our algorithm is not simple, since we have to face two main issues:
Existing methods for computing the top k overlapping subgraphs (Galbrun et al. 2016) are defined for unweighted graphs and cannot be used on dual networks.
Existing network alignment algorithms do not aim to extract top k densest subgraphs.
Consequently, we cannot easily compare our approach with existing state-of-the-art methods, and we design an ad-hoc procedure for the evaluation of our method based on the following steps. First, we consider the performance of our approach on synthetic networks. In this way, we show that, in many of the considered cases, IWDS can correctly recover the top-k weighted densest subgraphs. Then we apply our method to four real-world dual networks.
The alignment algorithm described in the "Building of the alignment graph" section is implemented in Python 3.7 using the NetworkX package for managing networks (Hagberg et al. 2008). IWDS is implemented in MATLAB R2020a. We performed the experiments on a MacBook Pro (OS version 10.15.3) with a 2.9 GHz Intel Core i5 processor, 8 GB 2133 MHz LPDDR3 of RAM, and Intel Iris Graphics 550 with 1536 MB.
Synthetic networks
In the first part of our experimental evaluation, we analyse the performance of IWDS to find planted ground-truth subgraphs on synthetic datasets.
Datasets. We generate two noiseless synthetic datasets, consisting of \(k=5\) planted dense subgraphs (cliques). Synthetic1 contains five non-overlapping ground-truth subgraphs, while Synthetic3 contains five overlapping ground-truth subgraphs.
In Synthetic1, each planted dense subgraph contains 30 nodes and has edge weights randomly generated in the interval [0.8, 1]. In Synthetic3, each planted dense subgraph contains 20 nodes not shared with other planted subgraphs. The subgraphs are arranged in a cycle: 5 nodes of each subgraph are shared with the subgraph on one side, and 5 nodes are shared with the subgraph on the other side. Edge weights are randomly generated in the interval [0.8, 1].
These cliques are then connected to a background graph of 100 nodes. We consider three different ways to generate the background graph: Erdős–Rényi with parameter \(p=0.1\), Erdős–Rényi with parameter \(p=0.2\), and Barabási–Albert with parameter equal to 10. Weights of the background graph are randomly generated in the interval [0, 0.5]. Then 50 edges connecting the cliques and the background graph are randomly added (with weights randomly generated in the interval [0, 0.5]).
Based on this approach, we generate four different sets of synthetic networks, called Synthetic1, Synthetic2, Synthetic3, and Synthetic4. Synthetic1 (for the non-overlapping case) and Synthetic3 (for the overlapping case) are generated as described above. Synthetic2 and Synthetic4 are obtained by applying noise to the synthetic networks in Synthetic1 and Synthetic3, respectively. The noise is added by varying 5%, 10%, and 15% of the node relations of each network. A set of node pairs is chosen randomly: if the two nodes belong to the same clique, the weight of the edge connecting them is changed to a random value in the interval [0, 0.5]; otherwise, an edge connecting the two nodes is added (if not already in the network) and its weight is randomly assigned a value in the interval [0.8, 1].
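As an illustration, the following sketch reproduces the Synthetic1 recipe with an Erdős–Rényi background; the function and parameter names are ours, not from the paper's code.

```python
import random
import networkx as nx

def make_synthetic1(n_bg=100, clique_size=30, k=5, p=0.1, seed=0):
    """Five planted cliques (weights in [0.8, 1]) attached to an
    Erdos-Renyi background (weights in [0, 0.5]) by 50 random edges."""
    rng = random.Random(seed)
    G = nx.erdos_renyi_graph(n_bg, p, seed=seed)
    for u, v in G.edges():
        G[u][v]["weight"] = rng.uniform(0.0, 0.5)
    planted, nxt = [], n_bg
    for _ in range(k):
        clique = list(range(nxt, nxt + clique_size))
        nxt += clique_size
        for i, u in enumerate(clique):
            for v in clique[i + 1:]:
                G.add_edge(u, v, weight=rng.uniform(0.8, 1.0))
        planted.append(set(clique))
    for _ in range(50):  # random bridges between cliques and background
        G.add_edge(rng.randrange(n_bg, nxt), rng.randrange(n_bg),
                   weight=rng.uniform(0.0, 0.5))
    return G, planted
```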
Outcome. We present the results of our experimental evaluation, in particular the average running time, density, distance, and F1-score, varying the parameter \(\alpha\). We recall that the F1-score is the harmonic mean of precision and recall, and, as in Galbrun et al. (2016), we use this measure to evaluate the accuracy of our method in detecting the ground-truth subgraphs. Following Yang and Leskovec (2012), we consider the number of shared nodes between each ground-truth subgraph and each detected subgraph, so that we are able to define the best matching between ground-truth and detected subgraphs. Then, we compute the F1[t/d] measure as the average F1-score of the best-matching ground-truth subgraph for each detected subgraph (truth to detected) and the F1[d/t] measure as the average F1-score of the best-matching detected subgraph for each ground-truth subgraph (detected to truth). Notice that in most of the considered cases, the running time of IWDS increases as \(\alpha\) increases. Also, the solutions returned by IWDS for larger values of \(\alpha\) are generally denser than for small values, while the solutions for small values of \(\alpha\) have a higher value of distance (hence the returned subgraphs have a smaller overlap).
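The two matching measures can be computed as follows; this is a minimal sketch, where `truth` and `detected` are lists of node sets.

```python
def f1(A, B):
    """F1-score (harmonic mean of precision and recall) of a detected
    node set A against a ground-truth node set B."""
    inter = len(set(A) & set(B))
    if inter == 0:
        return 0.0
    precision, recall = inter / len(A), inter / len(B)
    return 2 * precision * recall / (precision + recall)

def f1_truth_to_detected(truth, detected):
    """F1[t/d]: for each detected subgraph, take the F1 of its
    best-matching ground-truth subgraph, then average over the detected
    subgraphs; swap the arguments to obtain F1[d/t]."""
    return sum(max(f1(D, T) for T in truth) for D in detected) / len(detected)
```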
Tables 1 and 2 report the average running time (in minutes), density, distance, and F1 scores for the two noiseless datasets. Table 1 shows the experimental results for the noiseless Synthetic1 dataset, where the ground-truth subgraphs are disjoint. In this case, IWDS is able to detect the ground-truth subgraphs for all values of \(\alpha\), averaged over 300 examples. Table 2 shows the experimental results for the noiseless Synthetic3 dataset, where the ground-truth subgraphs are overlapping. In this case, the best performance is achieved for \(\alpha =0.75\), where F1[t/d] = 0.745 and F1[d/t] = 0.804. The experimental results show that F1[d/t] increases with \(\alpha\); in particular, for lower values of \(\alpha\) (\(\alpha \le 0.25\)) the performance of IWDS on this measure is poor. We observe that for values of \(\alpha \ge 0.5\), the F1[t/d] measure decreases as \(\alpha\) increases.
Table 1 Performance of IWDS on non overlapping generated networks (called synthetic1) for \(k=5\), varying \(\alpha\) from 0.05 to 0.9, the running time (in minutes), the density and the distance are averaged over 300 examples
Table 2 Performance of IWDS on overlapping generated networks (called synthetic3) for \(k=5\), varying \(\alpha\) from 0.05 to 0.9, the running time (in minutes), the density and the distance are averaged over 300 examples
Tables 3 and 4 show the performances of IWDS on the noisy datasets Synthetic2 and Synthetic4. Recall that for these datasets, we consider noise values of 0.05, 0.10 and 0.15. The results we present are averaged over 90 examples. As for the noiseless datasets, we vary the value of parameter \(\alpha\).
Table 3 Performance of IWDS on non overlapping generated networks with added noise varying from 0.05 to 0.15 (called synthetic2) for \(k=5\), varying \(\alpha\) from 0.05 to 0.9, the running time (in minutes), the density and the distance are averaged over 90 examples
Table 4 Performance of IWDS on overlapping generated networks with added noise varying from 0.05 to 0.15 (called synthetic4) for \(k=5\), varying \(\alpha\) from 0.05 to 0.9, the running time (in minutes), the density and the distance are averaged over 90 examples
For Synthetic2, for noise values 0.05 and 0.10, we obtain near-optimal solutions in all the considered cases. The performance of IWDS starts to degrade with noise equal to 0.15, in particular the values of F1[d/t] for \(\alpha \le 0.25\). F1[t/d] is instead close to 1 (at least 0.93) for all the considered values of \(\alpha\).
For Synthetic4, the added noise has a significant impact on the quality of the computed solutions, even for a noise value equal to 0.05. While increasing noise has a limited effect on IWDS for small values of \(\alpha\) (\(\alpha \le 0.25\)), for higher values of \(\alpha\) it leads to degraded performance, in particular for F1[t/d].
Dual networks
We evaluate IWDS on four real-world dual network datasets:
Datasets. G-graphA. The G-graphA dataset is derived from the GoWalla social network, where users share their locations (expressed as GPS coordinates) by checking in on the web site (Cho et al. 2011). Each node represents a user, and each edge links two friends in the network. We obtained the physical network by considering the friendship relation in the social network. We calculated the conceptual network by considering the distance among users. Then we ran the first step of our algorithm and obtained the alignment graph G-graphA, containing 2,241,339 interactions and 9878 nodes (we set \(\delta =4\)). In this case, a DCS represents a set of friends that share check-ins in nearby locations.
DBLP-graphA. The DBLP-graphA dataset is extracted from a computer science bibliography and represents interactions between authors. Nodes represent authors, and edges connect two authors if they have published at least one paper together. Each edge in the physical network connects two authors that co-authored at least one paper. Edges in the conceptual network represent the similarity of the research interests of the authors, calculated on the basis of all their publications. After running the first step of the algorithm (using \(\delta =4\)), we obtained the alignment graph DBLP-graphA, containing 553,699 interactions and 18,954 nodes. In this case, a DCS represents a set of co-authors that share strong common research interests; here the use of DNs is essential, since the physical network only shows co-authors, who may not share many interests, while the conceptual network links authors with common interests who may not be co-authors.
HS-graphA. HS-graphA is a biological dataset taken from the STRING database (Szklarczyk et al. 2016). Each node represents a protein, and each edge takes into account the reliability of the interaction. We use two networks to model the database: the conceptual network represents the reliability values, and the physical network stores the binary interactions. The HS-graphA dataset contains 5,879,727 interactions and 19,354 nodes (we set \(\delta =4\)).
Protein-Interaction. We extracted from the STRING database a subnetwork of proteins involved in the SARS-CoV-2 infection (Szklarczyk et al. 2016). The physical network contains the interacting proteins, while the conceptual network contains the strength of the association among them. Protein-Interaction contains 192 nodes and 418 edges (Table 5).
Table 5 Properties of the alignment graphs obtained for each dataset
For these large-size datasets, we set the value of k to 20, following the approach in Galbrun et al. (2016). Table 6 reports the running time of IWDS and the density and distance of the solutions returned by IWDS. As for the synthetic datasets, we consider six different values of \(\alpha\). As shown in Table 6, by increasing the value of \(\alpha\) from 0.05 to 0.5, IWDS (except in one case, HS-graphA with \(\alpha =0.1\)) returns solutions that are denser but with lower distance.
Table 6 also shows how the running time of IWDS is influenced by the size of the network and by the value of \(\alpha\). We put a bound of 20 h on the running time of IWDS, and the method was not able to return a solution for HS-graphA for \(\alpha \ge 0.5\) within this time. The running time is influenced in particular by the number of edges of the input network. DBLP-graphA and HS-graphA have almost the same number of nodes, but HS-graphA is much denser than DBLP-graphA; IWDS is remarkably slower for HS-graphA than for DBLP-graphA (1.986 times slower for \(\alpha =0.05\), 6.218 times slower for \(\alpha =0.25\)). The running time of IWDS is also considerably influenced by the value of the parameter \(\alpha\), since it increases as \(\alpha\) increases. Indeed, by increasing the value of \(\alpha\), fewer nodes are removed by Case 1 and Case 2 of IWDS; hence, in the iterations of IWDS, V-Greedy is applied to larger subgraphs. This can be seen in particular for HS-graphA, for which IWDS failed to terminate within 20 h when \(\alpha \ge 0.5\).
Table 6 Performance of IWDS on real-world network for \(k=20\), varying \(\alpha\) from 0.05 to 0.9. For each network, we report the running time in minutes, the density and the distance
Biological evaluation of results
For biological data, there is the possibility to evaluate the relevance of the results by considering the biological knowledge that the results may convey.
Biological data are usually annotated with terms extracted from ontologies, e.g. the Gene Ontology (Guzzi et al. 2012). Consequently, analyses of biological data may be evaluated in terms of the biological knowledge inferred from them and in terms of the statistical relevance of the results themselves. For instance, given a DCS extracted from two biological networks, it is interesting to determine the biological meaning of the DCS and its relevance, i.e. how much more biological relevance this DCS conveys with respect to a random one. Usually, subgraphs of biological networks represent groups of interacting proteins sharing some common functions or playing similar biological roles. Consequently, it is possible to evaluate the biological relevance of the obtained results by considering the role of the proteins. Such information is stored and organised in biological ontologies such as the Gene Ontology (GO) (Harris et al. 2004). GO functional enrichment has been proposed to evaluate the significant presence of common roles or functions in a solution represented as a list of genes/proteins. It has been shown that the use of semantic similarity (SS) (Guzzi et al. 2012) is a feasible and efficient way to quantify biological similarity among proteins. SS measures quantify the functional similarity of pairs of proteins/genes by comparing the GO terms that annotate them; therefore, proteins that share a biological role have high values of semantic similarity. As a consequence, genes/proteins that are found in the same solution should have a semantic similarity significantly higher than random expectation. These considerations guided the design of the evaluation of our results, which we adapted from the evaluation scheme proposed in Mina and Guzzi (2014).
Given a DCS \(DCS_k\), we calculate its internal semantic similarity \(SS_{DCS_k}\) as the average semantic similarity over all node pairs of the DCS, as follows:
$$\begin{aligned} SS_{DCS_k} =\frac{\sum _{n_i \in DCS_k}\sum _{n_j \in DCS_k, j \ne i} SS(n_i,n_j)}{|DCS_k| \, (|DCS_k|-1)} \end{aligned}$$
To prove their statistical significance, we compare the DCSs extracted from the biological network against random ones obtained by randomly sampling the input networks. Given a DCS \(DCS_i\), we test the null hypothesis \(H_1^0\): the average semantic similarity \(SS(DCS_i)\) of the proteins internal to the DCS is higher than expected by chance, where the background distribution is estimated from the semantic similarity \(SS(RS_i)\) of random subgraphs \(RS_i\) taken from the alignment graph, using for instance 0.05 as the significance level.
Consequently, we design this test as described in the following algorithm (a minimal code sketch is given after the steps):
Let \(DCS_{i}\) be a given DCS;
Let \(SS(DCS_{i})\) be its internal semantic similarity
Let \(V_s = \{RS_{j}\}\), \(j=0,\ldots,99\), be a set of 100 random subgraphs of the same size as \(DCS_i\);
For each \(RS_{j} \in V_s\), calculate the internal semantic similarity \(SS_j(RS_{j})\) of the random subgraph;
Compare \(SS(DCS_{i})\) with all the \(SS_j(RS_{j})\) using a non-parametric test;
Accept or reject the hypothesis that \(SS(DCS_{i})\) is significantly higher than the \(SS_j(RS_{j})\).
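The text leaves the choice of the non-parametric test open; one concrete, minimal option is an empirical (permutation-style) p-value, sketched below under that assumption.

```python
def empirical_p_value(ss_dcs, ss_random):
    """One-sided empirical p-value for the observed internal semantic
    similarity ss_dcs against the similarities ss_random of the 100
    random subgraphs; the +1 terms give the usual conservative estimate."""
    greater_eq = sum(1 for s in ss_random if s >= ss_dcs)
    return (greater_eq + 1) / (len(ss_random) + 1)

# The DCS is declared significantly more coherent than chance when the
# p-value falls below the chosen significance level (0.05 above).
```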
Consequently, for each graph in the solution, we generated 100 random graphs of the same size by sampling the obtained alignment graph. For each graph, we calculated its internal semantic similarity using the Resnik measure (Resnik 1999). The results demonstrate that our solution is biologically relevant and that its relevance is higher than expected by chance, as summarised in Table 7.
Table 7 Comparison of the average semantic similarity for the two biological networks considered
DNs are used to model two kinds of relationships among the elements of the same scenario. A DN is a pair of networks that have the same set of nodes: one network has unweighted edges (the physical network), while the second one has weighted edges (the conceptual network). In this contribution, we introduced an approach that first integrates a physical and a conceptual network into an alignment graph. Then, we applied the Weighted-Top-k-Overlapping DCS problem to the alignment graph to find k dense connected subgraphs. These subgraphs represent subsets of nodes that are strongly related in the conceptual network and connected in the physical one. We presented a heuristic, called IWDS, for Weighted-Top-k-Overlapping DCS and an experimental evaluation of IWDS. As a proof of concept, we first considered the ability of our algorithm to retrieve known densest subgraphs in synthetic networks. Then we tested the approach on four real networks to demonstrate its effectiveness. Future work will consider a possible high-performance implementation of our approach and the application of the IWDS algorithm to other scenarios (e.g. financial or marketing datasets).
The source code and data used in our experiments are available at https://github.com/mehdihosseinzadeh/-k-overlapping-densest-connected-subgraphs.
Given the ground-truth and detected subgraphs, the F1-score is calculated from precision and recall, where precision is the number of nodes in the ground-truth subgraph correctly identified by the detected subgraph divided by the number of nodes in the detected subgraph, and recall is the number of nodes in the ground-truth subgraph correctly identified by the detected subgraph divided by the number of nodes in the ground-truth subgraph.
DN: Dual networks
DCS: Densest connected subgraph
LNA: Local network alignment
IWDS: Iterative weighted dense subgraphs
Abatangelo L, Maglietta R, Distaso A, D'Addabbo A, Creanza TM, Mukherjee S, Ancona N (2009) Comparative study of gene set enrichment methods. BMC Bioinform 10:275. https://doi.org/10.1186/1471-2105-10-275
Asahiro Y, Iwama K, Tamaki H, Tokuyama T (2000) Greedily finding a dense subgraph. J Algorithms 34(2):203–221
Balalau OD, Bonchi F, Chan T-H, Gullo F, Sozio M (2015) Finding subgraphs with maximum total density and limited overlap. In: Cheng, X., Li, H., Gabrilovich, E., Tang, J. (eds.) Proceedings of the eighth ACM international conference on web search and data mining, WSDM 2015, Shanghai, China, February 2–6, 2015. ACM, pp 379–388. https://doi.org/10.1145/2684822.2685298
Barabási A-L (2011) The network takeover. Nat Phys 8(1):14–16. https://doi.org/10.1038/nphys2188
Cannataro M, Guzzi PH, Veltri P (2010) Protein-to-protein interactions. ACM Comput Surv 43(1):1–36. https://doi.org/10.1145/1824795.1824796
Cannataro M, Guzzi PH, Veltri P (2010) Impreco: distributed prediction of protein complexes. Future Gener Comput Syst 26(3):434–440
Chan TM (2012) All-pairs shortest paths for unweighted undirected graphs in o(mn) time. ACM Trans Algorithms 8(4)
Charikar M (2000) Greedy approximation algorithms for finding dense components in a graph. In: Jansen K, Khuller S (eds) Approximation algorithms for combinatorial optimization, third international workshop, APPROX 2000, Proceedings. Lecture notes in computer science, vol 1913. Springer, pp 84–95. https://doi.org/10.1007/3-540-44436-X
Cho Y-R, Mina M, Lu Y, Kwon N, Guzzi PH (2013) M-finder: uncovering functionally associated proteins from interactome data integrated with go annotations. Proteome Sci 11(1):1–12
Cho E, Myers SA, Leskovec J (2011) Friendship and mobility: user movement in location-based social networks. In: Proceedings of the 17th ACM SIGKDD international conference on knowledge discovery and data mining. ACM, pp 1082–1090
Ciriello G, Mina M, Guzzi PH, Cannataro M, Guerra C (2012) AlignNemo: a local network alignment method to integrate homology and topology. PLOS ONE 7(6):38107. https://doi.org/10.1371/journal.pone.0038107
Clark C, Kalita J (2014) A comparison of algorithms for the pairwise alignment of biological networks. Bioinformatics (Oxford, England) 30(16):2351–2359
Dondi R, Mauri G, Sikora F, Zoppis I (2019) Covering a graph with clubs. J Graph Algorithms Appl 23(2):271–292. https://doi.org/10.7155/jgaa.00491
Dondi R, Guzzi PH, Hosseinzadeh MM (2020) Top-k connected overlapping densest subgraphs in dual networks. In: International conference on complex networks and their applications. Springer, pp 585–596
Dondi R, Hosseinzadeh MM, Mauri G, Zoppis I (2019) Top-k overlapping densest subgraphs: approximation and complexity. In: Proceedings of the 20th Italian conference on theoretical computer science, ICTCS 2019, Como, Italy, September 9–11, 2019, pp 110–121
Faisal F, Meng L, Crawford J, Milenkovic T (2015) The post-genomic era of biological network alignment. EURASIP J Bioinform Syst Biol 2015(1):1–19
Galbrun E, Gionis A, Tatti N (2016) Top-k overlapping densest subgraphs. Data Min Knowl Discov 30(5):1134–1165. https://doi.org/10.1007/s10618-016-0464-z
Goldberg A (1984) Finding a maximum density subgraph. Technical report. University of California, Berkeley
Guzzi PH, Milenković T (2017) Survey of local and global biological network alignment: the need to reconcile the two sides of the same coin. Brief Bioinform 132
Guzzi PH, Cannataro M (2010) μ-cs: an extension of the tm4 platform to manage affymetrix binary data. BMC Bioinform 11(1):315
Guzzi P, Mina M, Guerra C, Cannataro M (2012) Semantic similarity analysis of protein data: assessment with biological features and issues. Brief Bioinform 13(5):569–585. https://doi.org/10.1093/bib/bbr066
Guzzi PH, Salerno E, Tradigo G, Veltri P (2020) Extracting dense and connected communities in dual networks: an alignment based algorithm. IEEE Access 8:162279–162289
Hagberg A, Swart P, S Chult D (2008) Exploring network structure, dynamics, and function using network. In: Technical report, Los Alamos National Lab. (LANL), Los Alamos
Harris MA, Clark J, Ireland A, Lomax J, Ashburner M et al (2004) The gene ontology (go) database and informatics resource. Nucl Acids Res 32(Database issue):258–261
Hastad J (1996) Clique is hard to approximate within \(n^{1-\varepsilon}\). In: Proceedings of the 37th conference on foundations of computer science. IEEE, pp 627–636
Hosseinzadeh MM (2020) Dense subgraphs in biological networks. In: International conference on current trends in theory and practice of informatics. Springer, pp 711–719
Karp RM (2009) Reducibility among combinatorial problems. In: 50 years of integer programming 1958–2008. Springer, Berlin, pp 219–241
Kawase Y, Miyauchi A (2018) The densest subgraph problem with a convex/concave size function. Algorithmica 80(12):3461–3480. https://doi.org/10.1007/s00453-017-0400-7
Komusiewicz C (2016) Multivariate algorithmics for finding cohesive subnetworks. Algorithms 9(1):21
Liu X, Shen C, Guan X, Zhou Y (2018) Digger: detect similar groups in heterogeneous social networks. ACM Trans Knowl Discov from Data (TKDD) 13(1):2
Milano M, Guzzi PH, Cannataro M (2018) Glalign: a novel algorithm for local network alignment. IEEE/ACM Trans Comput Biol Bioinform 16(6):1958–1969
Milano M, Milenković T, Cannataro M, Guzzi PH (2020) L-HetNetAligner: a novel algorithm for local alignment of heterogeneous biological networks. Sci Rep 10(1):3901. https://doi.org/10.1038/s41598-020-60737-5
Mina M, Guzzi PH (2014) Improving the robustness of local network alignment: design and extensive assessment of a Markov clustering-based approach. IEEE/ACM Trans Comput Biol Bioinform (TCBB) 11(3):561–572
Phillips PC (2008) Epistasis—the essential role of gene interactions in the structure and evolution of genetic systems. Nat Rev Genet 9(11):855–867
Resnik P (1999) Semantic similarity in a taxonomy: an information-based measure and its application to problems of ambiguity in natural language. J Artif Intell Res 11:95–130
Sapountzi A, Psannis KE (2018) Social networking data analysis tools and challenges. Future Gener Comput Syst 86:893–913
Sun YV, Kardia SL (2010) Identification of epistatic effects using a protein-protein interaction database. Human Mol Genet 19(22):4345–4352
Szklarczyk D, Morris JH, Cook H, Kuhn M, Wyder S, Simonovic M, Santos A, Doncheva NT, Roth A, Bork P et al (2016) The string database in 2017: quality-controlled protein-protein association networks, made broadly accessible. Nucl Acids Res 937
Wu Y, Zhu X, Li L, Fan W, Jin R, Zhang X (2016) Mining dual networks: models, algorithms, and applications. TKDD
Yang J, Leskovec J (2012) Community-affiliation graph model for overlapping network community detection. In: 2012 IEEE 12th international conference on data mining. IEEE, pp 1170–1175
Zuckerman D (2006) Linear degree extractors and the inapproximability of max clique and chromatic number. In: Kleinberg JM (ed) Proceedings of the 38th annual ACM symposium on theory of computing, Seattle, WA, USA, May 21–23, 2006. ACM, pp 681–690 (2006). https://doi.org/10.1145/1132516.1132612
A preliminary version of the paper has been published in Dondi et al. (2020)
Department of Science, University of Bergamo, Bergamo, Italy
Riccardo Dondi & Mohammad Mehdi Hosseinzadeh
Department of Surgical and Medical Sciences, Magna Graecia University, Catanzaro, Italy
Pietro H. Guzzi
Riccardo Dondi
Mohammad Mehdi Hosseinzadeh
All the authors contributed to the framework definition. PHG designed and implemented the graph alignment part. RD and MMH designed and implemented the heuristic IWDS. All the authors performed the experimental analysis. All the authors contributed to the manuscript writing. All the authors read and approved the manuscript.
Correspondence to Pietro H. Guzzi.
We give our consent for the publication.
Dondi, R., Hosseinzadeh, M.M. & Guzzi, P.H. A novel algorithm for finding top-k weighted overlapping densest connected subgraphs in dual networks. Appl Netw Sci 6, 40 (2021). https://doi.org/10.1007/s41109-021-00381-8
Network mining
Dense subgraphs
Special Issue of the 9th International Conference on Complex Networks and Their Applications
ETRI Journal
ETRI Journal is an international, peer-reviewed multidisciplinary journal published bimonthly in English. The main focus of the journal is to provide an open forum to exchange innovative ideas and technology in the fields of information, telecommunications, and electronics. Key topics of interest include high-performance computing, big data analytics, cloud computing, multimedia technology, communication networks and services, wireless communications and mobile computing, material and component technology, as well as security. With an international editorial committee and experts from around the world as reviewers, ETRI Journal publishes high-quality research papers on the latest and best developments from the global community.
http://mc.manuscriptcentral.com/etrij
Volume 3 Issue 1.2
$Pr^{3+}$- and $Pr^{3+}/Er^{3+}$-Doped Selenide Glasses for Potential $1.6\,\mu m$ Optical Amplifier Materials
Choi, Yong-Gyu; Park, Bong-Je; Kim, Kyong-Hon; Heo, Jong 97
$1.6\,\mu m$ emission originating from the $Pr^{3+}: (^3F_3, ^3F_4) \longrightarrow {}^3H_4$ transition in $Pr^{3+}$- and $Pr^{3+}/Er^{3+}$-doped selenide glasses was investigated under optical pumping with a conventional 1480 nm laser diode. The measured peak wavelength and full width at half-maximum of the fluorescent emission are ~1650 nm and 120 nm, respectively. A moderate lifetime of the thermally coupled upper manifolds of $\sim 212 \pm 10\,\mu s$, together with a high stimulated emission cross-section of $\sim (3 \pm 1)\times 10^{-20}\;cm^2$, promises to be useful for $1.6\,\mu m$ band fiber-optic amplifiers that can be pumped with an existing high-power 1480 nm laser diode. Codoping $Er^{3+}$ enhances the emission intensity by way of a nonradiative $Er^{3+}: {}^4I_{13/2} \longrightarrow Pr^{3+}: (^3F_3, ^3F_4)$ energy transfer. The Dexter model, based on the spectral overlap between donor emission and acceptor absorption, describes well the energy transfer from $Er^{3+}$ to $Pr^{3+}$ in these glasses. Major transmission loss mechanisms of a selenide glass optical fiber are also discussed in this paper.
A Novel Method of All-Optical Switching: Quantum Router
Ham, Byoung-Seung
A subpicosecond all-optical switching method based on simultaneous two-photon coherence exchange is proposed and numerically demonstrated. The optical switching mechanism is based on optical-field-induced dark resonance swapping via nondegenerate four-wave mixing processes. For potential applications of ultrafast all-optical switching in fiber-optic communications, a 10-THz, channel-number-independent quantum router is discussed.
Configuration of ACK Trees for Multicast Transport Protocols
Koh, Seok-Joo; Kim, Eun-Sook; Park, Ju-Young; Kang, Shin-Gak; Park, Ki-Shik; Park, Chee-Hang
For scalable multicast transport, one promising approach is to employ a control tree, known as an acknowledgement (ACK) tree, which can be used to convey information on reliability and session status from receivers to a root sender. Existing tree configuration has focused on a 'bottom-up' scheme, in which ACK trees grow from leaf receivers toward a root sender. This paper proposes an alternative 'top-down' configuration, in which an ACK tree begins at the root sender and gradually expands by including non-tree nodes into the tree in a stepwise manner. The proposed scheme is simple and practical to implement along with multicast transport protocols. It has also been adopted as the tree configuration in the Enhanced Communications Transport Protocol, standardized by the ITU-T and ISO/IEC JTC1. Experimental simulations show that the top-down scheme provides advantages over the existing bottom-up one in terms of the number of control messages required for tree configuration and the number of tree levels.
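As a rough illustration of the stepwise, top-down expansion described in the abstract (the breadth-first admission order and the fan-out limit below are our assumptions for the sketch, not details from the paper):

```python
from collections import deque

def top_down_ack_tree(root, receivers, max_children=4):
    """Grow an ACK tree from the root sender: joining receivers are admitted
    stepwise, each tree node accepting up to max_children children."""
    tree = {root: []}
    frontier = deque([root])      # nodes that can still accept children
    pending = deque(receivers)    # receivers not yet attached to the tree
    while pending:
        parent = frontier[0]
        if len(tree[parent]) == max_children:
            frontier.popleft()    # parent is full; move on to the next node
            continue
        child = pending.popleft()
        tree[parent].append(child)
        tree[child] = []
        frontier.append(child)
    return tree

# Example: a sender "S" and eight receivers, fan-out of two per node
print(top_down_ack_tree("S", [f"R{i}" for i in range(8)], max_children=2))
```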
Performance Evaluation of the Physical Layer of the DSRC Operating in 5.8 GHz Frequency Band
Lee, Byung-Seub; Yim, Choon-Sik; Ahn, Dong-Hyun; Oh, Deock-Gil
In this paper, theoretical and experimental results on the BER characteristics of three modulation schemes, ASK, FSK, and BPSK, in a multi-path Rician channel are addressed. These BER characteristics are analyzed as a function of $E_b/N_0$ and of the power ratio of the line-of-sight (LOS) component to the Rayleigh-scattered component. Both the theoretical and the computer simulation results show that ASK is the most suitable modulation scheme for dedicated short range communication (DSRC) in terms of implementation cost and system complexity. The decision feedback equalizer (DFE) proves very effective in canceling multi-path interference in the DSRC channel environment. Simulation results for equalized ASK reveal the performance enhancement achievable with a DFE for the first-generation DSRC system. A multi-ray DSRC channel model is also provided to predict the received carrier power and its fluctuation, which depend strongly on the surroundings of a cell.
A Cost Effective Reference Data Sampling Algorithm Using Fractal Analysis
Lee, Byoung-Kil; Eo, Yang-Dam; Jeong, Jae-Joon; Kim, Yong-Il
A random or systematic sampling method is commonly used to assess the accuracy of classification results. In remote sensing, these sampling methods require much time and tedious work to acquire sufficient ground truth data, so a more effective sampling method that can represent the characteristics of the population is required. In this study, fractal analysis is adopted as an index for reference sampling. The fractal dimensions of the whole study area and of its sub-regions are calculated in order to select the sub-regions whose dimensionality is most similar to that of the whole area. The whole area's classification accuracy is then compared with those of the sub-regions, verifying that the accuracies of the selected sub-regions are similar to that of the whole area. A new reference sampling method using this procedure is proposed. The results show that it is possible to reduce the sampling area and sample size while keeping the same level of accuracy as the existing methods.
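The fractal dimension underlying such a comparison is typically estimated by box counting; a minimal sketch for a binary (classified) raster, which is our illustrative assumption rather than the authors' exact procedure:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a 2-D boolean array:
    count occupied boxes N(s) at several box sizes s, then fit
    log N(s) against log(1/s); the slope approximates the dimension."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s    # crop to a multiple of the box size
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```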
Efficient Path Delay Test Generation for Custom Designs
Kang, Sung-Ho; Underwood, Bill; Law, Wai-On; Konuk, Haluk
Due to the rapidly growing complexity of VLSI circuits, test methodologies based on delay testing have become popular. However, most approaches cannot handle custom logic blocks that are described by logic functions rather than by circuit primitive elements. To overcome this problem, a new path delay test generation algorithm is developed for custom designs. Results on benchmark circuits and real designs demonstrate the efficiency of the new algorithm, which can also be applied to designs employing intellectual property (IP) circuits whose implementation details are unknown or unavailable.
De novo mutation rate variation and its determinants in Chlamydomonas
De novo mutations are central for evolution, since they provide the raw material for natural selection by regenerating genetic variation. However, studying de novo mutations is challenging and is generally restricted to model species, so we have a limited understanding of the evolution of the mutation rate and spectrum between closely related species. Here, we present a mutation accumulation (MA) experiment to study de novo mutation in the unicellular green alga Chlamydomonas incerta and perform comparative analyses with its closest known relative, Chlamydomonas reinhardtii. Using whole-genome sequencing data, we estimate that the median single nucleotide mutation (SNM) rate in C. incerta is $\mu = 7.6 \times 10^{-10}$, and is highly variable between MA lines, ranging from $\mu = 0.35 \times 10^{-10}$ to $\mu = 131.7 \times 10^{-10}$. The SNM rate is strongly positively correlated with the mutation rate for insertions and deletions between lines ($r > 0.97$). We infer that the genomic factors associated with variation in the mutation rate are similar to those in C. reinhardtii, allowing for cross-prediction between species. Among these genomic factors, sequence context and complexity are more important than GC content. With the exception of a remarkably high C→T bias, the SNM spectrum differs markedly between the two Chlamydomonas species. Our results suggest that similar genomic and biological characteristics may result in a similar mutation rate in the two species, whereas the SNM spectrum has more freedom to diverge.
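For context on the rates quoted above: in an MA design, the per-site, per-generation rate is essentially the number of called de novo mutations divided by the product of callable sites and generations. A minimal sketch with hypothetical inputs:

```python
def snm_rate(n_mutations: int, callable_sites: int, generations: int) -> float:
    """Per-site, per-generation single nucleotide mutation rate (MA estimator)."""
    return n_mutations / (callable_sites * generations)

# Hypothetical MA line: 20 SNMs over ~100 Mb of callable genome and 250 generations
mu = snm_rate(20, 100_000_000, 250)   # ~8e-10, the order of magnitude reported above
```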
Journal article
Molecular Biology and Evolution, 38 (9): 3709-3723.
Chlamydomonas reinhardtii, mutation
Inbred lab mice are not isogenic: genetic variation within inbred strains used to infer the mutation rate per nucleotide site
Hodge Theory, Complex Geometry, and Representation Theory
Robert S. Doran, Texas Christian University, Ft. Worth, TX, Greg Friedman, Texas Christian University, Ft. Worth, TX and Scott Nollet, Texas Christian University, Ft. Worth, TX, Editors
Publication: Contemporary Mathematics
Publication Year: 2014; Volume 608
ISBNs: 978-0-8218-9415-6 (print); 978-1-4704-1470-2 (online)
DOI: http://dx.doi.org/10.1090/conm/608
This volume contains the proceedings of an NSF/Conference Board of the Mathematical Sciences (CBMS) regional conference on Hodge theory, complex geometry, and representation theory, held on June 18, 2012, at the Texas Christian University in Fort Worth, TX. Phillip Griffiths, of the Institute for Advanced Study, gave 10 lectures describing now-classical work concerning how the structure of Shimura varieties as quotients of Mumford-Tate domains by arithmetic groups had been used to understand the relationship between Galois representations and automorphic forms. He then discussed recent breakthroughs of Carayol that provide the possibility of extending these results beyond the classical case. His lectures will appear as an independent volume in the CBMS series published by the AMS.
This volume, which is dedicated to Phillip Griffiths, contains carefully written expository and research articles. Expository papers include discussions of Noether-Lefschetz theory, algebraicity of Hodge loci, and the representation theory of $SL_{2}(\mathbb{R})$. Research articles concern the Hodge conjecture, Harish-Chandra modules, mirror symmetry, Hodge representations of $Q$-algebraic groups, and compactifications, distributions, and quotients of period domains. It is expected that the book will be of interest primarily to research mathematicians, physicists, and upper-level graduate students.
Graduate students and research mathematicians interested in Hodge theory, algebraic/complex geometry, representation theory, mirror symmetry and related topics.
Front/Back Matter
Donu Arapura, Xi Chen and Su-Jeong Kang – The smooth center of the cohomology of a singular variety
John Brevik and Scott Nollet – Developments in Noether-Lefschetz theory
James A. Carlson and Domingo Toledo – Compact quotients of non-classical domains are not Kähler
Eduardo Cattani and Aroldo Kaplan – Algebraicity of Hodge loci for variations of Hodge structure
Mark Green and Phillip Griffiths – On the differential equations satisfied by certain Harish-Chandra modules
Tatsuki Hayama – Kato-Usui partial compactifications over the toroidal compactifications of Siegel spaces
Aroldo Kaplan and Mauro Subils – On the equivalence problem for bracket-generating distributions
Matt Kerr – Notes on the representation theory of $SL_{2}(\mathbb {R})$
Matt Kerr – Cup products in automorphic cohomology: The case of $Sp_{4}$
James D. Lewis – Hodge type conjectures and the Bloch-Kato theorem
C. Robles – Principal Hodge representations
Sampei Usui – A study of mirror symmetry through log mixed Hodge theory
Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation
Abdul Lateef Haji Ali, Raul Tempone
Applied Mathematics and Computational Science
We address the approximation of functionals depending on a system of particles, described by stochastic differential equations (SDEs), in the mean-field limit when the number of particles approaches infinity. This problem is equivalent to estimating the weak solution of the limiting McKean–Vlasov SDE. To that end, our approach uses systems with finite numbers of particles and a time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC to estimate the functional in one typical setting with an error tolerance of $\mathrm{TOL}$ is achieved when using the partitioning estimator and the Milstein time-stepping scheme. We also consider a method that uses the recent Multi-index Monte Carlo approach and show an improved work complexity in the same typical setting. Our numerical experiments are carried out on the so-called Kuramoto model, a system of coupled oscillators.
Statistics and Computing
https://repository.kaust.edu.sa/bitstream/10754/625499/1/s11222-017-9771-5.pdf
Haji Ali, A. L., & Tempone, R. (2017). Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation. Statistics and Computing, 28(4), 923-935. https://doi.org/10.1007/s11222-017-9771-5
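As a rough, self-contained illustration of the telescoping idea behind MLMC (a generic scalar SDE with coupled Euler–Maruyama paths, not the interacting-particle partitioning estimator or the Milstein scheme analyzed in the paper; drift, diffusion, functional, and sample counts are placeholder choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_level(level, n_samples, T=1.0, x0=0.5,
                  drift=lambda x: -x, diffusion=lambda x: 0.5):
    """One MLMC level: coupled fine/coarse Euler-Maruyama paths that share
    Brownian increments; returns samples of g(X_fine) - g(X_coarse)."""
    n_fine = 2 ** level
    dt = T / n_fine
    x_fine = np.full(n_samples, x0)
    x_coarse = np.full(n_samples, x0)
    dw_accum = np.zeros(n_samples)
    for step in range(n_fine):
        dw = rng.normal(0.0, np.sqrt(dt), n_samples)
        x_fine += drift(x_fine) * dt + diffusion(x_fine) * dw
        dw_accum += dw
        if level > 0 and step % 2 == 1:   # one coarse step per two fine steps
            x_coarse += drift(x_coarse) * 2 * dt + diffusion(x_coarse) * dw_accum
            dw_accum = np.zeros(n_samples)
    g = lambda x: x ** 2                  # placeholder functional
    return g(x_fine) - (g(x_coarse) if level > 0 else 0.0)

def mlmc_estimate(max_level, samples_per_level):
    """Telescoping-sum estimator of E[g(X_T)]: sum of per-level mean differences."""
    return sum(coupled_level(l, m).mean()
               for l, m in zip(range(max_level + 1), samples_per_level))

print(mlmc_estimate(5, [4000, 2000, 1000, 500, 250, 125]))
```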
Geochemical approaches to the quantification of dispersed volcanic ash in marine sediment
Rachel P. Scudder1,6,
Richard W. Murray1,
Julie C. Schindlbeck2,
Steffen Kutterolf2,
Folkmar Hauff2,
Michael B. Underwood3,
Samantha Gwizd4,
Rebecca Lauzon5 &
Claire C. McKinley6
Volcanic ash has long been recognized in marine sediment, and given the prevalence of oceanic and continental arc volcanism around the globe and the widespread transport of ash, its presence is nearly ubiquitous. However, establishing the presence/absence of very fine-grained ash material, and identifying its composition in particular, is challenging given its broad classification as an "aluminosilicate" component in sediment. Given this challenge, many studies of ash have focused on discrete layers (that is, layers of ash of millimeter-to-centimeter or greater thickness, and their respective glass shards) found in sequences at a variety of locations and timescales, and on how to link their presence with a number of Earth processes. The ash that has been mixed into the bulk sediment, known as dispersed ash, has been relatively unstudied, yet it represents a large fraction of the total ash in a given sequence. The application of a combined geochemical and statistical technique has allowed identification of this dispersed ash as part of the original ash contribution to the sediment. In this paper, we summarize the development of these geochemical/statistical techniques and provide case studies from the quantification of dispersed ash in the Caribbean Sea, equatorial Pacific Ocean, and northwest Pacific Ocean. These geochemical studies (and their sedimentological precursors based on smear slides) collectively demonstrate that local and regional arc-related ash can be an important component of sedimentary sequences throughout large regions of the ocean.
In the modern era, marine scientists' interest in volcanic ash can be traced back at least to the HMS Challenger expedition (1872–1876). Indeed, "What is marine sediment made of?" is a question that has been asked for hundreds of years, and while many aspects of the petrology, mineralogy, and geochemistry of marine sediment have been well characterized, it is remarkable how much remains unknown about some key aspects of ocean mud. In particular, establishing the presence/absence and composition of volcanic ash is challenging given its fine grain size, broad classification as an "aluminosilicate", and common occurrence, especially since both oceanic and continental arc volcanism are often located physically adjacent to oceanic environments and are known to generate and transport widespread ash globally (Fig. 1). Certainly, civilization's natural curiosity about volcanism has driven interest in the marine records of ash for generations of scientists. Additionally, such studies have considered how to link the presence of ash with explosive volcanism, climate, arc evolution, biological productivity, and other geological processes, as described below. Although the term "ash" is normally used in volcanology as a grain size definition (32 μm to 2 mm, e.g., Fisher and Schmincke 1984) for explosive volcanic products (juvenile matter and lithic fragments), we here use the term "ash" to indicate a product (mostly volcanic glass) from explosive volcanism regardless of particle grain size. Therefore, in this paper we refer to "discrete ash layers" as a volcanic product that is present as a clearly visible bed in the marine sediments (that is, ash layers of millimeter-to-centimeter or greater thickness, Fig. 2a–f) and their respective glass shards (Fig. 2g–l) found in sequences at a variety of locations and timescales.
(Top) Global distribution of subaerial volcanoes in different tectonic settings. Map has been modified after http://d-maps.com and volcano positions are taken from the Smithsonian Global Volcanism Program (http://www.volcano.si.edu). (Bottom) Locations of sites highlighted in this paper. Data from GeoMapApp (http://www.geomapapp.org); GMRT Global Multi-Resolution Topography (Ryan et al. 2009)
Photographs of a mafic ash pockets, b discrete mafic ash layers, c felsic ash layer and pockets, d disseminated mafic ash pockets from selected primary mafic ash layers, e partially disseminated ash pockets, and f disseminated ash. All examples are from IODP Expedition 344 (CRISP II) offshore Costa Rica. Example smear-slide photomicrographs showing glass shard textures of felsic (g–i) and mafic (j–l) marine ash layers. g U1381-3H-7, 14–19 cm: felsic ash with tubular, transparent to light brown shards. h M66-226/78 cm bsf: silicic shard from highly vesicular pumice. i M66-222/51 cm bsf: pumice fragment with moderately elongated vesicles. j M66-223/305 cm bsf: blocky sideromelane shard with and without vesicles. k M66-222/436–441 cm bsf: elongated vesicles in a sideromelane shard. l M66-222/517–518 cm bsf: overview of different glass shards in one sample
Less widely recognized than the discrete layers of ash is what has been referred to as "dispersed" ash. "Dispersed ash" is defined as volcanic ash material of various grain sizes (including single-μm grains) that is mixed throughout the bulk sediment. This ash occurs in addition to the discrete ash layers. The presence of dispersed ash in the marine record has been relatively overlooked, as it is difficult to identify petrographically due to its commonly extremely fine grain size (Rose et al. 2003, and references therein) and/or its alteration to authigenic clay (Sigurdsson et al. 1997; Plank et al. 2000). Such alteration (e.g., Schacht et al. 2008) often results in fine-grained material that cannot be differentiated from detrital terrigenous clay (non-ash, derived from land) on the basis of visual study alone.
Dispersed ash results from the bioturbation of pre-existing discrete ash layers, the settling of airborne or subaqueous ash through the water column, transport by rivers and currents from terrestrially exposed ash deposits, and other processes (e.g., gravity flows). Although dispersed ash has often been addressed only qualitatively (e.g., via sedimentological smear slides), it plays as vital a role as discrete ash layers in many studies on multiple spatial and temporal scales, including (but not limited to) the following:
Ash plays an important role in geochemical budgets (e.g., the "Subduction Factory"), in that studies of geochemical recycling benefit from an improved understanding of both input (subduction) and output (volcanism) fluxes (e.g., Straub 1997, 2008; Plank and Langmuir 1998; Straub and Schmincke 1998; Bryant et al. 1999, 2003; Straub and Layne 2002, 2003a,b; Hauff et al. 2003; Straub et al. 2004; Stern et al. 2006; Plank et al. 2007; Scudder et al. 2009, 2014; Tamura et al. 2009; Völker et al. 2011, 2014; Freundt et al. 2014). The amount, distribution, and composition of ash in sediment speak to both inputs and outputs in the budget; namely, knowing how much ash enters the Subduction Factory is important for understanding the recycling of geological material. Additionally, the ash record in the sediment itself provides an archive of local, regional, and global magmatic evolution.
The marine repository of explosive volcanic eruptions is a vital archive of the geological history of arc evolution (e.g., Kennett et al. 1977; Ninkovich et al. 1978; Sigurdsson et al. 1980, 2000; Huang 1980; Cambray et al. 1993, 1995; Lee et al. 1995; Bailey 1996; Peters et al. 2000; Straub 2003; Kutterolf et al. 2008a,b,c; Straub et al. 2009, 2010, 2015). An improved understanding of arc evolution has the potential to contribute significantly to knowledge regarding the birth, life, and death of arc magmatism. Furthermore, a full accounting of the ash record (that is, comparison of the dispersed ash record with that of the discrete layers) may provide information about arc evolution and the erosional histories of landmasses. Studies of ash distribution are also vitally important to document risk associated with volcanic hazards.
Reconstructions of eruption intensities and atmospheric wind patterns have been based on grain size characteristics and inferred dispersal patterns of the ash found in discrete layers, both at sea and on land (e.g., Huang et al. 1973, 1975; Shaw et al. 1974; Ledbetter and Sparks 1979; Carey and Sigurdsson 1980, 2000; Carey and Sparks 1986; Reid et al. 1996; Rose et al. 2003; Carey et al. 2010). These finer particles are probably the main constituent of dispersed ash, and therefore documenting the distribution of dispersed ash will constrain inferences of the transport pathways of volcanic material.
The ability to quantify the amount of dispersed ash will increase knowledge of sedimentary physical properties (e.g., Peacock 1990; Kastner et al. 1991; Underwood and Pickering 1996; Chan and Kastner 2000; Saffer et al. 2008, 2012; Hüpers et al. 2015). Given that ash alters to clay in a hydration reaction, the amount of water bound in the sediment will increase, thus decreasing the shear strength of the mud. Additionally, the fluid budget of subducting sediment will be affected by these and, further, hydration/dehydration reactions. A quantification of the amount of dispersed ash will thus assist interpretation of physical properties of marine sediment and subduction zone modeling.
Volcanic ash and eolian dust delivered to the surface ocean contribute nutrients and are thus important for understanding trace metal cycling, biological productivity, climate change, volcano-climate interactions, and allied biogeochemical studies (e.g., Huang et al. 1974; Shaw et al. 1974; Carey 1997; Frogner et al. 2001; Jones and Gislason 2008; Robock et al. 2009; Duggen et al. 2010; Olgun et al. 2011; Kutterolf et al. 2013; Metzner et al. 2014).
Ash is commonly altered on and/or in the seafloor by diagenesis. In addition to the research cited above, for example, in a classic study, Gardner et al. (1986) observed many thin, pale green laminae (PGL) in the sediment cored during Deep Sea Drilling Project (DSDP) Leg 90 (Lord Howe Rise), which they identified as bentonites resulting from discrete ash fallout events. They found the temporal distribution of the PGLs to be similar to that of the discrete layers and logically concluded that the PGLs were merely altered ash. For dispersed ash, however, alteration complicates the application of techniques based on sedimentological and physical separation (e.g., grain size or sequential leachings that arrive at an aluminosilicate "residue"), because a fine-grained altered aluminosilicate ash is difficult to distinguish from, for example, fine-grained aluminosilicate eolian dust. The geochemical techniques summarized in this review, however, still identify these alteration products as part of the original ash contribution to the sediment. For example, Hein et al. (1978) showed that ash beds in the Bering Sea become more altered with burial and form bentonite layers. They also documented that the chemistry of the bentonite beds reflects the chemistry of the parent ash, and differentiated authigenic smectite, which forms from the in situ breakdown of glass in ash layers or of glass shards dispersed throughout the sediment, from detrital smectite derived from the subaerial breakdown of recycled volcanic debris.
The use of tephra as a stratigraphic and chronologic tool has a long and wide-ranging history (e.g., Cambray et al. 1995; Straub and Schmincke 1998; Lowe 2011) and we do not intend here to repeat those efforts. Rather, in this paper, we (a) present a review of previous geochemical approaches to quantify dispersed ash in the Caribbean Sea and equatorial Pacific Ocean, (b) provide a summary of recent work applying a combined geochemical and multivariate statistical technique to identify dispersed ash in the northwest Pacific Ocean, and (c) describe preliminary results moving toward a regional assessment of inputs to the western Pacific Ocean (Fig. 1). We include some areas of research that are currently unresolved as they are likely to provide further insight into the nature and distribution of this important component of marine sediment. While radiogenic isotopes have proven helpful in studies of dispersed ash and the very fine detrital component (e.g., Ziegler et al. 2008; Scudder et al. 2014), in this paper we focus on major, trace, and rare earth elements since they are more commonly used in the various regions we discuss.
Geochemical approaches: theory and practice
Composition is different from transport
A basic tenet of provenance research is that studies based on chemical, isotopic, or mineralogic composition, or many aspects of sedimentology (e.g., grain size), cannot alone infer the transport pathway (e.g., eolian vs. subaqueous transport) by which a grain of volcanic ash eventually becomes entrained in the sediment. Strictly speaking, transport pathway must be inferred based upon geological or geographic constraints. For example, in the context of eolian transport of eroded continental material, truly detrital aluminosilicate grains must be transported by wind to reach the deepest and most distal north Pacific (Rea and Leinen 1988; Kyte et al. 1993; Rea 1994).
For volcanic ash, however, the situation is more complicated. Subaqueous eruptions can lead to significant volcanic input (e.g., Fiske et al. 2001; Maicher and White 2001; White et al. 2003; Tani et al. 2008). Fiske et al. (2001, their p. 822) in particular note how the finest-grained fraction is missing from caldera deposits in the Izu-Bonin system. This "missing material" undoubtedly ends up as dispersed ash throughout the bulk sediment and, due to its grain size, has the potential to be advected great distances from its point of origin. Determining this material's composition in the bulk sediment will not, a priori, differentiate between subaerial (eolian) and subaqueous transport.
Subaqueous eruptions can also lead to pumice rafts, which may persist in the ocean for weeks, months, or years (Simkin and Fiske 1983; Risso et al. 2002; Bryan et al. 2004). Even in today's modern era of enhanced ocean and atmospheric observation, such occurrences are surprises (BBC News, 10 Aug. 2012). Yet, although these events may be uncommon on the human timescale, they are most likely significant pathways of ash to the deep sea over longer time frames.
Mixes of mixtures
In the broad context of geologic material, all non-biogenic rocks and sediments on the surface of Earth can be chemically described as falling along a mixing line or mixing curve between families of primordial materials (e.g., basalts) and families of fractionated "upper continental crust". Deviations from such a mixing line or curve are due to local provenance effects, authigenic/diagenetic resetting, weathering, and other processes, but the overall relationship holds true. While such a compositional range may seem relatively easy to differentiate analytically (e.g., felsic vs. mafic), the cases are rare when there are only two aluminosilicate sources and they are sufficiently compositionally distant from each other along the mixing line or curve. Far more common is the challenge of resolving multiple sources that are each at different fractional lengths along a theoretical mixing line or curve. For example, continental crust has been approximated as "granodiorite", "dacite", or other intermediate composition igneous/volcanic rocks (e.g., Taylor and McLennan 1985), and therefore distinguishing eroded continental crust from wind-blown dacitic ash is challenging. Or, still more challenging are scenarios whereby two disparate sources such as a marine basalt and a wind-blown dust of broad continental crust composition are both authigenically altered to, for example, assorted smectites, and thus have the potential to appear as a single source.
The above examples speak to the importance, for geochemical studies of dispersed ash, of directing attention to refractory elements that are appropriate for unraveling mixing problems and that are relatively unaffected by diagenetic and authigenic processes. There are no chemical elements that unambiguously record any single geochemical provenance or domain. Within the spectrum of potential elements for use in general characterization as well as in the multivariate statistics discussed below, inclusion of certain common major elements such as Al, Ti, and Fe, along with certain common trace elements such as La and Sc, proves to be beneficial. On a case-by-case basis, certain other key elements such as Th, Nb, and Cr can be advantageous as well (e.g., Plank 2005), although interpretations based on only a few elements are likely to be less representative than studies based on suites of many elements. However, common elements involved in reverse weathering reactions (e.g., Mg; Mackenzie and Garrels 1966; Presti and Michalopoulos 2008) should be avoided in such studies of provenance, given their labile behavior.
The contribution from geochemistry teamed with multivariate statistics
A variety of geochemical approaches can help resolve such mixing issues. In certain situations, such as in the Caribbean Sea example below, single element "normative calculations" (e.g., Leinen 1987) are sufficient, while in other cases multiple different analytical and/or computational approaches need to be simultaneously deployed to develop a more holistic understanding of the geochemical variability within datasets. We emphasize here a combined geochemical approach that generates a wide-ranging elemental suite that is examined with multivariate statistics.
The overall approach we summarize in this paper is based on Q-Mode Factor Analysis (QFA) and two forms of multiple linear regression, a Constrained Least Squares (CLS) technique and a Total Inversion (TI) technique (Pisias et al. 2013). The history of these techniques, as well as the specific MATLAB scripts to apply them to marine sediment chemistry (and recommendations for their use), are detailed in Pisias et al. (2013). The interested reader is also pointed toward previous papers by Dymond (1981), Leinen and Pisias (1984), Zhou and Kyte (1992), and Kyte et al. (1993). In our research group, we have successfully used these techniques in marine sediment from the Cariaco Basin, Arctic Ocean, equatorial Pacific, northwest Pacific Ocean, and South Pacific Gyre (Martinez et al. 2007, 2009, 2010; Ziegler and Murray 2007; Ziegler et al. 2007, 2008; Scudder et al. 2009, 2014; Dunlea et al. 2015a, b), demonstrating their utility throughout a variety of oceanic depositional regimes.
QFA is a powerful tool because, among other things, it is an "objective" technique (e.g., Leinen and Pisias 1984) that is minimally affected by inadvertent researcher bias. There are no inputs other than the data itself. It is limited in many cases, however, in that the resultant compositional factor scores may not be geologically reasonable, or geologically precise enough, across the spectrum of the wide element menus commonly generated in today's world of rapid geochemical analysis. Factor analysis is best used to help identify the number of independent components (also referred to as "end members") and their general composition, but not necessarily their extremely specific compositions. Therefore, one of the limitations of using QFA alone is that the user often cannot identify the specific geologic source(s) of the bulk sediment.
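The mechanics can be sketched compactly: samples (rows) are normalized to unit length, a truncated SVD supplies the factors, and a VARIMAX rotation sharpens them. This sketch captures only the flavor of QFA; the published MATLAB scripts of Pisias et al. (2013) include the scaling choices and diagnostics used in practice.

```python
import numpy as np

def varimax(A, max_iter=100, tol=1e-8):
    """VARIMAX rotation of a loadings matrix A (rows x factors)."""
    p, k = A.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = A @ R
        u, s, vt = np.linalg.svd(
            A.T @ (L**3 - L @ np.diag((L**2).sum(axis=0)) / p))
        R = u @ vt
        crit = s.sum()
        if crit < crit_old * (1 + tol):   # rotation criterion has converged
            break
        crit_old = crit
    return A @ R

def q_mode_factors(X, n_factors):
    """Q-mode factor analysis sketch. X: samples x elements."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)    # unit-length rows (Q-mode)
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    loadings = varimax(U[:, :n_factors] * s[:n_factors]) # rotated sample loadings
    communality = (loadings ** 2).sum(axis=1)            # fraction of each sample explained
    return loadings, communality
```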
The number of end members necessary to best describe the variability of the dataset (that is, the number of factors indicated by the QFA) helps steer the CLS and TI multiple linear regressions. Specifically, in order to identify the exact sources indicated by QFA, the CLS or TI linear regressions must be applied. In CLS and TI, the researcher inputs the compositions of potential end members and mixes them until the best solution is reached (the differences between CLS and TI are detailed in Pisias et al. (2013)). Thus, we use QFA to generate an objective "answer" as to the number of sources and their broad composition(s), and then CLS and/or TI, based on thoughtful selection of potential sources derived from knowledge of previous work and the local/regional geology, to generate the quantitative mix of specific sources. Libraries of potential end members are typically (a) constructed from the literature of all likely volcanic contributors (that is, individual papers about the geochemistry of a particular volcano or region), (b) compiled from international global databases (e.g., EarthChem; http://earthchem.org), and/or (c) developed internally from the study's own data (e.g., using the composition of a discrete ash layer to assess whether it could potentially be a contributor to the dispersed ash component).
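At its core, a CLS mixing model is a non-negative least squares problem with a sum-to-one condition on the end-member fractions; a minimal sketch (the soft enforcement of the sum via a heavily weighted extra row, and the element-by-element normalization noted in the docstring, are our implementation choices, not the published scripts):

```python
import numpy as np
from scipy.optimize import nnls

def cls_fractions(endmembers, sample, weight=1e3):
    """Solve sample ~ endmembers @ f with f >= 0 and sum(f) ~ 1.
    endmembers: (n_elements, n_sources); sample: (n_elements,).
    Columns and sample should be normalized element-by-element
    beforehand so that no single element dominates the fit."""
    n_sources = endmembers.shape[1]
    A = np.vstack([endmembers, weight * np.ones((1, n_sources))])
    b = np.append(sample, weight)       # weighted row enforces sum(f) ~ 1
    fractions, residual = nnls(A, b)
    return fractions, residual
```

Fractions near zero then flag end members that a given sample does not require, which is one practical way to compare competing source libraries.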
Despite the quantitative attractiveness of applying statistical treatments to a well-measured geochemical dataset, there is no single diagnostic "magic bullet" to fulfill the task of identifying the dispersed ash component within bulk sediment. Instead, a series of interpretative techniques must be used prior to applying the statistical techniques. The use of standard geochemical approaches such as expressing data as concentrations, calculation of fluxes and/or elemental ratios, study of simple coefficient of determination ($R^2$) matrices, x-y diagrams, and other well-established approaches is extremely important. Such approaches define the boundaries for mass balance, allow comparison with lithologic descriptions and mineralogy, help ground the ensuing statistical treatments, and, in a way that is hard to quantify, ensure that the researcher is intimately familiar with the fundamental underpinnings of each dataset.
Finally, statistical treatments such as those discussed here often lead to non-unique solutions. Contextual oceanographic or geological knowledge plays a major role, however, and can eliminate some of the non-unique solutions. Our group takes a very labor-intensive approach to dealing with this broad subject for both factor analysis and multiple linear regression, strategically built around performing dozens to hundreds of statistical runs per dataset to assess sensitivity of, and variability in, the results (Scudder et al. 2009, 2014; Dunlea et al. 2015a,b). Each dataset is different, and we have found some in which any variation in one element will make a difference; but in another dataset varying a different element will exert a larger lever arm. We take great care to ensure that our results are not dependent upon, or unique to, one or two narrow statistical models nor to one or two specific elements. Doing so would be counter to the very nature of our task at hand—to develop a widely applicable, internally consistent understanding of "What is marine sediment made of?"
In this section, we present a review of previously published studies focused on unraveling bulk marine sedimentary compositions. These case studies range from a simple three-component system (Caribbean Sea), in which element normative calculations alone are sufficient to determine the end members, to a very complex system in which the bulk sediment is composed entirely of aluminosilicates with very similar compositions and requires a combination of geochemical and statistical techniques in order to identify the individual compositions (Izu-Bonin-Mariana Arc).
Caribbean Sea: a simple case
The Caribbean Sea is ringed by volcanic arcs (Fig. 1; Sigurdsson 1999). To the west rises the Central American volcanic arc system, with a spectrum of volcanic edifices ranging in size from basaltic cinder cones to large stratovolcanoes and calderas. To the east is the Antilles island arc, from which some of the most famous volcanic eruptions in the world originate (e.g., Lesser Antilles, Carey and Sigurdsson 1978). These combined volcanic systems have contributed large amounts of ash to sediment in the Atlantic Ocean and Caribbean Sea from the Miocene on (Carey and Sigurdsson 1980, 2000; Peters et al. 2000; Jordan et al. 2006; Expedition 340 Scientists 2012; Le Friant et al. 2013). Ocean Drilling Program (ODP) Leg 165 cored throughout the Caribbean Sea (Fig. 1), with its most important objectives related to the Cretaceous-Paleogene bolide impact. One surprise that resulted from this research cruise was the startling number of discrete ash layers (Fig. 3, Carey and Sigurdsson 2000; Sigurdsson et al. 2000), whose extent in time and space was unexpected. In addition to the discrete ash layers, shipboard study observed ash dispersed throughout the background sediment. Chemical analyses of the bulk sediment allowed the development of a single-element normative calculation scheme to quantify the abundance of this dispersed ash in these carbonate-rich sediments (Peters et al. 2000). Fortuitously, most of the sequence is a relatively simple three-component system (CaCO3, terrigenous material, and ash), which facilitated the study of dispersed ash in parallel with the discrete layers. Petrographically, the bulk sediment appeared to consist of only carbonate and aluminosilicate clay, and the aluminosilicate clay was assumed to be entirely comprised of eroded terrigenous material (Peters et al. 2000).
Closeup photos and photomicrographs of ash layers from ODP Site 998, western Caribbean Sea. (Left) Volcanic ash layer in Section 165-998A-10H-4, 50–72 cm. Note the sharp base and transitional top that has been reworked by bioturbation. (Right, top) Section 165-998A-19X-3, 55–56 cm. Typical rhyolitic glass shards from a 13.8 Ma volcanic ash fall layer. The width of the field of view is 0.5 mm. (Right, bottom) Typical fresh (clear) and altered (cloudy) glass shards and an amphibole phenocryst in a 15 Ma volcanic ash fall layer in Section 165-998A-21X-3, 51–52 cm. From Sigurdsson, Leckie, Acton, et al. 1997
Bulk chemical analyses, however, demonstrated that the sediment was depleted in Cr well beyond what could be explained by carbonate dilution. For example, in a sample with an equal (50 % and 50 %) mix of CaCO3 and terrigenous clay, one would predict ~55 ppm Cr in the bulk composition (based on a typical "average shale" of ~110 ppm Cr, using the Post-Archean average Australian Shale [PAAS] of Taylor and McLennan (1985) to broadly represent an upper crustal terrigenous source). Instead, such samples presented <10 ppm Cr, indicating that the brown aluminosilicate groundmass was not solely detrital terrigenous material (Peters et al. 2000).
Given the simple three-component system, normative calculations (Leinen 1987) were appropriate to identify the sources to the bulk sediment. Peters et al. (2000) based their calculations on Cr, given that the end member values of Cr in typical shale (110 ppm) and rhyolite (<3 ppm) are strongly divergent. Thus, in a three-component system of CaCO3 (which is quantified precisely by coulometry), terrigenous material (calculated via Cr), and ash, the amount of ash can be calculated by difference, as follows:
$$(\%\ \mathrm{Ash})_{\mathrm{sample}} = 100 - (\%\ \mathrm{CaCO_3})_{\mathrm{sample}} - (\%\ \mathrm{Terrigenous})_{\mathrm{sample}}$$
$$(\%\ \mathrm{Terrigenous})_{\mathrm{sample}} = 100 \times \left(\mathrm{Cr}_{\mathrm{sample}} / \mathrm{Cr}_{\mathrm{PAAS}}\right)$$
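These two relations translate directly into code; a minimal sketch of the Cr-based partitioning (the 110 ppm PAAS value follows Taylor and McLennan (1985), and the example inputs are hypothetical):

```python
def normative_ash(cr_ppm, caco3_wt_pct, cr_paas=110.0):
    """Cr-based normative partitioning of bulk sediment (after Peters et al. 2000)."""
    terrigenous = 100.0 * (cr_ppm / cr_paas)     # % Terrigenous from Cr
    ash = 100.0 - caco3_wt_pct - terrigenous     # % Ash, by difference
    return terrigenous, ash

# A sample with 8 ppm Cr and 60 wt.% CaCO3 (hypothetical values):
terr, ash = normative_ash(8.0, 60.0)   # -> ~7.3 % terrigenous, ~32.7 % ash
```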
Peters et al. (2000) thus documented on a sample-by-sample basis that dispersed ash comprises 15–20 wt.%, with a maximum of 45 wt.%, of the bulk sediment in the western Caribbean. These values were consistent with the sedimentological smear-slide analyses (Sigurdsson et al. 1997) but were more precise.
Peters et al. (2000) also observed that the timing of the accumulation rate of dispersed ash paralleled the sedimentation rate of the discrete layers, although the maxima in dispersed ash preceded the Miocene and Eocene maxima in discrete layers by ~2–4 Ma (Fig. 4). They interpreted the relative timing as recording arc evolution, with the dispersed ash being generated by smaller volcanoes characteristic of the more juvenile arc, and the larger discrete layers representing a mature arc characterized by larger stratovolcanoes and caldera systems, as suggested by Carey and Sigurdsson (2000) and Sigurdsson et al. (2000). These are capable of injecting large plumes of Central American co-ignimbrite ash into the lower stratosphere and upper troposphere, that is, the ideal atmospheric height to facilitate west-to-east blowing wind transport (e.g., rather than transport from east-to-west in the higher stratosphere) (Fig. 4). This interpretation is consistent with the previous studies that have shown that Paleogene and Neogene Central American volcanism is predominantly characterized by larger ignimbrite-forming eruptions (Jordan et al. 2006) that may have frequently produced co-ignimbrite ash plumes reaching lower stratospheric heights (~15 to 20 km; e.g., Woods and Wohletz 1991; Bursik 2001), in contrast to the higher stratospheric eruption columns of Plinian eruptions (>>20 km; e.g., Kutterolf et al. 2008a).
(Top) Dispersed (blue line) vs. discrete (red field) ash accumulation in the Caribbean Sea. Accumulation of discrete ash layers (Carey and Sigurdsson 2000; Sigurdsson et al. 2000) and dispersed ash (Peters et al. 2000). ODP Site 998, western Caribbean Sea. (Bottom) Eruption transport and deposition patterns for recent to Miocene eruptions of the Central American Arc. Large plinian eruption columns (light to dark gray) reach deep into the stratosphere and are therefore transported to the west by the prevailing winds. Large ignimbrite-generating eruptions, predominantly in the Miocene, generate co-ignimbrite ash plumes (pinkish to light gray) reaching only into the upper troposphere/lower stratosphere and are therefore distributed to the east. Tropopause is shown with a dashed line at 15 km altitude. Gray fields below the sea level demonstrate submarine ash deposits
Equatorial Pacific Ocean: a complex case
Our group also has a long-standing interest in studying the equatorial Pacific (e.g., Murray and Leinen 1993, 1996; Murray et al. 1993, 1995, 2000; Kryc et al. 2003; Ziegler and Murray 2007; Ziegler et al. 2007, 2008, plus others). The non-biogenic component of these sediments includes terrigenous clay and ash, which are challenging to resolve in sediment with more than 98 % biogenic material (carbonate and biogenic silica). Some of our work specifically targeted how to resolve the eolian, ash, and authigenic components of these sediments, building upon the classic research of the long-standing Michigan group and their progeny (e.g., Hovan et al. 1991; Rea 1994; Rea et al. 1998).
For example, Ziegler et al. (2007) detailed chemical criteria that can be applied to differentiate authigenic, terrigenous, and volcanogenic aluminosilicates from each other. Sequential extractions were performed at ODP Site 1215 (Fig. 1) to remove non-aluminosilicate components. The material remaining upon completion of the extraction procedure was termed the "residual" component, consistent with established protocols. La-Sc-Th ternary diagrams and PAAS-normalized rare earth element (REE) patterns (Fig. 5a) and reference REE patterns (Fig. 5b) were used to aid in determination of the sedimentary components. In the residual component, Ziegler et al. (2007) quantified upper continental and lower crustal components using selected REE, Sc, and Th abundances. In old (>50 Ma) nannofossil ooze, the residual component exhibited a large La contribution. Additionally, the residual component of the nannofossil ooze fell outside the boundaries of a simple two-component mixing between upper and lower crust components. REE patterns for these samples exhibit sharp, seawater-like negative Ce anomalies, providing a clear indication that an authigenic phase is present in the residual component (Fig. 5c). The old age of these sediments may contribute to the formation of the authigenic phase, given that such samples have likely experienced significant diagenesis. In this way, the geochemical approach clearly identified samples in which it is virtually impossible to differentiate eolian material from volcanic ash (Fig. 5c, d).
a La-Sc-Th ternary diagram and PAAS-normalized rare earth element (REE) patterns from Site 1215 (central Pacific) and Site 1256 (eastern Pacific) from Ziegler et al. (2007). b Reference REE patterns. Asian loess (Jahn et al. 2001), western Pacific ash (Bailey 1993), EPR and Pacific bottom water (Piepgras and Jacobsen 1992), hydrothermal residual sediment (Severmann et al. 2004), and hydrothermal fluid (Klinkhammer et al. 1994). c Residual component extracted from early-mid Eocene biogenic sediment at Site 1215 as defined in Ziegler et al. (2007). Note the significantly negative Ce anomaly indicating a seawater source. d Residual component extracted from the red clay unit from Site 1215 and the silty nannofossil ooze from Site 1256 as defined in Ziegler et al. (2007). See legend for symbol description. Groupings are based on REE patterns. Factor 1 is interpreted as lower crust island arc volcanic arc material and Factor 2 is similar to upper continental crust. T1, T2, and T3 are eolian dust samples classified based on REE patterns, Eu/Sm, and La/Sc ratios from Hyeong et al. (2005). URC upper red clay; LRC lower red clay
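The seawater-like negative Ce anomaly flagged above is commonly quantified as Ce/Ce*; the geometric interpolation between neighboring La and Pr shown here is one common formulation, and the PAAS normalizing values are our assumed inputs (after Taylor and McLennan 1985):

```python
import numpy as np

PAAS = {"La": 38.2, "Ce": 79.6, "Pr": 8.83}  # ppm; assumed normalizing values

def ce_anomaly(la_ppm, ce_ppm, pr_ppm):
    """Ce/Ce* from PAAS-normalized La, Ce, Pr (geometric interpolation)."""
    la_n = la_ppm / PAAS["La"]
    ce_n = ce_ppm / PAAS["Ce"]
    pr_n = pr_ppm / PAAS["Pr"]
    return ce_n / np.sqrt(la_n * pr_n)

# Values well below 1 flag a seawater-like (authigenic) contribution.
```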
Izu-Bonin-Mariana Arc (ODP Site 1149 and DSDP Site 52)
One of the goals of ODP Leg 185 was to test whether subducted sediments control along-strike geochemical differences in Izu-Bonin-Mariana arc composition (Fig. 1, Plank et al. 2000). In addition to the potential influence of the terrigenous (non-ash) sedimentary component, the amount of ash in the sediment column is critical to constrain, not only for the determination of absolute geochemical fluxes into the "Subduction Factory" but also in order to determine how much of the input is "recycled" from the arc itself. Due to the long-standing interest in the region from a variety of geological perspectives, there is also a wealth of tectonic- and arc-related information to draw upon in the context of both ash layers and dispersed ash.
Preliminary work using a Nb-based normative calculation showed that 33 ± 9 wt.% of the sediment at ODP Site 1149 is comprised of dispersed ash (Plank et al. 2000). As described in Scudder et al. (2009, 2014), in order to determine the sources to the bulk sediment, we followed the QFA and TI approaches and MATLAB scripts from Pisias et al. (2013). The elemental suite we targeted was composed of refractory elements predominantly associated with aluminosilicate components (Al, Ti, Sc, Cr, Ni, Nb, La, Th), as these are the most likely to identify differences between source components. The QFA results indicate that four factors (end members) explain 97 % of the variability of the bulk sediment (Scudder et al. 2014, Fig. 6). Given that QFA alone is not sufficient to identify the source of these factors, TI modeling was employed. The TI modeling identifies a combination of Chinese Loess (CL), rhyolitic ash from the Honshu arc (HR), mafic ash from the Izu-Bonin Front Arc (IBFA), and a second eolian dust (termed "Eolian 2") that best explains the total chemical composition of the bulk sediment.
VARIMAX factor scores from QFA, (Top) Site 1149, (Bottom) Site 52, from Scudder et al. (2014). Note the similarities between Site 1149 Factor 2 and Site 52 Factor 4 as well as Site 1149 Factor 3 and Site 52 Factor 2
Based on the TI results, the dispersed ash mass accumulation rate (MAR, g/cm2/ky) at Site 1149 can be calculated and compared to the MARs of a series of ash layer parameters. At this site, the number of ash layers most closely tracks the total dispersed ash MAR (that is, the sum of HR + IBFA). That the number of layers is most similar to the dispersed MAR suggests that eruption frequency, rather than eruption size, is the driving mechanism for the dispersed ash record (Scudder et al. 2014, Fig. 7).
Site 1149. The accumulation rate of total dispersed ash (gray shaded with filled circles in g/cm2/ky) plotted against parameters associated with the sedimentation of discrete ash layers per 1 Ma (open circles) from Scudder et al. (2014)
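MARs of individual components, such as those plotted above, follow the standard construction: bulk accumulation (linear sedimentation rate times dry bulk density) scaled by the modeled weight fraction. A minimal sketch with hypothetical inputs:

```python
def component_mar(lsr_cm_per_ky, dry_bulk_density_g_cm3, weight_fraction):
    """Mass accumulation rate of one sediment component, in g/cm2/ky."""
    return lsr_cm_per_ky * dry_bulk_density_g_cm3 * weight_fraction

# Hypothetical inputs: 2 cm/ky sedimentation, 0.8 g/cm3 DBD, 30 wt.% dispersed ash
mar_dispersed = component_mar(2.0, 0.8, 0.30)   # -> 0.48 g/cm2/ky
```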
The MAR patterns of the ash components are consistent with published eruption records of both the Izu-Bonin and Honshu arcs, and we can interpret these changes in accumulation rate in terms of arc history (Fig. 8). Focusing first on the Honshu Rhyolite, we observe that this component is the dominant dispersed ash, even though the Honshu arc is relatively far away. Given Site 1149's distal location, eruptions from the Honshu arc could have been large and yet resulted in only thin layers. These layers may subsequently be mixed into the bulk sediment to create the dispersed ash component. Alternately, the dispersed ash could simply represent an increase in deposition of ash from the atmosphere. The MAR of HR is consistent with the tectonic history of the Honshu arc, particularly through the younger portions of the record (e.g., Taira 2001). For the IBFA component, the models closely track the temporal changes in the tectonic record of Izu-Bonin (e.g., Sibuet et al. 1987). MAR patterns indicate that a gradual increase in IBFA volcanism occurred from ~18–10 Ma, with a burst of activity from ~4.5–3 Ma followed by a steady increase beginning at ~2 Ma (Scudder et al. 2014, Fig. 8).
Individual data and three-point averages of MARs of each dispersed ash end member, Site 1149. Inset focuses on the younger portion of each record. Note color-coded different x-axis scales and labels. From Scudder et al. (2014)
We can compare and contrast Site 1149 to DSDP Site 52 in the northern Marianas arc (Fig. 1). Site 52 contains extremely high abundances of volcanic ash not directly linked to a specific eruption style (upwards of 50 % volcanic glass mixed in the brown clay; Fischer et al. 1971). Unfortunately, rotary drilling at Site 52 left only a few ash layers intact (all in the upper 20 mbsf; Fischer et al. 1971). Applying QFA with the same chemical suite as used at Site 1149 indicates that four factors (end members) are present (Fig. 6). Two of these factors closely resemble those found at Site 1149, while the other two do not correspond well, if at all (Scudder et al. 2014). Given the limitations of QFA described previously, this interpretation is based on the factor scores (the weight of each element on the discrimination of a single factor) and the broad compositional scores produced by the modeling. TI linear modeling confirms this interpretation, identifying a mix of Chinese Loess, IBFA, dispersed boninite from the Izu-Bonin arc (BNN), and a dispersed felsic ash with the same composition as felsic layers from Site 52 (referred to as Felsic52) as best explaining the bulk sediment chemical composition.
Site 52 presents some key similarities to, and differences from, Site 1149. First, the total amount of dispersed ash (regardless of composition) at both sites is very high, averaging 30 ± 17 wt.% at Site 1149 and 36 ± 18 wt.% at Site 52. Second, whereas at Site 1149 the two ash sources (HR, IBFA) are found both as layers and as dispersed ash, at Site 52 at least three volcanic sources are required to explain the bulk composition of the sediment. The discrete layers appear to be IBFA and felsic ash layers from Site 52, while the dispersed ash is comprised of IBFA, Felsic52, and an additional component, average Izu-Bonin Boninite (BNN). Therefore, at Site 52, the ash layers and the dispersed ash are partially compositionally decoupled from each other (Scudder et al. 2014).
In this section, we present new data and ongoing observations from a variety of settings in the Northwest Pacific. While some of this research may still be developing, we include it here in order to show the strength of the combined geochemical/statistical technique in moving toward a regional perspective of inputs to the Western Pacific.
Nankai Trough, IODP Sites C0011 and C0012
IODP Sites C0011 and C0012 (Fig. 1), drilled as part of the Integrated Ocean Drilling Program (IODP) NanTroSEIZE transect, are located in the Shikoku Basin ~100 km southeast of the Kii Peninsula and ~160 km west of the Izu-Bonin arc on the Kashinosaki Knoll, a prominent bathymetric high. Located near the crest of the Kashinosaki Knoll, Site C0012 represents a condensed sediment section in comparison to Site C0011, which is located on the northwestern flank. The main goal of NanTroSEIZE is to drill across the up-dip limit of the seismogenic and tsunamigenic zone over the Nankai Trough subduction boundary, along which mega-thrust earthquakes are known to occur (Tobin and Kinoshita 2006). Shipboard sedimentological smear slide analyses at both Sites C0011 and C0012 estimate that on average dispersed ash constitutes ~25–30 wt.% through Units I and II, and decreases to ~7–15 wt.% in the hemipelagic facies of Unit III (Saito et al. 2010; Henry et al. 2012).
Statistical analyses of newly acquired data from Sites C0011 and C0012 were performed using the same element suite as that applied to Site 1149. At Site C0011, QFA indicates that three factors explain 99 % of the variability of the bulk sediment (Fig. 9). While we cannot determine the exact composition of the end members from QFA, one of the modeled sources appears to be intermediate (roughly equal contributions of most of the elements in the model, that is, of Sc, Cr, Ni, Nb, La, and Th) in composition while the other two are broadly felsic (contributions driven by Al, Ti, and Sc, with lesser covarying Nb, La, and Th). CLS multiple linear regression analysis suggests that the three sources at Site C0011 are consistent with one being PAAS, and the other two being ashes having compositions similar to a representative Rhyo-Dacite and Rhyolite from the Honshu Arc (HR) (Fig. 10). These latter two sources may in fact be dispersed ash but could also be ash that has since been altered and yet retained its refractory chemical signature. Regardless, these two types of dispersed ash and/or clay minerals with comparable geochemical signatures comprise 36 ± 8 wt.% of the bulk sediment. Evidence for multiple sources of volcanic material, and/or the chemically equivalent clay-alteration products, is consistent with studies of tuffaceous and volcaniclastic sandstones from this site, which call for volcanic material from both the Izu-Bonin and Honshu Arcs (e.g., Pickering et al. 2013; Schindlbeck et al. 2013; Kutterolf et al. 2014).
VARIMAX factor scores from QFA. (Top) Site C0011. (Bottom) Site C0012
Abundance (weight %) of sources identified by constrained least squares multiple linear regressions (CLS). (Left) Site C0011. (Right) Site C0012
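As an illustration of the general logic behind the CLS step, the sketch below solves a small mixing problem with non-negative fractions that sum to ~1. It is not the Pisias et al. (2013) MATLAB code; the end-member compositions, element menu, and the single bulk-sediment sample are invented placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Sketch of a constrained least squares (CLS) mixing model. Rows are
# candidate end members; columns are elements (e.g., Al, Ti, Sc, Nb, La, Th).
# All values are illustrative, not measured compositions.
end_members = np.array([
    [8.4, 0.60, 16.0, 19.0, 38.0, 14.6],  # e.g., PAAS-like crustal source
    [7.9, 0.45,  9.0,  5.0, 17.0,  5.0],  # e.g., felsic ash
    [8.6, 0.90, 30.0,  2.0,  4.0,  0.6],  # e.g., mafic/intermediate ash
])
bulk = np.array([8.2, 0.58, 15.0, 11.0, 24.0, 8.5])  # one bulk-sediment sample

# Normalize each element so no single high-concentration element dominates.
scale = bulk.copy()
A = (end_members / scale).T
b = bulk / scale

# Append a heavily weighted row forcing the fractions to sum to ~1;
# nnls itself enforces the non-negativity constraint.
w = 100.0
A_aug = np.vstack([A, w * np.ones(A.shape[1])])
b_aug = np.append(b, w * 1.0)

fractions, resid = nnls(A_aug, b_aug)
print(fractions, fractions.sum())  # non-negative fractions, summing to ~1
```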
Applying the same methods to Site C0012, QFA indicates that four factors (end members) explain 99 % of the variability in the bulk sediment (Fig. 9). The modeled compositions of the bulk sediment at Site C0012 identify two intermediate and two felsic end members for the geochemical proxies. CLS indicates that these end members are best explained by mixing PAAS and three ashes: the dacitic ash layers from Site C0012, rhyolite from the Izu-Bonin Arc (IBR), and andesite from the Izu-Bonin Arc (IBA). Collectively, the ash compositions (and/or their chemically equivalent clay-mineral alteration products) contribute approximately one-half (49 ± 10 wt.%) of the bulk sediment (Fig. 10). Here, it is important to recall that our geochemical approach cannot resolve the pathway by which the ashes have reached these sites, as some of these ashes may be eroded from terrestrial deposits in the Japanese archipelago.
We compare the mass accumulation rate of the dispersed ash (or its clay alteration) component to a number of common ash layer parameters, including the number of ash layers per 0.2 Ma, the thickest ash layer per 0.2 Ma, and the total thickness of ash layers per 0.2 Ma. We binned the data by 0.2 Ma units, in order to generate a discrete ash layer dataset of approximately the same temporal resolution as that of our dispersed ash record (Fig. 11).
The accumulation rate of total dispersed ash (gray shaded line, g/cm2/ky, the same in each panel by site) plotted against parameters associated with the sedimentation of discrete ash layers per 0.2 Ma (black line, parameters are different in each panel as shown in the bottom x-axis labels). A moving 0.2 Ma window was chosen to bin each discrete ash layer parameter to approximate the resolution of the modeled dispersed ash record. (Top) Site C0011. (Bottom) Site C0012
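A minimal sketch of this binning step, assuming a hypothetical inventory of ash-layer ages and thicknesses, is:

```python
import numpy as np
import pandas as pd

# Sketch of binning a discrete ash-layer inventory into 0.2 Ma windows to
# match the resolution of the dispersed-ash MAR record. The ages and
# thicknesses below are invented for illustration.
layers = pd.DataFrame({
    "age_ma": [0.05, 0.12, 0.31, 0.33, 0.78, 0.95, 1.40],
    "thickness_cm": [2.0, 5.5, 1.0, 7.0, 3.0, 12.0, 4.0],
})

# Pad the upper edge so the oldest layer falls inside the last bin.
bins = np.arange(0.0, layers["age_ma"].max() + 0.4, 0.2)
grouped = layers.groupby(pd.cut(layers["age_ma"], bins), observed=False)["thickness_cm"]

summary = pd.DataFrame({
    "n_layers": grouped.size(),    # number of ash layers per 0.2 Ma
    "thickest_cm": grouped.max(),  # thickest layer per 0.2 Ma
    "total_cm": grouped.sum(),     # total ash thickness per 0.2 Ma
}).fillna(0)
print(summary)
```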
When compared to various ash layer parameters, we find that the MAR of the total dispersed ash component at Site C0011 correlates best with "Thickest Layer" in Unit I. This suggests that through this interval, eruption volume, rather than the frequency of explosive eruptions, drives the dispersed ash accumulation (Fig. 11). In contrast, in Unit I of Site C0012, the number of discrete ash layers, rather than their thickness, is best correlated with the dispersed ash MAR. At both sites, the number and thickness of ash layers vastly decrease below the Unit I/II boundary while the dispersed ash component remains high as a geochemical proxy. This reflects time periods when the volcanism in SW Japan was reduced or eliminated (Mahony et al. 2011), when the sites were influenced by siliciclastic turbidites (Units II, IV, and V), or when dispersal paths from the main detrital sources were further away from eruptive fronts (Unit III). Changes in any of these factors may have resulted in fewer and/or thinner layers that would be more susceptible to being homogenized (by bioturbation or otherwise). That the geochemical signature equivalent to dispersed ash is high at times when so few ash layers were deposited clearly demonstrates that clay minerals derived from volcanic ash and volcanic rock (e.g., smectite group) are a much greater contributor to the sedimentary sequence than is recorded by the ash layers alone.
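The comparison can be made quantitative by correlating the binned dispersed-ash MAR against each discrete-layer parameter; the short sketch below uses invented, equally binned series purely for illustration.

```python
import numpy as np

# Sketch of comparing a dispersed-ash MAR record against binned ash-layer
# parameters. All series are hypothetical and share the same 0.2 Ma bins.
dispersed_mar = np.array([1.2, 1.8, 2.5, 2.1, 0.9, 0.7])  # g/cm^2/ky per bin
n_layers = np.array([1, 3, 4, 3, 1, 0])                   # layers per bin
thickest = np.array([2.0, 6.0, 11.0, 8.0, 2.5, 0.0])      # cm per bin

for name, series in [("number of layers", n_layers), ("thickest layer", thickest)]:
    r = np.corrcoef(dispersed_mar, series)[0, 1]
    print(f"correlation with {name}: {r:.2f}")
```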
Shikoku Basin, DSDP Site 444
DSDP Site 444 was drilled during Leg 58 to test hypotheses about the origin of marginal basins (Fig. 1, deVries Klein and Kobayashi 1980; deVries Klein et al. 1980). In particular, the Shikoku Basin portion of Leg 58 was drilled in order to answer questions regarding basin formation by evaluating various spreading models, the ages of the oceanic crust, sediment evolution, and the paleoceanographic origin of the region (deVries Klein and Kobayashi 1980). Site 444 was selected for our research on dispersed ash because of its abundant volcanic ash layers (for example, in comparison to Sites 442 and 443).
Site 444 contains basaltic, andesitic, and rhyolitic ash (deVries Klein and Kobayashi 1980; Furuta and Arai 1980; and our own studies). Most tephra in other northwestern Pacific DSDP holes are rhyolitic and andesitic, which makes the basaltic nature of the Site 444 tephra geologically and tectonically unusual (White et al. 1980). In the Pleistocene and Pliocene, the ash layers are non-alkali rhyolites. As felsic ash is believed to be mainly a product of explosive volcanic eruptions, these layers could have originated from relatively large-scale eruptions in an island arc setting, most likely located westward of Site 444, with transport in the prevailing westerlies. This direction of transport is evident as grain size increases toward the west and decreases toward the east (Furuta and Arai 1980).
In the Miocene, the tephra is of non-alkali tholeiitic basaltic as well as alkali basaltic composition. These basaltic layers imply that another volcanic source was involved in the Miocene. This source could have been closer to the drilling sites than those sources of rhyolitic and andesitic tephra (possibly the Kinan Seamounts and the Shichito-Iwo Jima volcanic arc), but it could also be the distal evidence for large mafic eruptions from a more oceanic arc source. Notably, large mafic plinian eruptions with wide dispersal are not limited to arc settings alone and have also been documented in ocean island settings (e.g., the Galápagos archipelago, Schindlbeck et al. 2015). Such observations are especially important for future studies of dispersed mafic ash in marine sediments. We note, however, that at least some of these ashes may also be subaqueous in origin. For example, certain volcanic arcs near the Shikoku Basin (e.g., Izu-Bonin) contain subaqueous volcanoes (Hochstaedter et al. 2001).
Ash layers were carefully identified throughout the 310 meters of sediment at Site 444 using core photos and the shipboard lithologic description (deVries Klein and Kobayashi 1980). Site 444 was cored using the Rotary Core Barrel (RCB) system, as it was occupied during the early-mid stages of the DSDP prior to the advent of the advanced piston corer. Because of the RCB drilling, the recovered core is highly disturbed. Therefore, we classified the ash deposits as either an "ash layer" or an "ash pocket". Anything that is ash in distinct form but that is not in the shape of a full horizontal layer (i.e., one touching both sides of the core liner) is termed an "ash pocket". The pockets are often, though not always, circular in form. The depth distributions of ash layers and ash pockets correlate with each other. Thus, we interpret that the pockets are likely to have originated from ash layers mixed by RCB drilling.
A total of 128 ash layers and 76 ash pockets were identified. Of the 28 ash deposits chemically analyzed in this study, four are categorized as ash pockets. This is important to note when assessing their compositions, as any seemingly unusual compositions can be evaluated for mixing with non-ash sources (e.g., clays). The distribution of ash layers and pockets was parameterized by (1) thickness with depth, (2) number of layers per 10 meters, (3) thickest layer (cm) over successive 10 meter depth increments, and (4) total thickness of ash (cm) in that same 10 meter window (Fig. 12). Collectively, these parameters should yield a baseline understanding of the distribution of ash layers, and we acknowledge that no one of these parameters is necessarily preferable to the others. Additionally, we note that there is a relatively thick (31 cm) ash layer at a depth of 30 mbsf (~0.66 Ma, Pleistocene), which may disproportionately influence some of these parameters over that interval.
Ash layer parameters, number of ash layers, total ash thickness (cm), thickness of thickest layer (cm). (Top) Site 444. (Bottom) Site 579/581
Ash layers were analyzed for major element glass shard compositions by electron microprobe (average of 15 single shard measurements), and additionally, the bulk ash layers were analyzed by ICP-ES and ICP-MS for major, trace, and REE compositions following the methods of Scudder et al. (2014). Even though the bulk ash data may be compromised by the inclusion of terrigenous aluminosilicate material (e.g., adjacent pelagic clay mixed into the ash layer or ash pocket; Scudder et al. 2014), we can still infer other key aspects of the distribution of ash layer/pocket chemistry.
The distribution of key major elements used to differentiate compositional variation, such as SiO2 or TiO2 (Fig. 13), suggests the presence of multiple ash populations ranging from mafic to intermediate to felsic in composition. A transition in ash layer composition is observed at ~160–170 mbsf, in which the SiO2 concentration of the bulk ash layers decreases with depth while the TiO2 concentration increases with depth. At ~160–170 mbsf, the ash layers are also enriched in elements such as Sc and V, and these elemental changes are indicative of a shift to ash of intermediate to mafic composition at this depth (Fig. 13). Ternary diagrams (Fig. 14) indicate at least two populations of ash spanning a mixing line between relatively more felsic (dacite and rhyolite sources) and more mafic (basalt sources) end members.
Representative compositions and elemental ratios plotted with depth. Chinese Loess (Jahn et al. 2001), MORB (Arevalo and McDonough 2010). (Top) Site 444. (Bottom) Site 579/581
Ternary diagrams showing potential end members and bulk sediment chemistry at Site 444 (top) and Site 579/581 (bottom)
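For readers reproducing this style of plot, the sketch below shows how three-element subcompositions can be closed to 100 % and projected into ternary (x, y) coordinates to test whether samples fall along a two end-member mixing line; the element choices, scalings, and values are placeholders rather than Site 444 data.

```python
import numpy as np

def ternary_xy(a, b, c):
    """Closure to unit sum and conversion to 2-D ternary plot coordinates."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total
    x = 0.5 * (2 * b + c)         # standard equilateral-triangle mapping
    y = (np.sqrt(3) / 2) * c
    return x, y

# Hypothetical samples spanning felsic -> mafic compositions.
al2o3 = np.array([14.0, 13.5, 15.2, 16.0])
tio2 = np.array([0.2, 0.4, 0.9, 1.4]) * 10    # scaled for visibility
sc = np.array([5.0, 9.0, 20.0, 33.0]) * 0.1   # scaled for visibility

x, y = ternary_xy(al2o3, tio2, sc)
# Binary mixtures plot as a straight segment between the two closed end
# members; curvature or scatter implies additional sources.
print(np.column_stack([x, y]))
```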
Other existing data support the hypothesis of multiple ash populations. Some of the Site 444 cores possess visually black and chemically basaltic ash, which is unique within the Shikoku Basin. Refractive indices of volcanic glass and associated minerals have been documented as an effective means for correlating widespread tephra layers (Machida and Arai 1983). Based on the refractive indices of glass shards from ash layers at Site 444, Furuta and Arai (1980) (their Figure 12) show a linear relationship between refractive index and FeO content. They find that the refractivity of the transition-metal oxides (e.g., FeO, TiO2, and MnO) at this site is higher than that of SiO2 and the other oxides; thus, the refractive index appears to be driven primarily by the transition-metal oxides. The predominant transition-metal oxides in volcanic glass shards are iron-based (i.e., FeO or Fe2O3), so at Site 444 the refractive index of volcanic glass shards is determined largely by the iron-oxide content, and the high values indicate that basaltic ashes are present. At Site 444, the ash layers are divided into several groups based on petrographic characteristics, chemical composition, and age. As discussed above, Furuta and Arai (1980) identify rhyolitic and basaltic ash layers at Site 444. Multiple ash populations are also apparent in our microprobe and bulk ash layer chemistry.
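As a simple illustration of how such a refractive index–FeO relationship can be quantified, the sketch below fits a line to invented shard data; the values are not those of Furuta and Arai (1980).

```python
import numpy as np

# Illustrative least-squares fit of refractive index against FeO content.
# The shard data below are invented; real values come from microprobe
# analyses and refractometry.
feo = np.array([1.5, 3.0, 5.5, 8.0, 11.0])          # wt.% FeO in glass shards
ri = np.array([1.498, 1.505, 1.517, 1.530, 1.546])  # refractive index

slope, intercept = np.polyfit(feo, ri, 1)
print(f"RI ~ {intercept:.3f} + {slope:.4f} * FeO")  # higher FeO -> higher RI
```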
Downcore profiles of the bulk sediment do not necessarily provide a clear distinction between a single source and two-component mixing (Fig. 13). Additionally, ternary diagrams (Fig. 14) show that the bulk sediment does not necessarily lie along a simple mixing line; an upper continental component (e.g., Chinese Loess or PAAS) appears to influence the sediment in addition to the ash components discussed above. Thus, there is likely at least one additional end member beyond the multiple ash populations that influences the bulk sediment composition. This will be explored further in our future studies.
Based on the above discussion of compositional trends, QFA was performed on the data at Site 444 using a modified element suite compared to that used for Site 1149 and Site 52 (Table 1). When performing factor analysis, a balance must be maintained between the number of elements and the number of samples in the dataset. Reimann et al. (2002) outline a number of parameters by which to define the appropriate number of elements. We maintain a very conservative use of these rules, and as such, during our study of Site 444, with 71 samples in the dataset, we restricted our element menu to seven elements. Of the refractory elements chosen as representative of the aluminosilicate end members (Al, Ti, Sc, Cr, Ni, Nb, La, Th), Ni overall shows the least variability between potential sources and thus was removed from the element menu for our statistical analysis of Site 444.
Table 1 Bulk sediment elemental composition (ppm) used for QFA analysis, Site 444
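Before turning to the results, the sketch below outlines the core steps of a Q-mode factor analysis with VARIMAX rotation. It follows the general workflow of the published MATLAB scripts of Pisias et al. (2013), which remain the authoritative implementation; the data matrix here is random placeholder data shaped like the Site 444 dataset (71 samples by 7 elements).

```python
import numpy as np

# Conceptual sketch of Q-mode factor analysis (QFA) with VARIMAX rotation.
# Rows = samples, columns = elements; values are random placeholders.
rng = np.random.default_rng(0)
data = rng.random((71, 7)) + 0.1

# Q-mode: normalize each SAMPLE (row) to unit length so factors describe
# inter-sample similarity rather than absolute concentration.
rows = data / np.linalg.norm(data, axis=1, keepdims=True)

# Factor extraction via SVD of the row-normalized matrix.
U, s, Vt = np.linalg.svd(rows, full_matrices=False)
k = 3                                # number of factors retained
loadings = U[:, :k] * s[:k]          # sample loadings
scores = Vt[:k, :]                   # factor scores on the elements

variance_explained = s**2 / np.sum(s**2)
print("variance explained:", variance_explained[:k])

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Simple VARIMAX rotation of a loadings matrix."""
    p, k = L.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        LR = L @ R
        u, sv, vt = np.linalg.svd(
            L.T @ (LR**3 - (gamma / p) * LR @ np.diag(np.sum(LR**2, axis=0)))
        )
        R = u @ vt
        if np.sum(sv) - var < tol:
            break
        var = np.sum(sv)
    return L @ R

rotated = varimax(loadings)
# Squared loadings give each factor's relative contribution per sample,
# as used for the downhole profiles discussed in the text.
contrib = rotated**2 / np.sum(rotated**2, axis=1, keepdims=True)
print(contrib[:5])
```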
The results of QFA at Site 444 indicate that three factors (end members) explain 98 % of the variability of the data. This is consistent with the geochemical trends we observe in the bulk data, and the three factors explain 53, 33, and 11 % of the variability of the data, respectively (Fig. 15, left). Based on the preliminary modeled compositions of these factors, we interpret that there may be one intermediate crustal source and at least one dacitic source exhibiting higher Ti/Al ratios, although we cannot differentiate further based on QFA alone. Increasing the number of factors, we find that four factors can explain 99 % of the variability of the data, with the four factors explaining 55, 34, 9, and 1 % of the variability, respectively (Fig. 15, right). While this fourth factor is small, it predominantly splits the third factor from the three-factor QFA and identifies an end member that is dominated by variation in Nb.
Site 444, VARIMAX factor scores from QFA. (Left) Three-factor model. (Right) Four-factor model. As described in the text, Factor 1 is most likely an intermediate crustal source, Factors 2 and/or 3 are most likely dacitic source(s), and Factor 4 is a Nb-rich end member
In terms of the relative downhole distributions of square factor loadings (Fig. 16), Factor 1 dominates over the upper ~150 mbsf and exhibits a broad decrease in importance with depth. Factor 2 is most important in the sequence below ~150 mbsf. Factors 2 and 3 have roughly equal relative contributions over the upper ~50 mbsf, below which Factor 2 exhibits a broad increase while Factor 3 generally decreases. Future statistical analysis by CLS will allow us to determine whether three or four factors are ultimately the appropriate number of end members and to further constrain the sources of the sediment.
Depth profiles of square factor loadings from QFA showing the relative contribution of each factor, Site 444
Kamchatka and the Kuriles, DSDP Site 579 and Site 581
DSDP Sites 579 and 581 (Fig. 1) were occupied in June 1982 as part of DSDP Leg 86 and were cored as part of a transect across the Kuroshio Current system designed to investigate the paleoceanography of the northwest Pacific. Although spaced relatively far apart, Sites 579 and 581 comprise an excellent composite section of the region (Plank and Langmuir 1998). Site 579 marks the southern margin of the transition between the sediments of the calcareous-siliceous subtropical gyre and the more siliceous subarctic gyre. It was selected for Leg 86 with the goal of compiling a paleoceanographic record of this transition zone for the late Neogene and Quaternary and comparing this record of migrations of the subarctic front with that of Asian eolian inputs. Site 581 is the northernmost of the transect sites drilled during Leg 86 and was designed to provide a high-resolution record of the late Neogene.
As with the other sites in this region, a number of ash layer parameters were examined (Fig. 12). There are no clear trends evident in the number of ash layers through the depth of the site. There are two peaks in the number of ash layers per 5 m, with seven ash layers from 40–45 mbsf and four layers from 80–85 mbsf. The peak at 80 mbsf also corresponds to a peak in total ash thickness (44 cm), meaning that not only are more layers present at this depth but also that those layers are thicker. This suggests a period of more frequent large-scale volcanic eruptions during the Pliocene. However, this is difficult to verify, as late Pleistocene glaciations are likely to have obscured the terrestrial record of eruptive history in this area (Volynets et al. 1999). Ash pockets correlate well with the ash layers in frequency, and we interpret that they originated from core disturbance caused by heavy weather during recovery. The cores had a high occurrence of "flow-in" zones due to heavy seas that may have affected coring and disturbed recovery of some ash layers (Heath et al. 1985).
Interpretation of the bulk ash layers and bulk sediment downhole is not straightforward (Fig. 13). In ternary diagrams, the bulk sediment commonly falls along a mixing line that can be explained by the bulk ash and a relatively more basaltic component, supporting the presence of dispersed ash in the sediment (Fig. 14). Much of the bulk sediment is similar in composition to Chinese Loess, which, while not strictly required to explain the bulk composition, is almost certainly present given the location of the sites.
Applying QFA to the combined data from these sites, we find that either five or six factors best explain the variability of the data (Table 2, Fig. 17). Comparison of the two sets of factor analysis indicates that Factors 1 through 4 correlate with each other whether there are five or, alternatively, six factors. If there are six factors, Factor 5 from the smaller factor model is split between Factors 5 and 6. Factor 6 explains 1.7 % of the variability of the data in this model, and thus it is interpreted as "real".
Table 2 Bulk sediment elemental composition (ppm) used for QFA analysis, Sites 579 and 581
Sites 579 and 581, VARIMAX factor scores from QFA
Focusing first on the five-factor model, the calculated compositions of the end members identify Factor 4 (explaining 10.1 % of the variability) as a potential terrigenous source. This is unusual, as at all the other NW Pacific sites discussed in this paper the crustal source is the first and/or second end member, explaining the majority of the variability in the dataset. This speaks to the importance of dispersed ash at Site 579/581, since the dispersed ash component(s) appear to control most of the variability. This will be further explored in future work. The six-factor model identifies a few factors that are dominated by a single element (Al in Factor 4, Ti in Factor 5), indicating that the division of factors is controlled by the variability of single elements as opposed to their specific concentrations.
As with Site 444, square factor loadings for these Kamchatka/Kurile sites allow us to assess the relative contribution of the individual factors on a sample-by-sample basis (Fig. 18). To assess the relative contributions for this work, we focus here on the five-factor model. We observe that for most of the sedimentary record, Factor 1 dominates, although from ~65 to ~93 mbsf, Factor 2 is the dominant component. Additionally, from ~150 to ~240 mbsf the variance in the record is dominated by Factor 3 (~150 to ~200 mbsf) or Factor 2 (~200 to ~240 mbsf). Factors 4 and 5 are remarkably similar in importance until ~200 mbsf, below which Factor 4 increases in relative importance while Factor 5 remains fairly stable. Future statistical analysis by CLS will allow us to determine whether five or six factors are ultimately the appropriate number of end members and to further constrain the sources of the sediment.
Depth profiles of square factor loadings from QFA showing the relative contribution of each factor, Sites 579 and 581
Although the presence of dispersed ash in marine sediment has long been known to marine sedimentologists, the quantification of its abundance and composition has only recently received renewed attention. Dispersed ash, either altered or unaltered, is extremely difficult to differentiate from detrital/terrigenous/authigenic clay, as they are all, fundamentally speaking, "aluminosilicates". The combined geochemical (major, trace, REEs) and statistical (Pisias et al. 2013) approach provides the chemical context for determining provenance and for resolving the different aluminosilicate components based on their individual geochemical signature(s).
The case studies we present here show that this approach can be useful in multiple different arc systems. We have also shown that by adding the dispersed ash component to the study of discrete ash layers, many additional geological and oceanographic interpretations can be uncovered. Most obviously, we have shown how geochemical modeling of the bulk sediment composition can indicate the presence of an additional ash source that is not recorded in the discrete layer record. Furthermore, in cores where there are no remaining discrete layers (either due to natural bioturbation or man-made coring disturbance), we show that important ash contributions may still be recognized and studied for their composition and accumulation. We have also shown how the accumulation of dispersed ash may best correlate with either the thickest ash layer or, alternatively, the frequency of the discrete ash layers, a distinction that leads to dramatically different interpretations regarding the volcanic mechanisms of a given arc. Finally, when considering these studies throughout the northwest Pacific Ocean, or in marginal basins such as the Japan Sea, our approach may be used to develop a mass balance inventory of the total mass of volcanic ash that is more accurate than estimates based solely on the combined thicknesses of the discrete layers.
There are many other future research directions that are a natural extension of the work described above. There are a number of regions for which characterization of the abundance and extent of dispersed ash would be of great scientific and societal importance (e.g., Tonga, Costa Rica, and Indonesia), and studying the dispersed ash component in these regions will most certainly lead to an improved understanding of explosive volcanism in those systems. Moreover, most marine volcanoes lie beneath the sea surface, either in mid-ocean ridge settings or in convergent margins (e.g., Fiske et al. 2001; White et al. 2003). The vast majority of these eruptions go unobserved, and their products are often unrecovered and thus understudied. Often the only evidence of subaqueous eruptions is widespread pumice rafts. Little is known, however, about the fate of these rafts, yet they too are likely to contribute dispersed ash to the sediment.
Volcanic ash may have significant nutrient fertilization potential as well. Particulate matter from large volcanic eruptions may release enough iron and other nutrients to the surface ocean to stimulate primary production over short time scales (Watson 1997; Olgun et al. 2011). Thus, it would be important, first, to develop an understanding of the effects of iron and other elements from volcanic ash on the biological productivity of the open ocean and, second, to consider the biological effects of volcanic ash-driven elemental fertilization in the open ocean through Earth's history. The dispersed ash record can be applied to achieve these goals.
Additionally, ash, whose major component is volcanic glass, is a metastable phase with high potential biological availability to the subseafloor ecosystem and may provide both electron donors and electron acceptors important to microbial energetics. Characterizing the chemistry of the subseafloor sediment will provide vital information about the nutrients available to subseafloor microbial life as well as the pathways by which these nutrients are utilized.
BNN:
Izu-Bonin Boninite
CL:
Chinese Loess
CLS:
Constrained Least Squares
CRISP II:
Costa Rica Seismogenesis Project Stage II
DSDP:
Deep Sea Drilling Project
HR:
Honshu Rhyolite
IBA:
Izu-Bonin Andesite
IBFA:
Izu-Bonin Front Arc
IBR:
Izu-Bonin Rhyolite
ICP-ES:
Inductively Coupled Plasma-Emission Spectrometer
ICP-MS:
Inductively Coupled Plasma-Mass Spectrometer
LRC:
lower red clay
MAR:
mass accumulation rate
MORB:
Mid-Ocean Ridge Basalt
PAAS:
Post-Archean average Australian Shale
PGL:
pale green laminae
QFA:
Q-Mode Factor Analysis
RCB:
Rotary Core Barrel
REE:
rare earth element
TI:
Total Inversion
URC:
upper red clay
Arevalo R, McDonough WF (2010) Chemical variations and regional diversity observed in MORB. Chem Geol 271(1):70–85
Bailey JC (1993) Geochemical history of sediments in the northwestern Pacific Ocean. Geochem J 27:71–90
Bailey JC (1996) Role of subducted sediments in the genesis of Kurile-Kamchatka island arc basalts: Sr isotopic and elemental evidence. Geochem J 30(5):289–321
Bryan SE, Cook A, Evans JP, Colls PW, Wells MG, Lawrence MG et al (2004) Pumice rafting and faunal dispersion during 2001-2002 in the Southwest Pacific: record of a dacitic submarine explosive eruption from Tonga. Earth Planet Sci Lett 227(1-2):135–154
Bryant CJ, Arculus RJ, Eggins SM (1999) Laser ablation-inductively coupled plasma-mass spectrometry and tephras: a new approach to understanding arc-magma genesis. Geology 27(12):1119–1122
Bryant CJ, Arculus RJ, Eggins SM (2003) The geochemical evolution of the Izu–Bonin arc system: a perspective from tephras recovered by deep-sea drilling. Geochem Geophys Geosyst 4(11):1094. doi:10.1029/2002GC000427
Bursik M (2001) Effect of wind on the rise height of volcanic plumes. Geophys Res Lett 28(18):3621–3624
Cambray H, Cadet JP, Pouclet A (1993) Ash layers in deep-sea sediments as tracers of arc volcanic activity: Japan and Central America as case studies. Island Arc 2:72–86
Cambray H, Pubellier M, Jolivet L, Pouclet A (1995) Volcanic activity recorded in deep-sea sediments and the geodynamic evolution of western Pacific island arcs. In: Taylor B, Natland J (eds) Active Margins and Marginal Basins of the Western Pacific. Geophys Monogr Ser 88. AGU, Washington DC, pp 97–124
Carey SN (1997) Influence of convective sedimentation on the formation of widespread tephra fall layers in the deep sea. Geology 25(9):839–842
Carey SN, Sigurdsson H (1978) Deep-sea evidence for distribution of tephra from the mixed magma eruption of the Soufriére on St Vincent 1902: ash turbidites and airfall. Geology 6:271–274
Carey SN, Sigurdsson H (1980) The Roseau ash: Deep-sea tephra deposits from a major eruption on Dominica, Lesser Antilles arc. J Volcanol Geoth Res 7:67–86
Carey SN, Sigurdsson H (2000) Grain size of Miocene volcanic ash layers from Sites 998, 999, and 1000: implications for source areas and dispersal. Proc Ocean Drill Program Sci Results 165:101–110
Carey SN, Sparks RSJ (1986) Quantitative models of fallout and dispersal of tephra from volcanic eruption columns. Bull Volcanol 48:109–125
Carey RJ, Houghton BF, Thordarson T (2010) Tephra dispersal and eruption dynamics of wet and dry phases of the 1875 eruption of Askja Volcano, Iceland. Bull Volcanol 72(3):259–278
Chan LH, Kastner M (2000) Lithium isotopic compositions of pore fluids and sediments in the Costa Rica subduction zone: implications for fluid processes and sediment contribution to the arc volcanoes. Earth Planet Sci Lett 183(1):275–290
deVries Klein G, Kobayashi K (1980) Initial Reports of the Deep Sea Drilling Project v. 58. US Government Printing Office, Washington DC, p 1022
deVries Klein G, McConville RL, Harris JM, Steffensen CK (1980) Petrology and diagenesis of sandstones, Deep Sea Drilling Project Site 445, Daito Ridge. Init Rep DSDP 58:609–616. doi:10.2973/dsdp.proc.58.112.1980
Duggen S, Olgun N, Croot P, Hoffmann LJ, Dietze H, Delmelle P et al (2010) The role of airborne volcanic ash for the surface ocean biogeochemical iron-cycle: a review. Biogeosci 7:827–844. doi:10.5194/bg-7-827-2010
Dunlea AG, Murray RW, Sauvage J., Pockalny RA, Spivack AJ, Harris RN, et al. Cobalt-based age models of pelagic clay in the South Pacific Gyre. Geochem Geophys Geosys. 2015a;16. doi:10.1002/2015GC005892.
Dunlea AG, Murray RW, Sauvage J, Pockalny RA, Spivack AJ, Harris RN, et al. Dust, volcanic ash, and the evolution of the South Pacific Gyre through the Cenozoic. Paleocean. 2015b; 30. doi:10.1002/2015PA00282.
Dymond J (1981) Geochemistry of Nazca plate surface sediments: An evaluation of hydrothermal biogenic detrital and hydrogenous sources. Geol Soc Am Memoirs 154:133–174
Expedition 340 Scientists (2012) Lesser Antilles volcanism and landslides: implications for hazard assessment and long-term magmatic evolution of the arc. Prelim Rep Integrated Ocean Drill Program 340. doi:10.2204/iodp.pr.340.2012
Fischer AG et al (1971) Site 52. Initial Rep Deep Sea Drill Proj 6:247–290. doi:10.2973/dsdp.proc.6.110.1971
Fisher RV, Schmincke H-U (1984) Pyroclastic Rocks. Springer-Verlag, New York, p 472
Fiske RS, Naka J, Iizasa K, Yuasa M, Klaus A (2001) Submarine silicic caldera at the front of the Izu-Bonin arc Japan: voluminous seafloor eruptions of rhyolite pumice. Geol Soc Amer Bull 113:813–824
Freundt A, Grevemeyer I, Rabbel W, Hansteen TH, Hensen C, Wehrmann H et al (2014) Volatile (H2O, CO2, Cl, S) budget of the Central American subduction zone. Int J Earth Sci 103(7):2101–2127. doi:10.1007/s00531-014-1001-1
Frogner P, Gíslason SR, Óskarsson N (2001) Fertilizing potential of volcanic ash in ocean surface water. Geology 29:487–490
Furuta T, Arai F (1980) Petrographic and geochemical properties of tephras in Deep Sea Drilling Project cores from the north Philippine Sea. Init Rep DSDP 58:617–627. doi:10.2973/dsdp.proc.58.113.1980
Gardner JV, Nelson CS, Baker PA (1986) Distribution and character of pale-green laminae in sediment from Lord Howe Rise: a probable late Neogene and Quaternary tephrostratigraphic record. In: Blakeslee JH (ed) Initial Reports of the DSDP 90, Washington, pp 1145–1158
Hauff F, Hoernle K, Schmidt A (2003) Sr-Nd-Pb composition of Mesozoic Pacific oceanic crust (Site 1149 and 801 ODP Leg 185): Implications for alteration of ocean crust and the input into the Izu-Bonin-Mariana subduction system. Geochem Geophys Geosyst 4(8):8913. doi:10.1029/2002GC000421
Heath GR, Kovar RB, Lopez C, Campi GL (1985) Elemental composition of Cenozoic pelagic clays from Deep Sea Drilling Project Sites 576 and 578, western North Pacific. Init Rep DSDP 86:605–648. doi:10.2973/dsdp.proc.86.127.1985
Hein JR, Scholl DW, Barron JA, Jones MG, Miller J (1978) Diagenesis of late Cenozoic diatomaceous deposits and formation of the bottom simulating reflector in the southern Bering Sea. Sedimentol 25(2):155–181
Henry P, Kanamatsu T, Moe K, and the Expedition 333 Scientists (2012) Proceedings IODP 333. Integrated Ocean Drilling Program Management International Inc, Tokyo/Washington DC. doi:10.2204/iodp.proc.333.2012
Hochstaedter A, Gill J, Peters R, Broughton P, Holden P, Taylor B (2001) Across-arc geochemical trends in the Izu-Bonin arc: contributions from the subducting slab. Geochem Geophys Geosyst 2(7). doi:10.1029/2000GC000105
Hovan SA, Rea DK, Pisias NG (1991) Late Pleistocene continental climate and oceanic variability recorded in northwest Pacific sediments. Paleoceanography 6(3):349–370
Huang TC (1980) A volcanic sedimentation model: Implications of processes and responses of deep-sea ashes. Mar Geol 38:103–122
Huang TC, Watkins ND, Shaw DM, Kennett JP (1973) Atmospherically transported volcanic dust in South Pacific deep sea sedimentary cores at distances over 3000 km from the eruptive source. Earth Planet Sci Lett 20(1):119–124
Huang TC, Fillon RH, Watkins ND, Shaw DM (1974) Volcanism and silicious microfaunal diversity in the southwest Pacific during the Pleistocene period. Deep-Sea Res 21:377–384
Huang TC, Watkins ND, Shaw DM (1975) Atmospherically transported volcanic glass in deep-sea sediments: Volcanism in sub-antarctic latitudes of the south Pacific during late Pliocene and pleistocene time. Bull Geol Soc Am 86(9):1305–1315
Hüpers A, Ikari MJ, Dugan B, Underwood MB, Kopf AJ (2015) Origin of a zone of anomalously high porosity in the subduction inputs to Nankai Trough. Mar Geol 361:47–162
Hyeong K, Park S, Yoo CM, Kim K (2005) Mineralogical and geochemical compositions of the eolian dust from the northeast equatorial Pacific and their implications on paleolocation of the Intertropical Convergence Zone. Paleoceanography 20. doi:10.1029/2004PA001053
Jahn BM, Gallet S, Han JM (2001) Geochemistry of the Xining Xifeng and Jixian sections Loess Plateau of China: eolian dust provenance and paleosol evolution during the last 140 ka. Chem Geol 178(1–4):71–94
Jones MT, Gíslason SR (2008) Rapid releases of metal salts and nutrients following the deposition of volcanic ash into aqueous environments. Geochim Cosmochim Acta 72:3661–3680. doi:10.1016/j.gca.2008.05.030
Jordan BR, Sigudsson H, Carey SN, Rogers R, Ehrenborg J. Geochemical correlation of Caribbean Sea tephra layers with ignimbrites in Central America. In: Siebe C, Macías JL, Aguirre-Díaz GJ (eds) Neogene-Quaternary Continental Margin Volcanism: A Perspective from México 2006. p. 175-208
Kastner M, Elderfield H, Martin JB (1991) Fluids in convergent margins: what do we know about their composition origin role in diagenesis and importance for oceanic chemical fluxes? Philos Trans R Soc Lond A 335(1638):243–259
Kennett JP, McBirney AR, Thunell RC (1977) Episodes of Cenozoic volcanism in the circum-Pacific region. J Volc Geo Res 2:145–163
Klinkhammer GP, Elderfield H, Edmond JM, Mitra A (1994) Geochemical implications of rare earth element patterns in hydrothermal fluids from mid-ocean ridges. Geochim Cosmochim Acta 58(23):5105–5113
Kryc KA, Murray RW, Murray DW (2003) Al-to-oxide and Ti-to-organic linkages in biogenic sediment: relationships to paleo-export production and bulk Al/Ti. Earth Planet Sci Lett 211(1):125–141
Kutterolf S, Freundt A, Peréz W, Mörz T, Schacht U, Wehrmann H, et al. The Pacific offshore record of Plinian arc volcanism in Central America part 1: Along-arc correlations. Geochem Geophys Geosyst. 2008a;9(2). doi:10.1029/2007GC001631.
Kutterolf S, Freundt A, Peréz W. The Pacific offshore record of Plinian arc volcanism in Central America part 2: Tephra volumes and erupted masses. Geochem Geophys Geosys. 2008b;9(2). doi:10.1029/2007GC001791.
Kutterolf S, Freundt A, Schacht U, Bürk D, Harders R, Mörz T, et al. The Pacific offshore record of Plinian arc volcanism in Central America part 3: Application to forearc geology. Geochem Geophys Geosys. 2008c;9(2). doi:10.1029/2007GC001826.
Kutterolf S, Jegen M, Mitrovica JX, Kwasnitschka T, Freundt A, Huybers P (2013) A detection of Milankovitch frequencies in global volcanic activity. Geology 41(2):227–230. doi:10.1130/G33419.1
Kutterolf S, Schindlbeck JC, Scudder RP, Murray RW, Pickering KT, Freundt A et al (2014) Large volume submarine ignimbrites in the Shikoku Basin: An example for explosive volcanism in the Western Pacific during the Late Miocene. Geochem Geophys Geosys 15(5):1837–1851
Kyte FT, Leinen M, Heath GR, Zhou L (1993) Cenozoic sedimentation history of the central North Pacific: Inferences from the elemental geochemistry of core LL44-GPC3. Geochim Cosmochim Acta 57:1719–1749
Le Friant A, Ishizuka O, Stroncik NA and the Expedition 340 Scientists (2013) Proceedings of Integrated Ocean Drilling Program vol 340, Tokyo. doi:10.2204/iodp.proc.340.104.2013
Ledbetter MT, Sparks RSJ (1979) Duration of large-magnitude explosive eruptions deduced from graded bedding in deep-sea ash layers. Geology 7:240–244
Lee J, Stern RJ, Bloomer SH (1995) Forty million years of magmatic evolution in the Mariana arc: the tephra glass record. J Geophys Res 100(B9):17671–17687
Leinen M (1987) The origin of paleochemical signatures in North Pacific pelagic clays: Partitioning experiments. Geochim Cosmochim Acta 51(2):305–319
Leinen M, Pisias N (1984) An objective technique for determining end-member compositions and for partitioning sediments according to their sources. Geochim Cosmochim Acta 48(1):47–62
Lowe DJ (2011) Tephrochronology and its application: A review. Quat Geochronol 6:107–153. doi:10.1016/j.quageo.2010.08.003
Machida H, Arai F (1983) Extensive ash falls in and around the Sea of Japan from large late Quaternary eruptions. J Volc Geotherm Res 18(1):151–164
Mackenzie FT, Garrels RM (1966) Chemical mass balance between rivers and oceans. Am J Sci 264(7):507–525
Mahony SH, Wallace LM, Miyoshi M, Villamor P, Sparks RSJ, Hasenaka T (2011) Volcano-tectonic interactions during rapid plate-boundary evolution in the Kyushu region SW Japan. Geol Soc Am Bull 123:2201–2223
Maicher D, White JDL (2001) The formation of deep-sea Limu o Pele. Bull Volcanol 63:482–496
Martinez NC, Murray RW, Thunell RC, Peterson LC, Muller-Karger F, Astor Y et al (2007) Modern climate forcing of terrigenous deposition in the tropics (Cariaco Basin Venezuela). Earth Planet Sci Lett 264(3):438–451
Martinez NC, Murray RW, Dickens GR, Kölling M. Discrimination of sources of terrigenous sediment deposited in the central Arctic Ocean through the Cenozoic. Paleoceanography. 2009;24(1). doi:10.1029/2007PA001567.
Martinez NC, Murray RW, Thunell RC, Peterson LC, Muller-Karger F, Lorenzoni L et al (2010) Local and regional geochemical signatures of surface sediments from the Cariaco Basin and Orinoco Delta Venezuela. Geology 38(2):159–162
Metzner D, Kutterolf S, Toohey M, Timmreck C, Niemeier U, Freundt A et al (2014) Radiative forcing and climate impact resulting from SO2 injections based on a 200,000 year record of Plinian eruptions along the Central American Volcanic Arc. Int J Earth Sci 103(7):2063–2079. doi:10.1007/s00531-012-0814-z
Murray RW, Leinen M (1993) Chemical transport to the seafloor of the equatorial Pacific Ocean across a latitudinal transect at 135 W: tracking sedimentary major trace and rare earth element fluxes at the Equator and the Intertropical Convergence Zone. Geochim Cosmochim Acta 57(17):4141–4163
Murray RW, Leinen M (1996) Scavenged excess aluminum and its relationship to bulk titanium in biogenic sediment from the central equatorial Pacific Ocean. Geochim Cosmochim Acta 60(20):3869–3878
Murray RW, Leinen M, Isern A (1993) Biogenic flux of Al to sediment in the central equatorial Pacific Ocean: Evidence for increased productivity during glacial periods. Paleoceanography 8(5):651–670
Murray RW, Leinen M, Murray DW, Mix AC, Knowlton CW (1995) Terrigenous Fe input and biogenic sedimentation in the glacial and interglacial equatorial Pacific Ocean. Global Biogeochem Cycles 9(4):667–684
Murray RW, Knowlton C, Leinen M, Mix AC, Polsky CH (2000) Export production and carbonate dissolution in the central equatorial Pacific Ocean over the past 1 Myr. Paleoceanography 15(6):570–592
Ninkovich D, Sparks RSJ, Ledbetter MT (1978) The exceptional magnitude and intensity of the Toba eruption Sumatra: An example of the use of deep-sea tephra layers as a geological tool. Bull Volcanol 41:286–298
Olgun N, Duggen S, Croot PL, Delmelle P, Dietze H, Schacht U, et al (2011) Surface ocean iron fertilization: the role of airborne volcanic ash from subduction zone and hotspot volcanoes and related iron fluxes into the Pacific Ocean. Global Biogeochem Cycles 25. doi:10.1029/2009GB003761
Peacock SA (1990) Fluid processes in subduction zones. Science 248(4953):329–337
Peters JL, Murray RW, Sparks J, Coleman DS (2000) Terrigenous matter and dispersed ash in sediment from the Caribbean Sea: results from Leg 165. Proc Ocean Drill Program Sci Results 165:115–124. doi:10.2973/odp.proc.sr.165.003.2000
Pickering KT, Underwood MB, Saito S, Naruse H, Kutterolf S, Scudder RP et al (2013) Depositional architecture provenance and tectonic/eustatic modulation of Miocene submarine fans in the Shikoku Basin: Results from Nankai Seismogenic Zone experiment. Geochem Geophys Geosys 14:6. doi:10.1002/ggge.20107
Piepgras DJ, Jacobsen SB (1992) The behavior of rare earth elements in seawater: Precise determination of variations in the North Pacific water column. Geochim Cosmochim Acta 56(5):1851–1862
Pisias NG, Murray RW, Scudder RP (2013) Multivariate statistical analysis and partitioning of sedimentary geochemical data sets: general principles and specific MATLAB scripts. Geochem Geophys Geosyst 14(10):4015–4020. doi:10.1002/ggge.20247
Plank T (2005) Constraints from Thorium/Lanthanum on sediment recycling at subduction zones and the evolution of continents. J Petrol 46:921–944
Plank T, Langmuir CH (1998) The chemical composition of subducting sediment and its consequences for the crust and mantle. Chem Geol 145(3–4):325–394
Plank T, Ludden JN, Escutia C, et al (2000) Proc Ocean Drill Program Init Repts 185:1–190. doi:10.2973/odp.proc.ir.185.2000
Plank T, Kelley KA, Murray RW, Stern LQ (2007) Chemical composition of sediments subducting at the Izu–Bonin trench. Geochem Geophys Geosyst 8:Q04I16. doi:10.1029/2006GC001444
Presti M, Michalopoulos P (2008) Estimating the contribution of the authigenic mineral component to the long-term reactive silica accumulation on the western shelf of the Mississippi River Delta. Cont Shelf Res 28(6):823–838
Rea DK (1994) The palcoclimatic record provided by eolian deposition in the deep sea: The geologic history of wind. Rev Geophys 32:159–195
Rea DK, Leinen M (1988) Asian aridity and the zonal westerlies: Late Pleistocene and Holocene record of eolian deposition in the northwest Pacific Ocean. Palaeogeog Palaeoclimatol Palaeoecol 66(1-2):1–8
Rea DK, Snoeckx H, Joseph LH (1998) Late Cenozoic eolian deposition in the North Pacific: Asian drying Tibetan uplift and cooling of the northern hemisphere. Paleoceanography 13(3):215–224
Reid P, Carey SN, Ross DR (1996) Late quaternary sedimentation in the Lesser Antilles island arc. Geol Soc Am Bull 108:78–100
Reimann C, Filzmoser P, Garrett RG (2002) Factor analysis applied to regional geochemical data: problems and possibilities. App Geochem 7(3):185–206
Risso C, Scasso RA, Aparicio A (2002) Presence of large pumice blocks on Tierra del Fuego and South Shetland Islands shorelines from 1962 South Sandwich Islands eruption. Mar Geol 186:413–422
Robock A, Ammann CM, Oman L, Shindell D, Levis S, Stenchikov G (2009) Did the Toba volcanic eruption of ~74 ka BP produce widespread glaciation? J Geophys Res: Atmospheres 114(D10). doi:10.1029/2008JD011652
Rose WI, Riley CM, Dartevelle S (2003) Sizes and shapes of 10-Ma distal fall pyroclasts in the Ogallala Group Nebraska. J Geol 111(1):115–124
Ryan WBF, Carbotte SM, Coplan JO, O'Hara S, Melkonian A, Arko R, et al. Global Multi-Resolution Topography synthesis. Geochem Geophys Geosys. 2009;10(3). doi:10.1029/2008GC002332.
Saffer DM, Underwood MB, McKiernan AW (2008) Evaluation of factors controlling smectite transformation and fluid production in subduction zones: Application to the Nankai Trough. Island Arc 17(2):208–230
Saffer DM, Lockner DA, McKiernan A (2012) Effects of smectite to illite transformation on the frictional strength and sliding stability of intact marine mudstones. Geophys Res Lett 39:L11304. doi:10.1029/2012GL051761
Saito S, Underwood MB, Kubo Y, and the Expedition 322 Scientists (2010) Proceedings IODP 322. Integrated Ocean Drilling Program Management International Inc, Tokyo/Washington DC. doi:10.2204/iodp.proc.322.104.2010
Schacht U, Wallmann K, Kutterolf S, Schmidt M (2008) Volcanogenic sediment–seawater interactions and the geochemistry of pore waters. Chem Geol 249(3–4):321
Schindlbeck JC, Kutterolf S, Freundt A, Scudder RP, Pickering KT, Murray RW (2013) Emplacement processes of submarine volcaniclastic deposits (IODP Site C0011 Nankai Trough). Mar Geol 343:115–124. doi:10.1016/j.margeo.2013.06.017
Schindlbeck JC, Kutterolf S, Freundt A, Straub SM, Wang K-L, Jegen M, et al (2015) The Miocene Galápagos ash layer record of IODP Legs 334 and 344: ocean-island explosive volcanism during plume-ridge interaction. Geology. doi:10.1130/G36645.1
Scudder RP, Murray RW, Plank T (2009) Dispersed ash in deeply buried sediment from the northwest Pacific Ocean: An example from the Izu-Bonin arc (ODP Site 1149). Earth Planet Sci Lett 284(3–4):639–648
Scudder RP, Murray RW, Schindlbeck JC, Kutterolf S, Hauff F, McKinley CC (2014) Regional-scale input of dispersed and discrete volcanic ash to the Izu-Bonin and Mariana subduction zones. Geochem Geophys Geosys 15(11):4369–4379
Severmann S, Johnson CM, Beard BL, German CR, Edmonds HN, Chiba H et al (2004) The effect of plume processes on the Fe isotope composition of hydrothermally derived Fe in the deep ocean as inferred from the Rainbow vent site Mid-Atlantic Ridge 36 14′ N. Earth Planet Sci Lett 225(1):63–76
Shaw DM, Watkins ND, Huang TC (1974) Atmospherically transported volcanic glass in deep-sea sediments: Theoretical considerations. J Geophysi Res 79(21):3087–3094
Sibuet JC, Letouzey J, Barbier F, Charvet J, Foucher JP, Hilde TW et al (1987) Back arc extension in the Okinawa Trough. J Geophys Res: Solid Earth (1978–2012) 92(B13):14041–14063
Sigurdsson H (1999) Volcanic episodes and rates of volcanism. In: Sigurdsson H, Houghten HB, McNutt SR, Ryme H, Stix J (eds) Encyclopedia of Volcanoes. Academic Press, New York, pp 271–279
Sigurdsson H, Sparks RSJ, Carey SN, Huang TC (1980) Volcanogenic sedimentation in the Lesser Antilles Arc. J Geol 88:523
Sigurdsson H, Leckie RM, Acton GD et al (1997) Proceedings of the Ocean Drilling Program: Initial reports Volume 165. Ocean Drilling Program, College Station Texas, p 865
Sigurdsson H, Kelley RM, Carey S, Bralower T, King J (2000) History of circum-Caribbean explosive volcanism: 40Ar/39Ar dating of tephra layers. Proc ODP Sci Results 165:299–314
Simkin T, Fiske RS (1983) Krakatau 1883: The Volcanic Eruption and its Effects. Smithsonian Institution Press, Washington DC, p 464
Stern RJ, Kohut EJ, Bloomer SH, Leybourne M, Fouch M, Vervoort J (2006) Subduction factory processes beneath the Guguan cross-chain, Mariana Arc: no role for sediments, are serpentinites important? Contrib Mineral Petrol 151:202–221
Straub SM (1997) Multiple sources of Quaternary tephra layers in the Mariana Trough. J Volcanol Geotherm Res 76(3–4):251–276
Straub SM. The evolution of the Izu Bonin–Mariana volcanic arcs (NW Pacific) in terms of major element chemistry. Geochem Geophys Geosyst. 2003;4. doi:10.1029/2002GC000357.
Straub SM (2008) Timescales and causes of secular change in the Izu Bonin–Mariana volcanic arcs. Geochim Cosmochim Acta 72(12):A905 (Goldschmidt Conference, Vancouver, Canada)
Straub SM, Layne GD (2002) The systematics of boron isotopes in Izu arc front volcanic rocks. Earth Planet Sci Lett 198(1–2):25–39
Straub SM, Layne GD (2003b) The systematics of chlorine fluorine and water in Izu arc front volcanic rocks: Implications for volatile recycling in subduction zones. Geochim Cosmochim Acta 67(21):4179–4203
Straub SM, Layne GD. Decoupling of fluids and fluid-mobile elements during shallow subduction: evidence from halogen-rich andesite melt inclusions from the Izu arc volcanic front. Geochem Geophys Geosyst. 2003a;4. doi:10.1029/2002GC000349.
Straub SM, Schmincke HU (1998) Evaluating the tephra input into Pacific Ocean sediments: distribution in space and time. Geol Rundsch 87(3):461–476
Straub SM, Layne GD, Schmidt A, Langmuir CH (2004) Volcanic glasses at the Izu arc volcanic front: new perspectives on fluid and sediment melt recycling in subduction zones. Geochem Geophys Geosyst 5:Q01007. doi:10.1029/2002GC000408
Straub SM, Goldstein SL, Class C, Schmidt A (2009) Mid-ocean-ridge basalt of Indian type in the northwest Pacific Ocean basin. Nat Geosci 2(4):286–289
Straub SM, Goldstein SL, Class C, Schmidt A, Gomez-Tuena A (2010) Slab and Mantle Controls on the Sr-Nd-Pb-Hf Isotope Evolution of the Post 42Ma Izu-Bonin-Volcanic Arc. J Petrol 51(5):993–1026
Straub, SM, Woodhead, JD, Arculus, RJ. Temporal Evolution of the Mariana Arc: Mantle Wedge and Subducted Slab Controls Revealed with a Tephra Perspective. J Petrol. 2015. doi:10.1093/petrology/egv005.
Taira A (2001) Tectonic evolution of the Japanese island arc system. Annu Rev Earth Pl Sc 29(1):109–134
Tamura Y, Gill J, Tollstrup D, Kawabata H, Shukuno H, Chang Q et al (2009) Silicic Magmas in the Izu-Bonin Oceanic Arc and Implications for Crustal Evolution. J Petrol 50(4):685–723
Tani K, Fiske RS, Tamura Y, Kido Y, Naka J, Shukuno H et al (2008) Sumisu volcano Izu-Bonin arc Japan: site of a silicic caldera-forming eruption from a small open-ocean island. Bull Volcanol 70:547–562
Taylor SR, McLennan SM (1985) The Continental Crust: Its Composition And Evolution. Blackwell, Malden
Tobin HJ, Kinoshita M (2006) NanTroSEIZE: the IODP Nankai Trough seismogenic zone experiment. Sci Drill 2(2):23–27
Underwood MB, Pickering KT (1996) Clay-mineral provenance sediment dispersal patterns and mudrock diagenesis in the Nankai accretionary prism southwest Japan. Clays Clay Miner 44(3):339–356
Völker D, Kutterolf S, Wehrmann H (2011) Comparative mass balance of volcanic edifices at the Southern Volcanic Zone of the Andes between 33°S and 46°S. J Volcanol Geotherm Res 205:114–129. doi:10.1016/j.jvolgeores.2011.03.011
Völker D, Wehrmann H, Kutterolf S, Iyer K, Geersen J, Rabbel W et al (2014) Constraining input and output fluxes of the southern Central Chile Subduction Zone: water chlorine sulfur. Int J Earth Sci 103(7):2129–2153. doi:10.1007/s00531-014-1002-0
Volynets ON, Ponomareva VV, Braitseva OA, Melekestsev IV, Chen CH (1999) Holocene eruptive history of Ksudach volcanic massif South Kamchatka: evolution of a large magmatic chamber. J Volc Geotherm Res 91(1):23–42
Watson AJ (1997) Volcanic Fe CO2 ocean productivity and climate. Nature 385:587–588. doi:10.1038/385587b0
White SM, Chamley H, Curtis DM, de Vries Klein G, Mizuno A (1980) Sediment synthesis: Deep Sea Drilling Project Leg 58 Philippine Sea. Init Rep DSDP 58:963–1014. doi:10.2973/dsdpproc581451980
White JDL, Smellie JL, Clague DA (2003) Introduction: a deductive outline and topical overview of subaqueous explosive volcanism. In: White JDL, Smellie JL, Clague DA (eds) Explosive Subaqueous Volcanism. Geophys Monogr Ser, American Geophysical Union, Washington DC, pp 1–23
Woods AW, Wohletz KH (1991) Dimensions and dynamics of co-ignimbrite eruption clouds. Nature 350:225–227
Zhou L, Kyte FT (1992) Sedimentation history of the South Pacific pelagic clay province over the last 85 million years inferred from the geochemistry of Deep Sea Drilling Project Hole 596. Paleoceanography 7(4):441–465
Ziegler CL, Murray RW. Geochemical evolution of the central Pacific Ocean over the past 56 Myr. Paleoceanography. 2007;22(2). doi:10.1029/2006PA001321.
Ziegler CL, Murray RW, Hovan SA, Rea DK (2007) Resolving eolian volcanogenic and authigenic components in pelagic sediment from the Pacific Ocean. Earth Planet Sci Lett 254(3):416–432
Ziegler CL, Murray RW, Plank T, Hemming SR (2008) Sources of Fe to the Equatorial Pacific Ocean from the Holocene to Miocene. Earth Planet Sci Lett 270:258–270
This research used samples and/or data provided by the IODP. RPS and RWM thank the US Science Support Program (USSSP) of IODP, NSF OCE-0136855, and NSF OCE-0958002 for financial support, and T. Ireland, A. G. Dunlea, N. Murphy, and J. W. Sparks for laboratory assistance. JCS and SK thank the German Research Foundation for financial support (KU-2685/1-1, 2-1&2) and M. Thöner, K. Strehlow for laboratory assistance. Portions of this material are based upon work supported while RWM was serving at the National Science Foundation. Thanks to those who reviewed and commented on the manuscript.
Department of Earth & Environment, Boston University, Boston, MA, 02215, USA
Rachel P. Scudder & Richard W. Murray
GEOMAR Helmholtz Centre for Ocean Research Kiel, 24148, Kiel, Germany
Julie C. Schindlbeck, Steffen Kutterolf & Folkmar Hauff
Department of Earth and Environmental Science, New Mexico Institute of Mining and Technology, Socorro, NM, 87801, USA
Michael B. Underwood
Department of Geosciences, Princeton University, Princeton, NJ, 08544, USA
Samantha Gwizd
Division of Earth and Ocean Sciences, Nicholas School of the Environment, Duke University, Durham, NC, 27708, USA
Rebecca Lauzon
Department of Oceanography, Texas A&M University, College Station, TX, 77843, USA
Claire C. McKinley
Correspondence to Rachel P. Scudder.
RPS is the lead author of the paper, performed the ICP-ES and ICP-MS analyses, and led the research and multiple conversations/interactions with the research team. RWM funded the research (see Acknowledgements), performed the oversight of the project, advised then-Ph.D. student RPS and then-undergraduate students SG, RL, and CCM, facilitated the discussion with co-authors and community, and contributed the text. JCS and SK performed microprobe analyses of volcanic glass from ash layers, discussed the research, contributed revisions of the text, compiled the reference list, created Figs. 1, 2, and 4, and provided the interpretation of the data. SK funded the research (see Acknowledgements) and advised Ph.D. student JCS regarding ash chemistry. FH reviewed and revised the manuscript. MBU discussed the research, contributed the revisions of the text, and provided the interpretation of the data. SG, RL, and CCM assisted in the ICP-ES and ICP-MS analyses, were involved in the preliminary statistics and interpretations of Sites 52, 444, and 579/581, and reviewed and commented on the manuscript. All authors read and approved the final manuscript.
Scudder, R.P., Murray, R.W., Schindlbeck, J.C. et al. Geochemical approaches to the quantification of dispersed volcanic ash in marine sediment. Prog. in Earth and Planet. Sci. 3, 1 (2016) doi:10.1186/s40645-015-0077-y
Dispersed ash
Equatorial Pacific Ocean
Northwest Pacific Ocean
Ash layers
DSDP
Land-Ocean Linkages under the Influence of the Asian Monsoon
Latin numbers
Learning the Latin numbers is important because their structure is used in everyday conversation; the more you master them, the closer you come to mastering Latin. But first we need to know what role numbers play in the structure of Latin grammar.
Roman numerals are a number system developed in ancient Rome in which letters represent numbers. The modern use of Roman numerals involves the letters I, V, X, L, C, D, and M.
The nominative and accusative forms are the same as for normal adjectives. However, these numbers differ in the dative (covered later) and genitive cases: in the genitive, instead of the -i, -ae, -i endings in the singular, all of the singular genitive forms end in -ius.
You may remember from the Adjectives I session that adjectives must match their respective noun in case, number, and gender. Since cardinal numbers are essentially quantitative adjectives, the same rule applies. However, most numbers do not decline; in other words, for most numbers the form given above is the same regardless of case, number, or gender! Quinque is the only form of the cardinal number five available, so regardless of case, number, or gender, the form "quinque" will be used.
However, a handful of these numbers do decline: one, two, and three. Furthermore, numbers starting at 200 and up may also be declinable, but that is for later. These numbers decline as normal adjectives do.
Comparing the Latin numbers listed above with later authors' extensions of the number names beyond Chuquet's nonyllion, it becomes clear that Chuquet was not consistent in his use of Latin to form the names.
In English, numbers have two forms: the word form and the numeric form. For instance, "ten" is the word form, and "10" is the numeric form. Latin has the same structure, except that the numerals are much different from English. First of all, the Romans had no zero; the Latin adjective nullus was sufficient. Second, the Romans used letters, rather than a separate set of symbols, to represent numerical values. For now, since we have only dealt with First/Second adjectives, we will pay attention to one and two; when you learn Third adjectives, you will be able to decline three. The first ten cardinal numbers in Latin are: unus, duo, tres, quattuor, quinque, sex, septem, octo, novem, decem.
Duo is actually an irregular adjective; though it slightly resembles First/Second adjectives in the plural, it forms quite differently in the dative and ablative. For now, we will stick with the conventional nominative, accusative, and genitive trio. Notice how duo is always plural.
For the higher Greek numbers, there is almost no evidence in English for the use of these words, but mathematicians sometimes need words for polygons and solid figures, so here are the appropriate prefixes and words for 13-20, 100 and 1000.
The situation with the Greek terms is a little more complex than with the Latin, but not excessively so. Latin prefixes are used for some words for polygons, although the Greek prefix is to be preferred. "Biathlon" should, by all rights, be "diathlon", "triangle" is used for a plane figure as well as angles (instead of 'triagon'), and there are very few terms for 1 and 2. Of course, this is partly because there's no such thing as a two-faced polyhedron, and not much point in describing a single athletic event as a "monathlon". Consider the Roman numeral for 3, III: it is composed of three I's, each representing 1, so the numeral is essentially the sum 1+1+1. It is therefore important to know which letters stand for which numbers.
Larger numbers were described in more roundabout ways or by using mathematical notation; indeed, one million is expressed in Latin as decies centena milia, or 10 × 100 × 1,000.
In general, these words are made by combining a prefix derived from Latin or Greek number words and a suffix indicating the type or category of the thing being counted. If you know a lot of word etymologies, you can usually figure out whether a word takes a Latin or Greek numerical prefix if you can tell whether the suffix you want to use is Latin or Greek in origin. However, if you can't work out the etymology, it's probably best to just look at the lists below, which indicate which prefixes are used with which suffixes. Besides, there are exceptions to this general rule.
The next step in learning Latin is understanding Latin numbers. Latin numbers are essentially adjectives, as they are in English, and so we will treat them as such. However, there are some nuances that must be addressed.
The ordinal numbers in Latin are declined like first and second declension adjectives. There are some oddities to note.
The History of Latin Numbers & Roman Numerals: The origin and history of this old classical numbering system was not documented by the historians of ancient Rome; however, the numerals were used by the Etruscans. The Etruscan numeric system was adapted from the Greek Attic numerals and provided the ideas for the later Roman numerals. The most likely explanation of its origin is a counting method based on the fingers: a single stroke of the pen represented one finger, which translated to the number I. The additional letters used in the numerical system are based on the old word 'centum' meaning 100 and the word 'mille' meaning 1000, giving the numerals C and M.
Latin Phrases, Numbers & Roman Numerals: The use of this ancient numerical system still survives in many walks of everyday life in modern times. King Henry VIII is correct, whereas referring to the king as Henry the 8th, or Henry 8, is not.
Below are the basic ordinal numbers in Latin with the Roman numeral corresponding to their value and their English equivalent: primus (I, first), secundus (II, second), tertius (III, third), quartus (IV, fourth), quintus (V, fifth), sextus (VI, sixth), septimus (VII, seventh), octavus (VIII, eighth), nonus (IX, ninth), and decimus (X, tenth).
Modern Latin Numbers & Roman Numerals: Latin-Roman numbers are used for the copyright dates on films, television programmes and videos; for example, the Roman numerals MMXIII translate as 2013. Now, things get complicated. For the teens, matters are made complex by the fact that instead of "duodeviginti" (literally 'two-from-twenty') and "undeviginti" ('one-from-twenty') for 18 and 19, the prefixes for them are 'decennoct' (ten-eight) and 'decennov' (ten-nine). And, what's this? Hexadecimal, not "sexadecimal"? "Hex-" is Greek, which goes against the rule set for all the other bases. I suspect, again, that prudishness has led to "hexadecimal" getting the nod over "sexadecimal". Actually, other than "hexadecimal", all these words for 13 through 19 are extremely rare to non-existent, so it might be best to just forget about them. Moving on to the decades, most of these are also quite rare. However, many nouns exist, derived from the appropriate adjectives of relation, which identify a person of a particular age; "octogenary" becomes "octogenarian", someone in their eighties.
Technically speaking, word order in Latin is very loose. Since some of the numbers have only one form, it may be difficult to ascertain to which noun a number belongs. For instance, does "octo" describe how many boys or how many dogs? Conventionally on the Latin Dictionary, adjectives directly follow their respective noun, so in this case we have eight boys walking some dogs. However, Latin poets and other texts do not necessarily follow this convention, so the sentence could become ambiguous, especially if more were added to it.
Numerical Adjectives, Greek and Latin Number Prefixes
We could have easily said that the father had girls, but we wanted to know how many girls he had, so the number four specifies how many girls there are, modifying that noun.

Now that I've outlined some of the basic features and rules to allow you to construct your own numerical words on the basis of those outlined here, I've compiled a triskaidecad of unusual facts and trivia relating to this very interesting topic.
Latin cardinal numbers convey "how many"; they are also known as "counting numbers" because they show quantity.
In this page, I discuss a curious set of unusual words: adjectives and nouns for numerical values or multiples. What do you call a group of eleven musicians? An athletic competition with six events? An event that recurs every twenty years? It can be very difficult to figure out what sort of prefix to use, and there are plenty of exceptions to the rules. Because many of these words aren't found in dictionaries (particularly as the relevant numbers get larger), having some general principles can help. Thus, where other word lists of the Phrontistery are simply listed in "word: definition" form, this page will try to show you, in tabular format, how to construct your own terms from the basic principles, and to give you a better grasp of this tricky topic. Let's begin! In the first table, I've listed the Latin words for 1 through 12 along with the appropriate prefix derived from each. For each of the categories, check the appropriate column and find the word list. In cases where a word couldn't be found in regular dictionaries, I've extrapolated from the other words and used appropriate prefixes and endings to construct the correct form; such hypothetical cases are marked with an asterisk and put in italics.
When a numeral of lesser value precedes one of greater value, subtract the smaller from the larger; otherwise, add. In other words, IV means 5-1, which is four, while VI means 5+1, which is six. Similarly, IX means 9, and XLV means 50-10+5, or 45.
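To make the addition/subtraction rule concrete, here is a minimal Python sketch of a Roman-numeral reader; the function name and structure are illustrative, not taken from any particular library.

```python
# Values of the seven basic numerals.
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Apply the rule above: subtract a numeral that precedes a larger one."""
    total = 0
    for i, letter in enumerate(numeral):
        value = ROMAN_VALUES[letter]
        if i + 1 < len(numeral) and value < ROMAN_VALUES[numeral[i + 1]]:
            total -= value  # e.g. the I in IV, or the X in XL
        else:
            total += value
    return total

assert roman_to_int("IV") == 4
assert roman_to_int("VI") == 6
assert roman_to_int("IX") == 9
assert roman_to_int("XLV") == 45
```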
A brief guide to Latin numerals from the Later Latin Society covers compound numbers: from tredecim to undeviginti inclusive, the smaller number is prefixed without et; from XXI to XCIX, either (a) the larger number comes first without et, or (b) the smaller comes first followed by et (e.g., viginti unus or unus et viginti).
These numbers do not decline. (We will cover the numbers one, two and three in Lesson 5.)
Modern use of Latin Numbers & Roman Numerals: This simple guide to the translation of Latin numbers and Roman numerals will increase your Latin vocabulary and help you learn the words associated with numbers and numerals. Although the language is ancient, we still use it in our modern world. Probably the most common example of this ancient numeric system is on many clock faces, on which the hours are marked I to XII. Latin-Roman numerals are used in English and other modern languages, especially in relation to dates. This ancient style of numbering is also used for version numbers of products (e.g. Version II) and in reports (e.g. Appendix IV). Roman numerals are used for sporting events such as the Super Bowl and the Olympics, and can also be seen on monuments, public buildings and gravestones.
A list of the Latin cardinal numbers: unus, una, unum (one); duo, duae, duo (two); tres, tres, tria (three); then quattuor, quinque, sex, septem, octo, novem, decem (four through ten).
Ordinals higher than 20th follow the same patterns and variations as those seen in first through nineteenth.
Many Roman numerals are simply added together. In our example, MMXIII translates as 2013 as follows: each 'M' stands for 1000, 'X' stands for 10, and each 'I' stands for a single unit.
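Going the other way, a small sketch (again illustrative only) repeatedly peels off the largest value that still fits, treating the subtractive pairs (CM, XC, IV, ...) as extra "digits":

```python
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def int_to_roman(n: int) -> str:
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)  # how many times this value fits
        out.append(symbol * count)
    return "".join(out)

assert int_to_roman(2013) == "MMXIII"
assert int_to_roman(45) == "XLV"
```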
How to count in Latin: Latin is not spoken by any modern nation; such a language is normally called a dead (extinct) or ancient language. Variations are present in the Latin ordinals for tenth through nineteenth. If that seems strange, recall that the English ordinals for 11th (eleventh) and 12th (twelfth) are formed differently than the higher ones (thirteenth through nineteenth).
Numbers are like adjectives in nearly every way: they describe nouns. The only difference is that numbers answer a different question than conventional adjectives. Rather than describing qualitative characteristics, numbers answer quantitative ones. Rather than specifying how a group of people are (i.e. happy or sad), we want to know how many of the people there are, or perhaps in what order they are. Numbers answer these questions.
In general, however, Latin number vocabulary has entered the English language as combining forms, which, in the Latin language itself, were sometimes quite different from the independent number words. The Latin numerals are the words used to denote numbers within the Latin language; they are essentially based on their Proto-Indo-European ancestors.
In contrast, cardinal numbers tell you how many objects there are. The cardinal numbers in Latin are "unus," "duo," "tres"; the English versions are "one," "two," "three." The genitive case is a descriptive case, covering: possession, e.g. the dog of Marcus or Marcus's dog (canis Marcī); origin, e.g. Marcus of Rome (Marcus Romae); and relation, e.g. a thing of beauty (rēs pulchrae).
So far, so good. We can see that there are a few exceptions to the general rule, particularly for the numbers 1 and 2, and in some cases, such as "quinary / quinquenary", multiple forms exist. Since I'm not being hardline about "proper" forms, I'm including all the forms normally used, even when they don't strictly follow the rules. Up to 12, the Latin prefixes hold up pretty well; most of the forms exist. Of all the hypotheticals, only "sexilateral" failed to catch on; my theory is that it sounds too lewd to have been adopted as the term for something with six sides.
In Latin, the numbers from 1 to 10 are unique and therefore need to be memorized individually. Numbers from 11 upwards are formed using a pattern: the first two or three letters of the unit plus ten (decim). For example, 13, tredecim, is formed as 10 + 3. Higher numerals are formed by stating twenty, thirty, etc., followed by the digit: 22 = vīgintī duo; 45 = quadrāgintā quīnque.
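The compounding pattern lends itself to a small worked example. The Python sketch below builds Latin cardinal words for 1-99 from the forms quoted in this article (macrons omitted); it is illustrative only and ignores the declension of unus, duo and tres.

```python
UNITS = ["", "unus", "duo", "tres", "quattuor", "quinque",
         "sex", "septem", "octo", "novem"]
TEENS = {11: "undecim", 12: "duodecim", 13: "tredecim", 14: "quattuordecim",
         15: "quindecim", 16: "sedecim", 17: "septendecim",
         18: "duodeviginti", 19: "undeviginti"}  # 18 and 19 are subtractive
TENS = {10: "decem", 20: "viginti", 30: "triginta", 40: "quadraginta",
        50: "quinquaginta", 60: "sexaginta", 70: "septuaginta",
        80: "octoginta", 90: "nonaginta"}

def latin_cardinal(n: int) -> str:
    if 1 <= n <= 9:
        return UNITS[n]
    if n in TEENS:
        return TEENS[n]
    if n in TENS:
        return TENS[n]
    tens, unit = divmod(n, 10)
    # Classical usage also allows subtractive forms near the next ten
    # (e.g. duodetriginta for 28); this sketch just juxtaposes tens + unit.
    return f"{TENS[tens * 10]} {UNITS[unit]}"

assert latin_cardinal(22) == "viginti duo"
assert latin_cardinal(45) == "quadraginta quinque"
```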
Table 3 lists the higher Latin numbers and prefixes. I'll stick to listing the numerical bases and adjectives of relation, and let you figure out the rest on your own, on the pattern described above.
Latin Numbers & Roman Numerals, the rules: for larger numbers, the Romans invented new numeric symbols, so the number 5 was V, the number 10 was X, and so on.
Hypertension and its associated factors in Hosanna town, Southern Ethiopia: community based cross-sectional study
Likawunt Samuel Asfaw1,
Samuel Yohannes Ayanto2 &
Fiseha Laemengo Gurmamo3
This study was conducted to determine the prevalence of hypertension and its associated factors among residents of Hosanna town in Hadiya Zone.
The overall prevalence of hypertension was 30% among the study participants. Of the participants identified as hypertensive, only 24.6% knew their hypertensive status. The odds of being hypertensive were significantly higher among males when compared to females (adjusted odds ratio (AOR) 1.9, confidence interval (CI) 1.14–3.23) and among married participants as compared to their unmarried counterparts (AOR 4.1; CI 1.10–16.18). High prevalence of, and increased risk for, hypertension were noted among the study participants in the study area. Aerobic physical activity was reported by only 22.9% of the study participants. This evidence may suggest the need for urgent interventions.
Hypertension is persistent elevation of blood pressure (BP) above the normal range [1] and is classified into different groups based on causes and degree of severity [2,3,4,5].
Hypertension has become a major global public health problem. It is estimated to affect about 1 billion people and to account for 12.8% of all deaths worldwide [6, 7]. In Africa, 46% of the adult population has hypertension, the highest prevalence of any world region [6, 8]. Similarly, the figure for sub-Saharan Africa was 47.5% [9, 10].
Ethiopia shares a similar profile with most sub-Saharan African countries. Findings of the World Health Organization on the prevalence of hypertension showed that 35.2% of the community in Ethiopia has a high likelihood of being hypertensive [6]. To a large extent, hypertension is associated with environmental factors, rapid urbanization, and lifestyle changes [11, 12]. There are conflicting reports on the association between hypertension and gender. In a prevalence study in rural Bareilly, there was no significant difference between males and females [13, 14]. However, in most global and Ethiopian studies, hypertension is more prevalent among males than females [14,15,16]. Obesity, tobacco smoking, and harmful alcohol use are significantly associated with hypertension [17,18,19,20,21,22,23,24,25]. The majority of previous studies done in Ethiopia were based on hospital records and reported contradictory findings. Therefore, the aim of this study was to assess the prevalence and associated factors of hypertension in a community sample.
Study design and setting
The study was conducted in Hosanna town, the capital of Hadiya Zone, located 232 km southwest of Addis Ababa, the capital of Ethiopia. There were 16,707 households in the town. A community-based cross-sectional study was carried out among residents of the town in May 2014 [26].
Sample size and sampling technique
The desired sample size for our study was estimated by taking the prevalence of hypertension (35.2%) from a previous study [6], a 95% confidence level, a 5% margin of error and a design effect of 1.5. Consequently, the final sample size was determined to be 525 participants. The sample size was calculated using the formula:
$$n = \frac{Z_{\alpha/2}^{2}\; p\,(1-p)}{d^{2}}$$
$$n = \frac{(1.96)^{2}\,(0.35)(1-0.35)}{(0.05)^{2}} \approx 350; \qquad n_{\text{final}} = 350 \times 1.5 = 525$$
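The calculation can be reproduced in a few lines; this is a sketch using the reported inputs, with the prevalence rounded to 0.35 as in the worked formula above.

```python
z = 1.96     # critical value for a 95% confidence level
p = 0.35     # expected prevalence of hypertension [6]
d = 0.05     # margin of error
deff = 1.5   # design effect

n0 = z ** 2 * p * (1 - p) / d ** 2   # ≈ 349.6, rounded to 350
n = round(n0) * deff
print(round(n0), n)                  # 350 525.0
```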
The final sample size was proportionally allocated to the sub-administrative units of the town. A sampling frame was created for each sub-unit, and randomly generated numbers were used to select the households by simple random sampling. From each selected household, one participant satisfying the inclusion criteria was selected by the lottery method.
Inclusion and exclusion criteria
Individuals below the age of 25 years, those above the age of 64 years, pregnant mothers, and disabled people were excluded from the study. Pregnant women and individuals above the age of 64 were excluded because they are at the highest risk for hypertension, and their inclusion could preclude generalization. Conversely, young people below the age of 25 years are at low risk for hypertension, and disabled people were not eligible for the exercise-related inquiries relevant to our research, which might otherwise have distorted the population estimate.
Data collection instrument and measurement
The WHO STEPS instrument and the global physical activity questionnaire (GPAQ) were modified and used [27, 28]. The tool has three major parts: socio-demographic characteristics, behavioral profile, and physical measurements. The modified instrument was translated into the local language, Amharic. Data were collected through interviewer-administered structured questionnaires and physical body measurements.
Two days of training were provided for data collectors and supervisors on research ethics, data collection procedures, and the contents of the instrument to improve the quality of our data. Supportive supervision was carried out by the supervisors on a daily basis during the data collection period. Completed questionnaires were checked daily for completeness and consistency.
Blood pressure was measured after the participant had rested for at least 5 min. Two measurements at a 10-min interval were taken from the right arm with a mercury sphygmomanometer. The mean of the two measurements was recorded as the BP for each participant.
Height was measured using a fixed height-measuring board, with the participant upright and the heels, shoulders and buttocks touching the vertical board; the value was recorded to the nearest millimeter. Weight was measured using a calibrated scale with participants in light clothing and barefoot; the reading was taken to the nearest 0.1 kg. Waist circumference was measured at the midpoint between the lower margin of the last palpable rib and the top of the iliac crest, using a non-elastic tape measure. Each participant was asked to breathe naturally, and the measurement was taken at the end of normal expiration, when the lungs are at their residual capacity.
Data analysis techniques
The collected data were cleaned, entered into Epi-Data version 3.2, and exported to STATA version 12.0 for analysis. Descriptive statistics and multivariable logistic regression were used to analyze the data. Candidate variables with a P value < 0.2 in the bivariable model were entered into the multivariable model to adjust for confounding. The 95% CI of the corresponding odds ratio (OR) was used to assess the degree of association, with P < 0.05 declaring significance.
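As a sketch of this modelling strategy (not the authors' actual code, which was run in STATA), the same screen-then-adjust approach could be expressed in Python with statsmodels; the data file and column names here are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hosanna_steps.csv")  # hypothetical file with a binary 'hypertensive' column

# Bivariable screening: retain predictors with P < 0.2.
candidates = []
for var in ["sex", "marital_status", "aerobic_activity", "age", "bmi"]:
    m = smf.logit(f"hypertensive ~ {var}", data=df).fit(disp=False)
    if m.pvalues.drop("Intercept").min() < 0.2:
        candidates.append(var)

# Multivariable model; exponentiated coefficients are the adjusted odds ratios.
model = smf.logit("hypertensive ~ " + " + ".join(candidates), data=df).fit(disp=False)
print(np.exp(model.params))      # AORs
print(np.exp(model.conf_int()))  # 95% CIs on the OR scale
```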
Variables and definitions
A participant was regarded as hypertensive when an average SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg was recorded, and/or the participant was currently on antihypertensive medication.
The body mass index (BMI) was interpreted according to the WHO classification as underweight (BMI < 18.5 kg/m2), normal (BMI 18.5–24.9 kg/m2), overweight (BMI 25.0–29.9 kg/m2) and obese (BMI ≥ 30.0 kg/m2).
Men having waist circumference greater than 94 cm were identified as having increased risk for hypertension and metabolic complications whereas men having waist circumference greater than 102 cm were identified as having substantially increased risk for hypertension and metabolic complications.
Women having waist circumference greater than 80 cm were identified as having increased risk for hypertension and metabolic complications whereas women having waist circumference greater than 88 cm were identified as having substantially increased risk for hypertension and metabolic complications.
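The case definitions above translate directly into code. The following Python sketch encodes the stated cut-offs; the function names are illustrative.

```python
def is_hypertensive(sbp: float, dbp: float, on_medication: bool = False) -> bool:
    """Average SBP >= 140 mmHg and/or DBP >= 90 mmHg, or current treatment."""
    return sbp >= 140 or dbp >= 90 or on_medication

def bmi_category(weight_kg: float, height_m: float) -> str:
    """WHO BMI classification as stated in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def waist_risk(waist_cm: float, sex: str) -> str:
    """Risk of hypertension and metabolic complications by waist circumference."""
    increased, substantial = (94, 102) if sex == "male" else (80, 88)
    if waist_cm > substantial:
        return "substantially increased"
    if waist_cm > increased:
        return "increased"
    return "not increased"
```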
A total of 524 participants were involved in the study, giving a response rate of 99.8%. The majority (52.9%) of study participants were male. The mean age of the study participants was 35.4 ± 7.7 (SD) years. The largest occupational group (38.5%) were government employees. Nearly half (48.8%) of the study participants were college or university graduates. The average monthly income of the study participants was 72.31 ± 916.33 USD. The average number of individuals per household was nearly 6 (Table 1). One hundred twenty-two (23.3%) participants reported alcohol consumption on a daily basis, and 77 (14.7%) were smokers.
Table 1 Socio-demographic characteristics of study participants in Hosanna town 2014
The mean systolic and diastolic BP readings for the study participants were 118.37 ± 13.42 (SD) mmHg and 74.24 ± 11.18 (SD) mmHg respectively. The prevalence of hypertension among the study participants was 30% (CI 26.0–33.8%), of whom only 39 (24.6%) knew their hypertensive status (Fig. 1).
Fig. 1 Distribution of hypertension by sex of participants
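The reported confidence interval can be checked with a simple Wald interval, which reproduces the published figures to within rounding:

```python
from math import sqrt

n, p = 524, 0.30
se = sqrt(p * (1 - p) / n)            # standard error of the proportion
low, high = p - 1.96 * se, p + 1.96 * se
print(f"{100 * low:.1f}%-{100 * high:.1f}%")  # ≈ 26.1%-33.9%
```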
The mean BMI of the study participants was 23.79 ± 2.60 kg/m2 (SD). Fifty-four (10.3%) participants were overweight and 28 (5.3%) were obese.
The mean waist circumference of men was 86.9 ± 4.9 (SD) cm. The vast majority (84.1%) of men had a waist circumference of ≤ 94 cm, whereas 36 (12.9%) had > 94 cm and only 3.0% had > 102 cm. These findings indicate that 12.9% of men had increased risk for hypertension and metabolic complications.
The mean waist circumference for women was 83.6 ± 8.35 (SD) cm. One hundred four (42.1%) women had a waist circumference of < 80 cm. Nearly one in three women (29.1%) had a waist circumference of > 80 cm, and 71 (28.7%) had > 88 cm; that is, more than half of the women had elevated waist circumference measurements. Accordingly, 29.1% of women had increased risk, and 28.7% had substantially increased risk, for hypertension and metabolic complications. Overall, 37.2% of participants (15.8% of men and 61.1% of women) were identified as having increased risk for hypertension and metabolic complications.
Fifty-seven (10.9%) and 108 (20.6%) of the study participants undertook vigorous and moderate-intensity physical activities respectively. Only 22.9% of the participants reported aerobic physical activity. The average estimated sedentary time among the study participants was 10.25 ± 3.1 (SD) hours.
Factors associated with hypertension
The presence or absence of significant associations between the independent and outcome variables was determined. Accordingly, sex, marital status and aerobic physical activity were significantly associated with hypertension. The odds of hypertension were 1.92 times higher among men as compared to women (AOR 1.92; CI 1.14–3.23). The likelihood of hypertension was significantly higher among married participants when compared to unmarried ones (AOR 4.1; CI 1.0–16.18). Participants who did not undertake aerobic physical activity were three times more likely to develop hypertension than participants who did (AOR 3.0; CI 1.1–6.5) (Table 2).
Table 2 Bivariate and multivariate logistic regression analysis of factors associated with hypertension in Hosanna (n = 524), Ethiopia, 2014
The overall prevalence of hypertension in this study was 30.0%. This is comparable with the report from Addis Ababa (31.5%) [25], but higher than the rates reported for Jimma town (13.2%) [29], Gondar town (28.3%) [12] and Sidama Zone (9.9%) [30]. However, the prevalence of hypertension in our study was lower than the national average (35.2%) [6] and the sub-regional prevalence for Africa (47.5%) [9, 10]. The national average was higher because it was estimated from health facility reports, which may not represent the true magnitude in the general public. The remaining differences could be explained by differences in lifestyle factors, including diet, exercise and substance use. Among hypertensive participants, 24.6% were aware of their hypertensive status, which matches other study findings [28, 29]. This might indicate low awareness of, and low screening practices for, hypertension in the community.
The prevalence of hypertension was significantly higher among male participants than females. In contrast, previous study reports showed that the variation in the occurrence of hypertension between the two sexes was not statistically significant [12]. Conversely, the higher rate of hypertension among men in our study was congruent with other previous reports [7, 9, 14, 16]. Most likely, this difference could be explained by the fact that more men engage in risky behaviors, such as excess alcohol consumption, smoking tobacco products, and khat chewing, that predispose to hypertension. However, the controversy between reports on the association between sex and hypertension warrants further study.
The prevalence of hypertension was found to be higher among older age groups in previous studies [25, 29,30,31]. In our study, age was self-reported and might not reflect participants' exact age, owing to the absence of birth certificates in the majority of cases. Although the risk of hypertension increases with advancing age for biological reasons, substance use in younger age groups may have balanced the prevalence of hypertension across all age groups. These facts could also explain the importance of hypertension at any age.
In our study, marital status was significantly associated with hypertension, and married participants were more likely to develop hypertension when compared to their unmarried counterparts. A community-based study in the Jazan region of Saudi Arabia also reported an association between marital status and hypertension [9]. This could be explained by married couples being exposed to disputes and stressful conditions across different dimensions of life, which could increase their risk of hypertension.
A high prevalence of hypertension was noted among the study participants, and only a few of them were aware of their hypertensive status. The community is at increased risk for hypertension and metabolic complications; women in particular were more likely than men to have waist circumference measurements indicating substantially increased risk. Sex, marital status and limited exercise were significantly associated with hypertension. The increased prevalence of hypertension and its associated factors imply the need for urgent intervention by designing strategies to increase public awareness of risks, preventive measures and screening behaviors.
This study is cross-sectional; therefore, we cannot ascribe causality to any of the associated factors. Moreover, the prevalence estimate may not be fully representative, as some severe cases may die soon after they develop the disease.
DALY: disability-adjusted life year
EDHS: Ethiopia Demographic and Health Survey
GPAQ: global physical activity questionnaire
HBP: high blood pressure
HSDP: health sector development program
mmHg: millimeters of mercury
NCD: non-communicable disease
SBP: systolic blood pressure
SSA: sub-Saharan Africa
US Department of Health and Human Service, National Institute of Health, National Heart, Lung, and Blood Institute, National High Blood Pressure Education Program. JNC-7: the seventh report of the national committee on prevention, detection, evaluation and treatment of high blood pressure. NIH; 2003.
Oparil S. Pathogenesis of hypertension. Ann Intern Med. 2003;139:761–70.
National Heart Foundation of Australia: National Blood pressure and vascular diseases Advisory Committee. Guide to management of hypertension; 2008. www.heartfoundation.org.aupdf. Accessed 1 Jan 2014.
Oparil S. Physiology in medicine: a series of Articles linking medicine with science: pathogenesis of hypertension. Anns Intern Med. 2003;139:761–76.
Maryon-Davis A. Faculty of public health of the royal colleges of physicians of UK hypertension—the silent killers; 2005. www.fph.org.ukpdf. Accessed 30 Dec 2013.
Alwan A. Global status report on non-communicable diseases 2010. Geneva, Switzerland: World Health Organization; 2010. p. 2–17.
Dreisbach AW. Epidemiology of hypertension. Medscape drugs, diseases and procedures reference. 11 July 2013.
Vijver SVD, Akiny H, Oti S, Olajide A, Agyemang C, Aboderin I, Kyobutungi C. Status report on hypertension in Africa—consultative review for the session of the African Union conference of minister's health on non-communicable diseases. Pan Afr Med J. 2013;16:38.
Addo J, Smeeth L, Leon DA. Hypertension in sub-Saharan Africa: a systematic review. Hypertension. 2007;50:1012–8. https://doi.org/10.1161/hypertensionAHA.107.09336.
Dzudie A, Kengere AP, Muna WFT, Ba H, Menanga A, Kouam CK, et al. Prevalence, awareness, treatment and control of hypertension in a self-selected Sub-Saharan African urban population: a cross-sectional study. BMJ Open. 2012. https://doi.org/10.1136/bmjopen-2012-001217.
Chelkeba L, Dessie S. Antihypertensive medication adherence and associated factors at Dessie Hospital, Northeast Ethiopia. Int J Res Med Sci. 2013;1:191–7. https://doi.org/10.5455/2320-6012.ijrms20130802.
Awoke A, Awoke T, Alemu S, Megabiaw B. Prevalence and associated factors of hypertension among adults in Gondar, Northwest Ethiopia: a community based cross-sectional study. BMC Cardiovasc Disord. 2012;12:113–7.
Esam SM, Husain AS. Prevalence of pre hypertension and hypertension in rural Bareilly. Nat J Med Res. 2012;2:291–4.
Ibrahim NKR, Hijazi NA, Al-Bar A. Prevalence and determinants of pre hypertension and hypertension among preparatory and secondary school teachers in Jeddah. J Egypt Public Health Assoc. 2008;83:184–203.
Tesfaye F, Byass P, Berhane Y, Bonita R, Wall S. Association of smoking and khat (Catha edulis Forsk) use with high blood pressure among adults in Addis Ababa, Ethiopia. Prev Chronic Dis. 2008;5. http://www.cdc.gov/pcd/issues/2008/jul/07-0137html. Accessed 20 Feb 2014.
Ekwunife O, Udeogaranya P, Nwatu T. Prevalence, awareness, treatment and control of hypertension in a Nigerian population. Health. 2010;2:731–5. https://doi.org/10.4236/health.2010.27111.
Pongwecharak J, Treeranurat T. Screening for pre-hypertension and elevated cardiovascular risk factors in a Thai community pharmacy. Pharm World Sci. 2010;32:329–33. https://doi.org/10.1007/s11096-010-9373-1.
Amira CO, Sokunbi DOB, Sokunbi A. The prevalence of obesity and its relationship with hypertension in an urban community: data from, world kidney day screening program. Int J Med Biomed Res. 2012;1:104–10.
Narksawat K, Chansatitporn N, Panket P, Hangsantea J. Screening high risk population for hypertension and type 2 diabetes among Thais. WHO South-east Asia. J Public Health. 2012;1:320–9.
John J, Muliyil J, Balraj V. Screening for hypertension among adults: a primary care is high risk approach. Indian J community Med. 2010;35:67–9. https://doi.org/10.4103/0970-0218.62561html.
Sliwa K, Stewart S, Gersh BJ. Hypertension: a global perspective circulation. Hypertension. 2011;123:2892–6. https://doi.org/10.1161/circulationaha.110992362.
Schutte AE, Schutte R, Huisman HW, Rooyen JM, Foume CMT, Malan NT, et al. Are behavioral risk factors to be blamed for the conversion of optimal blood pressure to hypertensive status in black South Africans? A 5-year prospective study. Int J Epidmiol. 2012;41:1114–23. https://doi.org/10.1093/IJE/DYS106.
WHO. 2008–2013 action plan for global strategy for the prevention and control of non-communicable diseases: Geneva: World Health Organization; 2008.
Queensland Health. The health of Queenslanders 2012: Advancing good health. Fourth report of the chief health officer Queensland. Brisbane, October 2012.
Tesfaye F, Byass P, Wall S. Population based prevalence of high blood pressure among adults in Addis Ababa: uncovering a silent epidemic. BMC Cardiovasc Disord. 2009;9:39.
CSA. Summary and statistical report of the 2007 population and Housing Census, Addis Ababa. Ethiopia: Population and Housing Census Commission; 2008.
WHO. World health organization Global recommendation on physical activity for health. Geneva: World health organization; 2011. http://www.who.int/dietphysicalactivity/pa/en/index.html. Accessed 26 May 2014.
WHO. WHO STEPS approach to chronic disease risk factors surveillance (STEPS). Geneva: WHO; 2005. www.who.int/chp/stepspdf. Accessed 26 May 2014.
Gudina EK, Michael Y, Assegid S. Prevalence of hypertension and its risk factors in Southwest Ethiopia: a community based cross-sectional survey. Integr Blood Press Control. 2013;6:111–7.
Giday A, Taddese B. Prevalence and determinants of hypertension in rural and urban areas of southern Ethiopia. Ethiop Med J. 2011;49:139–47.
WHO. Global recommendations on physical activity for health. Geneva: World Health Organization; 2010.
LSA conceived and designed the study idea, developed proposal, organized the data collection tool, created data entry template, interpreted findings and wrote the manuscript. SYA edited the proposal and approved the manuscript. FLG edited the proposal and approved the manuscript. All authors read and approved the final manuscript.
The authors would like to thank the Research and Community Service office of Hosanna College of Health Sciences. We are also grateful to Hosanna town residents, the data collectors and the Hosanna town health office for their cooperation during the entire process of data collection.
The data are available from the corresponding author on reasonable request.
This study was approved by the institutional review board of Hosanna College of Health Sciences. Informed verbal consent was obtained from all study participants before data collection, after the objectives of the research had been explained. Verbal rather than written consent was used because the data sought were purely informational: no human samples were collected and participants were not subjected to any experiment, which would have required national ethical approval in our context. We obtained ethical clearance for the research to be conducted in this way.
Hosanna College of Health Sciences funded the study. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Department of Nursing, Hosanna College of Health Sciences, Post Box 159, Hosanna, Ethiopia
Likawunt Samuel Asfaw
Department of Midwifery, Hosanna College of Health Sciences, Hosanna, Ethiopia
Samuel Yohannes Ayanto
Hosanna College of Health Sciences, Hosanna, Ethiopia
Fiseha Laemengo Gurmamo
Correspondence to Likawunt Samuel Asfaw.
Asfaw, L.S., Ayanto, S.Y. & Gurmamo, F.L. Hypertension and its associated factors in Hosanna town, Southern Ethiopia: community based cross-sectional study. BMC Res Notes 11, 306 (2018). https://doi.org/10.1186/s13104-018-3435-1
Accepted: 11 May 2018
Cardio-vascular disorders
January 2022, 27(1): 141-165. doi: 10.3934/dcdsb.2021035
Monotonic and nonmonotonic immune responses in viral infection systems
Shaoli Wang 1, Huixia Li 2 and Fei Xu 3
School of Mathematics and Statistics, Bioinformatics Center of Henan University, Kaifeng 475001, Henan, China
Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, Jiangsu, China
Department of Mathematics, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
* Corresponding author: Shaoli Wang
Received September 2019 Revised November 2020 Published January 2022 Early access January 2021
Fund Project: This work is supported by Science and Foundation of Technology Department of Henan Province (No.192102310089), Foundation of Henan Educational Committee (No.19A110009), Natural Science Foundation of Henan (No. 202300410045) and Grant of Bioinformatics Center of Henan University (No. 2019YLXKJC02)
In this paper, we study monotonic and nonmonotonic immune responses in two- and three-dimensional viral infection systems. Our results show that viral infection systems with a monotonic immune response exhibit no bistability, whereas systems with a nonmonotonic immune response can be bistable under certain conditions. For the immune intensity, we obtain two important thresholds: the post-treatment control threshold and the elite control threshold. When the immune intensity is below the post-treatment control threshold, the virus rebounds; when it exceeds the elite control threshold, the virus is brought under control. Between the two thresholds lies a bistable interval, in which the system can exhibit bistability. Selecting the rate at which immune cells are stimulated by the virus as a bifurcation parameter for the nonmonotonic immune response, we prove that the system exhibits saddle-node and transcritical bifurcations.
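The threshold logic described above is simple enough to state programmatically. Below is a minimal Python sketch — not code from the paper — classifying an immune intensity c against the two thresholds reported for system (3) in Figure 1; the function name and example values are illustrative only.

```python
# Thresholds reported for system (3) in Figure 1 of the paper.
POST_TREATMENT_THRESHOLD = 0.2500   # c_2
ELITE_CONTROL_THRESHOLD = 0.6505    # c_1**


def classify_immune_intensity(c):
    """Classify the long-run outcome for immune intensity c (hypothetical helper)."""
    if c < POST_TREATMENT_THRESHOLD:
        return "viral rebound"
    if c > ELITE_CONTROL_THRESHOLD:
        return "virus under control"
    return "bistable interval: outcome depends on the initial condition"


for c in (0.2, 0.37, 0.7):   # illustrative values only
    print(f"c = {c} /day -> {classify_immune_intensity(c)}")
```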
Keywords: Monotonic and nonmonotonic immune response, post-treatment control threshold, elite control threshold, saddle-node bifurcation, transcritical bifurcation.
Mathematics Subject Classification: Primary: 34D20, 37N25; Secondary: 92B05.
Citation: Shaoli Wang, Huixia Li, Fei Xu. Monotonic and nonmonotonic immune responses in viral infection systems. Discrete & Continuous Dynamical Systems - B, 2022, 27 (1) : 141-165. doi: 10.3934/dcdsb.2021035
Figure 1. Bifurcation diagram of system (3). The solid line represents the stable equilibrium of infected CD4+ T cells and the dashed line represents the unstable equilibrium of infected CD4+ T cells. The post-treatment control threshold is $ c_{2} = 0.2500 $, the elite control threshold is $ c^{**}_1\approx0.6505 $ and the bistable interval is $ (0.2500, 0.6505). $ Here, $ c = 0.37\; \; \mbox{day}^{-1} $ and the values of other parameters are listed in (4)
Figure 2. Time histories and trajectories of system (3) with different initial conditions. The system has a stable equilibrium $ E^{(2)}_{1} $. Here, $ c = 0.2\; \; \mbox{day}^{-1} $ is less than the post-treatment control threshold $ P_I $ and other parameter values are listed in (4)
Figure 3. Time histories and trajectories of system (3) with different initial conditions. Here, system (3) has two different stable equilibria $ E^{(2)}_{1} $ and $ E_{-}^{2*} $ with $ c = 0.37\; \; \mbox{day}^{-1} $. Other parameter values are listed in (4)
Figure 4. Time histories and trajectories of system (3) with different initial conditions. System (3) only has the positive equilibrium $ E_{-}^{2*} $, which is stable with $ c = 0.65\; \; \mbox{day}^{-1} $. Other parameter values are listed in (4)
Figure 5. Bifurcation diagram of system (6). The solid line is the stable equilibrium and the dashed line denotes the unstable equilibrium. The post-treatment control threshold is $ c_{2} = 2.5000 $, the elite control threshold is $ c^{**}_2\approx3.5278 $ and the bistable interval is $ (2.5000, 3.5278). $ Here, $ c = 3\; \; \mbox{day}^{-1} $ and other parameter values are listed in (7)
Figure 6. Time histories and phase portraits of system (6). System (6) has two different stable equilibria $ E^{(4)}_{1} $ and $ E_{*}^{4-} $. Here, $ c = 3\; \; \mbox{day}^{-1} $ and other parameter values are listed in (7). We choose different initial values
Figure 7. Phase portraits of system (6). (A) Choosing $ c = 2\; \; \mbox{day}^{-1} $, which is less than the post-treatment control threshold $ c_{2} = 2.5000 $, system (6) only has a stable equilibrium $ E^{(4)}_{1} $; (B) Choosing $ c = 4\; \; \mbox{day}^{-1} $, which is greater than the elite control threshold $ c^{**}_2\approx3.5278 $, system (6) only has the stable equilibrium $ E_{*}^{4-} $. Other parameter values are listed in (7)
Table 1. The stabilities of the equilibria and the behaviors of system (3) in the case $ 1<\mathcal {R}^{(2)}_{0}<\mathcal {R}^{(1)}_{c} $
$ E^{(2)}_{0} $ $ E^{(2)}_{1} $ $ E_{*}^{2-} $ $ E_{*}^{2+} $ System (3)
$ R^{(2)}_{0}<1 $ GAS — — — Converges to $ E^{(2)}_{0} $
$ 1<R^{(2)}_{0}<R^{(1)}_{c}, $ $ 0<c<c^{**}_1 $ US LAS — — Converges to $ E^{(2)}_{1} $
$ 1<R^{(2)}_{0}<R^{(1)}_{c}, $ $ c^{**}_1<c $ US US LAS — Converges to $ E_{*}^{2-} $
Table 2. The stabilities of the equilibria and the behaviors of system (3) in the case $ \mathcal {R}^{(2)}_{0}>\mathcal {R}^{(1)}_{c} $
$ E^{(2)}_{0} $ $ E^{(2)}_{1} $ $ E_{*}^{2-} $ $ E_{*}^{2+} $ System (3)
$ R^{(2)}_{0}>1 $, $ 0<c<c_{2} $ US LAS — — Converges to $ E^{(2)}_{1} $
$ R^{(2)}_{0}>R^{(1)}_{c}>1, $ $ c_{2}<c<c^{**}_1 $ US LAS LAS US Bistable
$ R^{(2)}_{0}>R^{(1)}_{c}>1, $ $ c>c^{**}_1 $ US US LAS — Converges to $ E_{*}^{2-} $
Table 3. The stabilities of the equilibria and the behaviors of system (6) in the case $ 1<\mathcal {R}^{(4)}_{0}<\mathcal {R}^{(2)}_{c} $
$ E^{(4)}_{0} $ $ E^{(4)}_{1} $ $ E_{*}^{4-} $ $ E_{*}^{4+} $ System (6)
$ 1<R^{(4)}_{0}<R^{(2)}_{c}, $ $ 0<c<c^{**}_2 $ US GAS — — Converges to $ E^{(4)}_{1} $
$ 1<R^{(4)}_{0}<R^{(2)}_{c}, $ $ c^{**}_2<c $ US US GAS — Converges to $ E_{*}^{4-} $
Table 4. The stabilities of the equilibria and the behaviors of system (6) in the case $ \mathcal {R}^{(4)}_{0}>\mathcal {R}^{(2)}_{c} $
$ E^{(4)}_{0} $ $ E^{(4)}_{1} $ $ E_{*}^{4-} $ $ E_{*}^{4+} $ System (6)
$ R^{(4)}_{0}>1 $, $ 0<c<c_{2} $ US GAS — — Converges to $ E^{(4)}_{1} $
$ R^{(4)}_{0}>R^{(2)}_{c}>1, $ $ c_{2}<c<c^{**}_2 $ US GAS GAS US Bistable
$ R^{(4)}_{0}>R^{(2)}_{c}>1, $ $ c>c^{**}_2 $ US US GAS — Converges to $ E_{*}^{4-} $
Prevention of transmission of Babesia canis by Dermacentor reticulatus ticks to dogs treated orally with fluralaner chewable tablets (Bravecto™)
Janina Taenzler1,
Julian Liebenberg2,
Rainer K.A. Roepke1 &
Anja R. Heckeroth1
Parasites & Vectors volume 8, Article number: 305 (2015)
The preventive effect of fluralaner chewable tablets (Bravecto™) against transmission of Babesia canis by Dermacentor reticulatus ticks was evaluated.
Sixteen dogs, tested negative for B. canis by PCR and IFAT, were allocated to two study groups. On day 0, dogs in one group (n = 8) were treated once orally with a fluralaner chewable tablet according to label recommendations, and dogs in the control group (n = 8) remained untreated. On days 2, 28, 56, 70 and 84, dogs were infested with 50 (±4) B. canis-infected D. reticulatus ticks, with tick in situ thumb counts performed 48 ± 4 h post-infestation. Prior to each infestation, the D. reticulatus ticks were confirmed to harbour B. canis by PCR analysis. On day 90, ticks were counted and removed from all dogs. Efficacy against ticks was calculated for each assessment time point. After treatment, all dogs were physically examined in conjunction with blood collection for PCR every 7 days, blood samples for IFAT were collected every 14 days, and each dog's rectal body temperature was measured thrice weekly. A blood smear was taken from any dog displaying symptoms of babesiosis or testing PCR positive; if the smear was positive, the dog was rescue treated and replaced with a replacement dog. The preventive effect was evaluated by comparing infected dogs in the treated group with infected dogs in the untreated control group.
All control dogs became infected with B. canis, as confirmed by PCR and IFAT. None of the 8 treated dogs became infected with B. canis, as IFAT and PCR were negative throughout the study until day 112. Fluralaner chewable tablet was 100 % effective against ticks on days 4, 30, 58, and 90 and an efficacy of 99.6 % and 99.2 % was achieved on day 72 and day 86 after treatment, respectively. Over the 12-week study duration, a 100 % preventive effect against B. canis transmission was demonstrated.
A single oral administration of fluralaner chewable tablets effectively prevented the transmission of B. canis by infected D. reticulatus ticks over a 12-week period.
Canine babesiosis, caused by protozoa of the genus Babesia transmitted through the bite of a vector tick, is a clinically important tick-borne disease. In Europe to date, four Babesia species known to affect dogs have been identified: Babesia canis, Babesia vogeli, Babesia gibsoni and Babesia vulpes sp. nov., previously known as Babesia microti-like [1–3]. Of these species, B. canis is the most widely distributed in Europe, coinciding with the distribution of its known vector Dermacentor reticulatus, the ornate dog tick. B. vogeli is most often found around the Mediterranean basin, where Rhipicephalus sanguineus is the predominant tick species. Babesia vulpes sp. nov. appears to be centred in the northwest of Spain, whereas B. gibsoni is reported more sporadically [2, 4].
Babesia spp. are intracellular protozoa inhabiting the red blood cells of the host. The clinical signs of babesiosis in dogs vary from mild transient illness to acute disease due to severe haemolysis that rapidly results in death. Clinical findings include pale mucous membranes, anorexia, icterus, pyrexia, and splenic and hepatic enlargement [2]. However, the severity of the disease depends on various factors, such as the Babesia species involved, the age and immune status of the dog, and the presence of other infectious diseases [4].
Worldwide, canine babesiosis is one of the most eminent tick-borne diseases [5]. Due to increasing pet ownership, more owners travelling with their pets and the ability of vector arthropods to establish themselves in new localities [6], ticks and tick-borne diseases are spreading throughout the world and are no longer restricted to certain areas.
Once an infected tick has attached to the dog, the risk of pathogen transmission from the tick to the dog increases with sustained feeding. In most tick-borne disease systems, after initial tick attachment, a feeding period of at least 24 to 48 h is required before transmission of protozoa occurs [7]. To prevent pathogen transmission, it is necessary to kill the infected tick within this time period. To quantify the dynamics of transmission of tick-borne pathogens, transmission-blocking tick models have been developed. Such a model includes a sufficient number of treated dogs to test the duration of preventive activity, plus an untreated control group in which the majority of dogs become infected with the tick-borne pathogen.
In the current study, dogs were treated once orally with fluralaner chewable tablets (Bravecto™). Fluralaner, a new ectoparasiticide in the novel isoxazoline compound class, elicits its primary action through tick feeding activity and provides a duration of efficacy over 12 weeks, resulting in the immediate and persistent killing of ticks and fleas on dogs [8]. As fluralaner is a systemically acting ectoparasiticide, its efficacy depends on ticks attaching to the host's skin and commencing to feed, thereby ingesting the active compound. Because of its rapid speed of kill within 12 h after tick attachment [9], the potential of orally administered fluralaner to prevent B. canis transmission was tested in the outlined study.
Study set-up
The study was conducted in accordance with Good Clinical Practice (VICH guideline GL9, Good Clinical Practice, EMA, 2000) and in compliance with the South African National Standard "SANS 10386:2008: The care and use of animals for scientific purposes"; ethical approval was obtained from the ClinVet Animal Ethics Committee (CAEC) before study start. The study was performed as a negative-controlled, partly blinded, randomized efficacy study.
Sixteen mixed breed dogs (8 males, 8 females) tested negative for babesial DNA by PCR analysis and negative for B. canis antibodies (IFAT) before treatment were used. All dogs included were between 1 and 8 years of age and weighed between 13.2 and 26.9 kg. Each dog was in good health; had not been treated with any parasite control product within 3 months prior to a 7-day acclimatization period; did not harbour any ticks before treatment; and was uniquely identified by a microchip number.
Prior to randomization, dogs were clinically examined and weighed. Dogs were ranked within gender by descending body weight and randomly allocated to two study groups (one treatment and one control group) of 8 dogs each using a computer-generated randomization list.
All dogs were kept indoors and housed individually during the study course. Temperature in the dog housing facility ranged between 15.1 and 27.9 °C and the relative humidity between 21.9 and 66.4 %. All dogs were fed a standard commercially available dry dog food once daily and drinking water was provided ad libitum.
On day 0 (i.e., day of treatment), dogs in the treatment group were treated once orally with fluralaner chewable tablets (Bravecto™) according to label recommendations. Each dog received half of its daily food ration approximately 20 min before administration of treatment and the balance directly after treatment. The chewable tablet was administered by placement in the back of the oral cavity over the tongue to initiate swallowing. Each treated dog was continuously observed for 1 h after administration to monitor for vomiting or tablet spit-out, neither of which occurred in any of the 8 dogs. Dogs in the control group remained untreated. Specific health observations on day 0 were performed hourly, up to 4 h after administration, on all dogs (treated and untreated).
Tick infestations and assessments
A laboratory-bred D. reticulatus tick isolate (European origin) infected with B. canis was used for each infestation. Tick infestations were conducted on all dogs on days 2, 28 (4 weeks), 56 (8 weeks), 70 (10 weeks), and 84 (12 weeks). One sample of D. reticulatus ticks (approximately 50) was taken from each batch of ticks used for each infestation to determine the percentage of infection with B. canis by PCR analysis. At each infestation time point, each dog was infested with 50 (± 4) viable, unfed adult D. reticulatus ticks (50 % female; 50 % male). Dogs were not sedated for infestation, but during each infestation every dog was placed in an infestation restrainer measuring 90 × 80 × 70 cm (L × W × H) and ticks were manually applied to the animal's fur; thereafter the animals were restrained for approximately 10 min. During this time, ticks that fell off the animal were re-applied. After 10 min the infestation restrainer was closed, and after 4 h (± 10 min) the dog was released into its cage.
Tick in situ thumb counts were performed on each dog at 48 ± 4 h post each infestation (i.e., on days 4, 30, 58, 72, and 86), but ticks were not removed. On day 90, all remaining ticks on each dog were removed and counted. The personnel conducting tick infestation, tick in situ counting and tick removal on day 90 were blinded to the treatment status of each dog.
To monitor each dog closely for any signs of canine babesiosis, each dog was physically examined by a veterinarian at 7-day intervals until completion of the study (i.e., days 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98, 105, and 112). Starting on day 8 after treatment, the rectal body temperature of each dog was measured thrice weekly. General health observations, noting the dog as normal or abnormal, were performed once daily throughout the complete study duration (i.e., starting 7 days prior to treatment until day 112 after treatment). If a dog was noted as abnormal or its rectal body temperature was at or above 39.4 °C, an additional physical examination by a veterinarian was performed. If one or more parameters were observed as abnormal during this examination, a blood sample was collected for blood smear preparation. The blood smear was air dried and stained with a Diff-Quick stain kit prior to evaluation.
Blood for serology (IFAT) and PCR analysis
Blood samples for serum analysis for B. canis antibodies were collected on a 14-day interval, starting on day 14 after treatment. IFATs were performed using the "MegaScreen® FLUOBABESIA canis" commercial kit. All sera were diluted at 1:80 and were recorded as positive if specific fluorescence was observed, or negative if no fluorescence was observed.
Blood samples for PCR analysis regarding B. canis DNA were collected in EDTA tubes from each dog on a 7-day interval. Total genomic DNA was isolated from whole blood samples using a commercial genomic DNA isolation kit (GeneJet Genomic DNA Purification kit, Thermo Scientific). PCR entailed the use of primers Babesia2F (5′-GGAAGGAGAAGTCGTAACAAGGTTTCC-3′) and Bcanis2R (5′-CAGTGGTCACAGACCGGTCG-3′) with combined specificity to the B. canis ITS1 region of the DNA in order to amplify a target region of 302 bp [10]. Up to 400 ng isolated DNA served as template for PCR amplification of the target region. PCR products were analysed using agarose gel electrophoresis. A PCR product of approximately 300 bp indicated the presence of the target region in the sample. To verify PCR success in each individual tube, positive, negative, no templates as well as internal amplification controls were included in each run.
Rescue treatment
A blood sample for blood smear preparation was collected from each dog displaying a rectal body temperature at or above 39.4 °C, a positive PCR result for babesial DNA, or clinical signs of babesiosis observed during physical examination. Dogs confirmed positive for B. canis protozoa by blood smear were rescue treated, receiving diminazene (Berenil; MSD Animal Health) at a dosage of 1 mL/20 kg body weight on the first day and imidocarb (Forray 65; MSD Animal Health) at a dosage of 1.2 mL/20 kg body weight on the next day. A rescue-treated dog remained part of all health observations (i.e., general health, rectal body temperature assessment, physical examination) but was not subjected to subsequent tick infestations. This dog was moved from the study housing facilities to an outdoor run exposed to ambient environmental conditions, and group housed until final study exclusion. Before moving, all ticks were removed. Blood samples for PCR and IFAT were collected, and after confirmation of a babesial infection by both analysis methods, the dog was finally excluded from the study.
Replacement dogs
Because of the several planned tick infestation time points during the study (i.e., tick infestations on days 2, 28, 56, 70, and 84), replacement dogs were included as required, in addition to the 8 dogs initially included in the control group, to replace animals in this group rescue treated for babesiosis. Before study inclusion, a replacement dog was acclimatized for 7 days prior to its first tick infestation. During this period the dog was tested negative for B. canis by PCR analysis and IFAT, and a physical examination by a veterinarian was performed. Replacement dogs were not randomized, but whenever possible a replacement dog had the same sex as the control dog it replaced.
Efficacy evaluation
The statistical analysis was performed using the software package SAS® (SAS Institute Inc., Cary, NC, USA, release 9.3). The individual dog was the experimental unit in all statistical calculations. Data from each tick in situ thumb count time point were analysed separately.
The percentage of efficacy against ticks was calculated for the treatment group at each assessment time point using geometric means with Abbott's formula:
Efficacy (%) = 100 × (MC - MT)/MC, where MC was the mean number of total live attached ticks on untreated control dogs and MT the mean number of total live attached ticks on treated dogs. In case of zero counts, the geometric mean was calculated as follows:
$$ x_g = \left( \prod_{i=1}^{n} \left( x_i + 1 \right) \right)^{\frac{1}{n}} - 1 $$
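For illustration, the efficacy computation described above is easy to reproduce. The following Python sketch (the study itself used SAS 9.3; function names and the example counts are hypothetical) applies the zero-count-adjusted geometric mean and Abbott's formula:

```python
import math


def geometric_mean(counts):
    """Geometric mean with the (x + 1) adjustment used when zero counts occur."""
    n = len(counts)
    return math.exp(sum(math.log(x + 1) for x in counts) / n) - 1


def abbott_efficacy(control_counts, treated_counts):
    """Efficacy (%) = 100 * (MC - MT) / MC on geometric-mean live tick counts."""
    mc = geometric_mean(control_counts)
    mt = geometric_mean(treated_counts)
    return 100.0 * (mc - mt) / mc


# Hypothetical in situ thumb counts for one assessment day (not study data):
print(round(abbott_efficacy([18, 22, 15, 25], [0, 0, 1, 0]), 1))
```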
Significant differences were assessed between the log-counts of live attached ticks in the treated group at each assessment time point and the log-counts of the untreated control group. Study groups were compared using a linear mixed model including study group as a fixed effect and block as a random effect. The two-sided level of significance was declared when P ≤ 0.05.
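A hedged sketch of this group comparison in an open-source stack (the study used SAS; statsmodels' mixedlm is substituted here, and the data frame below is fabricated for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated example data: one live tick count per dog, blocks from randomization.
df = pd.DataFrame({
    "count": [18, 22, 15, 25, 0, 0, 1, 0],
    "group": ["control"] * 4 + ["treated"] * 4,
    "block": [1, 2, 3, 4, 1, 2, 3, 4],
})
df["log_count"] = np.log(df["count"] + 1)  # log-counts; +1 handles zeros

# Linear mixed model: study group as fixed effect, block as random effect.
result = smf.mixedlm("log_count ~ group", df, groups=df["block"]).fit()
print(result.summary())  # two-sided significance declared at P <= 0.05
```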
The percentage of preventive effect against B. canis transmission for the treatment group was calculated as follows: Preventive effect (%) = 100 × (TC - TT)/TC, where TC is the total number of infected dogs in the untreated group and TT is the total number of infected dogs in the treated group. A dog was regarded infected with B. canis, if it was tested serologically positive for B. canis antibodies (IFAT) and positive for B. canis DNA in PCR assay.
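The preventive-effect formula translates directly; a one-line sketch (with this study's group sizes as example input):

```python
def preventive_effect(infected_control, infected_treated):
    """Preventive effect (%) = 100 * (TC - TT) / TC."""
    return 100.0 * (infected_control - infected_treated) / infected_control


print(preventive_effect(8, 0))  # all 8 control dogs infected, 0 treated -> 100.0
```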
No treatment-related adverse events were observed in any of the 8 dogs treated orally with fluralaner during the 12-week post-treatment observation period. In total, 19 replacement dogs (10 male, 9 female) were included in the control group throughout the study, ensuring that at each tick infestation time point the control group consisted of 8 animals; this was possible for all infestation time points except the last one on day 84. For the tick challenge on day 84, only 6 control animals were available, of which two tested positive by blood smear and PCR analysis on day 85 and were rescue treated, so that for tick in situ thumb counting on day 86 the control group consisted of 4 animals. Efficacy for day 86 and day 90 was therefore calculated with 4 control dogs. The mean tick counts and the detailed efficacy against ticks are shown in Table 1. An efficacy against ticks between 99.2 and 100 % was achieved at each assessment time point after single oral fluralaner treatment.
Table 1 Mean tick counts and efficacy against ticks after single oral treatment with fluralaner chewable tablets
None of the dogs treated with Bravecto™ chewable tablets developed any clinical signs consistent with babesiosis. Dogs in the control group developed clinical signs consistent with babesiosis, such as pale mucous membranes, rectal body temperature at or above 39.4 °C, depressed/listless general behaviour, enlarged lymph nodes and enlarged spleen.
Rectal body temperatures were measured for each dog thrice weekly. In 19 of 27 control dogs, a rectal body temperature at or above 39.4 °C was measured at one or more measurement time points during the study. In the treated group, a rectal body temperature at or above 39.4 °C was measured once, in 1 of 8 dogs, 17 days after treatment. This elevated rectal body temperature was not confirmed to be related to an infection with B. canis, as blood analysis results (blood smear, PCR and IFAT) for this animal were negative for B. canis throughout the study period (see Table 2).
Table 2 Number of dogs with increased rectal body temperature (RBT) and number of dogs tested positive for B. canis via blood smear, PCR and IFAT
At each infestation time point, 12–16 % of ticks were found to be infected with B. canis by PCR analysis. The infection model was regarded as valid, as all dogs in the untreated control group became infected with B. canis, confirmed positive by blood smear, for babesial DNA by PCR analysis, and for B. canis antibodies by IFAT after the first or a subsequent tick infestation. None of the treated dogs became infected with B. canis during the complete study duration, as confirmed by the absence of B. canis antibodies in the IFAT and negative test results for babesial DNA by PCR analysis at all scheduled blood analysis time points up to 4 weeks after the last tick infestation (see Table 2). A 100 % preventive effect against the transmission of B. canis by infected D. reticulatus ticks was achieved after single oral fluralaner treatment (see Table 3).
Table 3 Preventive effect against the transmission of B. canis by D. reticulatus ticks
The blocking of pathogen transmission (preventive effect) to dogs through the bite of vector ticks has become an increasing demand of pet owners and veterinarians when evaluating the capacity of anti-tick compounds.
Canine babesiosis is one of the most clinically significant and eminent tick-borne diseases [5], and was therefore used as the study model to determine the ability of fluralaner to prevent the transmission of B. canis by infected D. reticulatus ticks. B. canis protozoa infect the red blood cells, causing disease ranging from mild to severe clinical signs and, if untreated, death. For this reason, dogs in the untreated control group were immediately rescue treated after they had tested positive by blood smear. For these rescue-treated dogs, replacement dogs were included to maintain a sufficient number of at least 6 dogs in the control group for statistical analysis, as required by the guideline for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestations on dogs and cats [11].
Fluralaner is the first orally administered compound with systemic activity providing a duration of efficacy over 12 weeks against ticks and fleas [8, 12]. Until 2014, tick control compounds for dogs were available as spot-ons, sprays or collars, exhibiting their tick-killing efficacy via the blood meal and/or contact exposure/repellency [13]. In the speed-of-kill studies by Wengenmayer et al. [9], it was demonstrated that orally administered fluralaner starts to kill ticks present on the dog as early as 4 h (89.6 %), showing almost complete tick-killing efficacy within 12 h over the entire 12-week period of efficacy. These results are confirmed in this study by the excellent efficacy results against ticks (see Table 1). As fluralaner elicits its primary action through feeding activity, a protective effect against pathogen transmission is less obvious: the efficacy of fluralaner depends on ticks attaching to the host's skin and commencing feeding, thereby ingesting the active compound before being killed [8]. The transmission time of B. canis from infected D. reticulatus ticks is given by Heile et al. [14] as 48–72 h after tick attachment. The tick's attachment to the host's skin starts the maturation of the sporozoites located in the salivary glands of the tick; a few days after the tick has attached, pathogen transmission through the tick's saliva causes host infection [15]. Due to its rapid tick-killing effect, fluralaner effectively prevented the transmission of B. canis from infected D. reticulatus ticks to the dogs (Table 3). Fluralaner chewable tablets demonstrated an efficacy against ticks between 99.2 and 100 % over the entire 12-week study duration.
An active ingredient with a longer re-treatment interval such as fluralaner reduces the risk of treatment failure as a consequence of poor owner compliance with monthly treatment recommendations. Owner compliance is an important component of successful control and prevention of tick infestations during tick season. This study demonstrated that treatment with fluralaner chewable tablet is not only effective against ticks and protects the dog against pathogen transmission, but also remains effective over a 12-week period following treatment. Moreover, in addition to its efficacy against D. reticulatus, fluralaner is effective for the same period of time against other ticks and fleas that may concomitantly infest these animals [12, 16].
Single oral administration of fluralaner chewable tablets (Bravecto™) to dogs prevented the transmission of B. canis by infected D. reticulatus ticks by 100 % over 12 weeks. An efficacy against ticks between 99.2 and 100 % was achieved over the entire 12-week study duration. The long re-treatment interval of fluralaner chewable tablets offers more convenience over monthly tick-control treatments, with a potential compliance advantage.
Solano-Gallego L, Baneth G. Babesiosis in dogs and cats–expanding parasitological and clinical spectra. Vet Parasitol. 2011;181:48–60.
Baneth G, Florin-Christensen M, Cardoso L, Schnittger L. Reclassification of Theileria annae as Babesia vulpes sp. nov. Parasit Vectors. 2015;8:207.
Irwin PJ. Canine babesiosis: from molecular taxonomy to control. Parasit Vectors. 2009;2 Suppl 1:S4.
Jongejan F, Uilenberg G. The global importance of ticks. Parasitology. 2004;129(Suppl):3–14.
Irwin PJ. It shouldn't happen to a dog… or a veterinarian: clinical paradigms for canine vector-borne diseases. Trends Parasitol. 2014;30:104–12.
Little SE. Changing paradigms in understanding transmission of canine tick-borne diseases: the role of interrupted feeding and intrastadial transmission. Mazara del Vallo, Sicily, Italy: 2nd Canine Vector-Borne Disease (CVBD) Symposium 2007, p. 30–4.
Bravecto EPAR summary for the public. European Medicines Agency. [http://www.ema.europa.eu/docs/en_GB/document_library/EPAR_-_Summary_for_the_public/veterinary/002526/WC500163861.pdf].
Wengenmayer C, Williams H, Zschiesche E, Moritz A, Langenstein J, Roepke R, et al. The speed of kill of fluralaner (Bravecto) against Ixodes ricinus ticks on dogs. Parasit Vectors. 2014;7:525.
Beugnet F, Halos L, Larsen D, Labuschagne M, Erasmus H, Fourie J. The ability of an oral formulation of afoxolaner to block the transmission of Babesia canis by Dermacentor reticulatus ticks to dogs. Parasit Vectors. 2014;7:283.
Marchiando AA, Holdsworth PA, Green P, Blagburn BL, Jacobs DE. World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) guideline for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestations on dogs and cats. Vet Parasitol. 2007;145:332–44.
Rohdich N, Roepke RK, Zschiesche E. A randomized, blinded, controlled and multi-centered field study comparing the efficacy and safety of Bravecto (fluralaner) against Frontline (fipronil) in flea- and tick-infested dogs. Parasit Vectors. 2014;7:83.
Blagburn BL, Dryden MW. Biology, treatment, and control of flea and tick infestations. Vet Clin North Am Small Anim Pract. 2009;39:1173–200.
Heile C, Heydorn AO, Schein E. Dermacentor reticulatus (Fabricius, 1794)–distribution, biology and vector for Babesia canis in Germany. Berl Munch Tierarztl Wochenschr. 2006;119:330–4.
Uilenberg G. Babesia–a historical overview. Vet Parasitol. 2006;138:3–10.
Williams H, Young DR, Qureshi T, Zoller H, Heckeroth AR. Fluralaner, a novel isoxazoline, prevents flea (Ctenocephalides felis) reproduction in vitro and in a simulated home environment. Parasit Vectors. 2014;7:275.
The authors would like to thank all the staff at ClinVet for their assistance and contribution to this study.
MSD Animal Health Innovation GmbH, Zur Propstei, 55270, Schwabenheim, Germany
Janina Taenzler, Rainer K.A. Roepke & Anja R. Heckeroth
ClinVet International, Uitsigweg, Bainsvlei, 9338, Bloemfontein, Free State, South Africa
Julian Liebenberg
Janina Taenzler
Rainer K.A. Roepke
Anja R. Heckeroth
Correspondence to Janina Taenzler.
JL is employed at ClinVet and all other authors of this paper are employees of MSD Animal Health. The study was conducted as part of a research program to evaluate the potential of fluralaner to inhibit the transmission of pathogens to hosts after tick attachment after oral fluralaner treatment.
The study design, protocol and report of the study were prepared by JT, JL, AH, and RR. JL and his team at ClinVet were responsible for the animal phase, data collection, and statistical calculations. All authors revised and approved the final version.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Taenzler, J., Liebenberg, J., Roepke, R.K. et al. Prevention of transmission of Babesia canis by Dermacentor reticulatus ticks to dogs treated orally with fluralaner chewable tablets (Bravecto™). Parasites Vectors 8, 305 (2015). https://doi.org/10.1186/s13071-015-0923-1
Bravecto™
Babesia canis
Chewable tablets
Fluralaner
Dermacentor reticulatus
Preventive effect
Tick-borne disease
Transmission blocking
Volume 8 Issue 2 pp. 239-254 • doi: 10.15627/jd.2021.19
Measurement, Simulation, and Quantification of Lighting-Space Flicker Risk Levels Using Low-Cost TCS34725 Colour Sensor and IEEE 1789-2015 Standard
Sivachandran R. Perumal,* Faizal Baharum
School of Housing, Building & Planning, Universiti Sains Malaysia, 11800 Pulau Pinang, Malaysia
*Corresponding author.
[email protected] (S. R. Perumal)
[email protected] (F. Baharum)
History: Received 24 April 2021 | Revised 14 June 2021 | Accepted 6 July 2021 | Published online 26 August 2021
Copyright: © 2021 The Author(s). Published by solarlits.com. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Citation: Sivachandran R. Perumal, Faizal Baharum, Measurement, Simulation, and Quantification of Lighting-Space Flicker Risk Levels Using Low-Cost TCS34725 Colour Sensor and IEEE 1789-2015 Standard, Journal of Daylighting 8 (2021) 239-254. https://dx.doi.org/10.15627/jd.2021.19
Building owners are transitioning towards smart lighting solutions for illumination purposes. LED (Light Emitting Diode) lighting has become the norm given its high efficacy and energy efficiency. This paper presents an approach to monitor the percent flicker conformance of interior building lighting to international standards, focusing on flicker induced by LED lighting. The experiment utilises a TCS34725 RGB (red, green, blue) colour sensor to measure the flicker parameters of interior lighting spaces. Light-sensitive photodiodes in the sensor detect changes in lighting intensity and output digitised values. A Raspberry Pi 4 minicomputer processes the measured data for comparison against several standards. Non-conformance is reported to building owners so that corrective actions can be taken and building occupants' exposure to flicker discomfort minimised. A flicker risk level factor is determined to gauge the severity when flickers are present. This method may be used to replace luminaires or fix flickering lighting issues in buildings. The results show that the monitoring system is functional. The proposed measurement and data processing method can be incorporated into any smart building hub for automation and building performance analysis. The method may also be used to measure non-LED lighting flickers.
Flicker, Percent flicker, Flicker index, IEEE 1789-2015
1. Introduction

Today, the notion of energy efficiency and energy conservation measures is catalysing building owners and lighting designers to utilise LED lighting for illumination purposes. The widespread conversion to LED lighting, which yields favourable energy-saving results, has made this approach the most popular amongst other measures due to a faster return on capital investment [1]. Research on lighting and the built environment primarily focuses on illuminance and correlated colour temperatures. However, one of the often-overlooked lighting parameters is flicker caused by sub-par LED lighting devices. Severe flickers are noticeable by the average human eye, whereas high-frequency flickers exist in almost all solid-state lighting, which is a concern to be tackled. Unaddressed flicker issues may lead to Sick Building Syndrome over time, which every organisation tends to avoid.
Lighting flickers are defined as rapid and repeated shifts in light intensity [2]. Flicker can be graded as visually perceptible or not based on its frequency. Moreover, when the light source and the observer move relative to each other, flicker occurs – the stroboscopic effect. Temporal Light Artefacts (TLA) are unwanted lighting effects caused by variations in light output, and flicker is a type of temporal light modulation (TLM) [3]. Humans do not detect such flickers consciously, but they are processed subliminally by the average human brain. They affect visual and cognitive performance, and adverse health effects are seen in some cases [4].
Fluorescent lighting, for example, is powered by an alternating current (AC) mains supply that varies over time (50 Hz or 60 Hz). As a result, the light output follows the same pattern to turn on and off due to the time-varying source, causing flicker. Flicker in the lighting area can cause seizures, migraines, headaches, and being visually unpleasant or constantly distracted [2]. As a result, flickers are harmful to one's well-being.
Electrical transformers were once used to step voltages up or down to control interior lighting. The AC mains input voltages exhibit flicker characteristics due to their low frequencies – power-line flicker. Today, with advances in power electronics technology, a direct current (DC) source drives LED lighting. A DC source-driver at the lighting output reduces flickering to appropriate levels, improving occupant visual comfort. The Pulse Width Modulation (PWM) technique is used to control lighting intensity levels [5]. Though lighting equipment moves toward modern LED lighting standards, source flicker can still exist at the lighting end, for example through PWM.
This paper aims to tackle subliminal flicker by devising a lighting-space flicker detection system using a low-cost RGB colour sensor. The system alerts building owners when non-conformance against international flicker regulations occurs, specifically the IEEE 1789-2015 standard. Moreover, the system produces a refined risk level factor quantifying marginally risky luminaires, which benefits building owners in corrective action plans. The scope of the research is to test a small LED-lit room, limited to a single sensor unit, using a minicomputer that acts as a building monitoring server. Upon successful flicker detection and risk level analysis, expansion of the system onto large building environments will be proposed, such as integration into Building Monitoring Systems. The proposed system may be used in any built environment that has lighting systems. Figure 1 shows the steps taken to conduct this research.
Fig. 1. Research conduction steps.
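As an illustration of the measurement front end, the sketch below samples the sensor's clear channel in a tight loop. It assumes the Adafruit CircuitPython TCS34725 driver on a Raspberry Pi (the paper does not publish its code, so names and settings here are plausible assumptions); note that even at the shortest 2.4 ms integration time the effective sampling rate is only a few hundred hertz, which bounds the flicker frequencies that can be resolved.

```python
import time
import board
import busio
import adafruit_tcs34725  # Adafruit CircuitPython driver (assumed available)

i2c = busio.I2C(board.SCL, board.SDA)
sensor = adafruit_tcs34725.TCS34725(i2c)
sensor.integration_time = 2.4   # shortest supported integration time, in ms
sensor.gain = 4

samples, stamps = [], []
t0 = time.monotonic()
while time.monotonic() - t0 < 1.0:        # capture roughly one second
    r, g, b, clear = sensor.color_raw     # raw channel counts
    samples.append(clear)                 # clear channel tracks luminance
    stamps.append(time.monotonic() - t0)

print(f"captured {len(samples)} samples (~{len(samples)} Hz effective rate)")
```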
2. Related studies

In a study to detect flickers and stroboscopic effects caused by LED lighting on a work table, the authors simulated flickers through their control parameters rather than measuring them [6]. In other words, they manipulated input parameters such as a user-determined current source and lighting intensity through the Pulse Width Modulation (PWM) technique for LED lighting; conventional lamps were made to produce flickers by varying input voltages. Flickers were thus not detected through external sensors but calculated from input source data. In practical applications, however, the output may not always match the desired inputs, as intrinsic noise in the system may cause deviations. Lamps, whether LED or conventional, are open-loop systems, and their photometric details are usually listed in the device's datasheet. However, low-end lighting on the market sometimes does not come with factory measurement data.
One study noted that the appropriate parameter to measure for flicker is luminous flux (lumens), but that there is no standard procedure to measure it [7]. In another study, lighting waveform flicker was manually calculated using measurement data from selenium cell (sensor) and oscilloscope readings [8]. However, the researchers determined flickers from multiple individual commercial LED lamp sources rather than from the whole space or room. Similarly, in another work measuring single-lamp flickers, the researchers used photodiode (TSL257 sensor with additional circuitry) and oscilloscope combinations to generate the flicker waveform; fluctuations in the photodiode's voltage drop were plotted on the oscilloscope and analysed [9].
Certain studies have used dedicated flicker measurement instruments such as IEC flicker meters and luminance meters [10]. However, these are standalone pieces of equipment and need calibration before use. Moreover, in a built-environment application, manual measurement has to be done room by room and becomes tedious. In conclusion, previous research has shown that combining a photodiode sensor and a detection system as a whole can automate these procedures. Having a centralised system that monitors flickers and alerts building owners on the go would be beneficial when safety risk assessments are done.
2.1. Flickers in LED lighting
Today's widespread use of LED lighting necessitates the development of new methods for assessing lighting flicker. The LED driver determines the flicker and dimming efficiency of LED lighting. Dimmers and other electronics can induce or increase lighting flicker. Lighting devices that use AC-driven LEDs are more likely to flicker. Due to inadequate filtering capacitors, DC-driven LEDs with simple or inexpensive drivers often cause systemic flickering [3,4,8]. Capacitors consume space on electronic boards. Complex electronics, such as phase-cut dimmers (triacs) and pulse width modulation (PWM) techniques, can pass on switching noise from the electronics to the lighting output in the form of flicker. Switching noise is created by high operating frequencies in Switching Mode Power Supplies (SMPS) [11].
2.2. Task performance and health issues from lighting flickers
Flickers from artificial lighting can cause a variety of health problems, among them neurological disorders such as epileptic seizures, headaches, nausea, blurred vision, eyestrain, and migraines [1-4,8,12]. Flickers are known to reduce task efficiency and effectiveness. Studies have also linked flicker exposure to an increase in autistic behaviours, especially in children. The stroboscopic effect causes motion to appear to slow or stop. Flicker discomfort detracts from one's well-being and efficiency at work, and in severe cases it can be life-threatening [4].
The effects are exacerbated when a subject is exposed to flickers for prolonged periods: repeated stimulation of a retinal field in the human eye magnifies the adverse health effects [1-4]. Furthermore, the effects are stronger when the flicker source is in the middle of the field of vision, since it projects to a broad region of the visual cortex, even though the flickering is less visible there. In addition, the quantity of light influences flicker effects: high luminances in the mesopic and photopic regions produce a higher health risk [4]. Flickers become more noticeable when the brightness variation is high, and subjects are more vulnerable to health issues when the contrast ratio between the flicker source and the environment is high. Colour contrast variance of the flicker source in the red light channel is considered the worst of all.
2.3. Flicker metrics
The Illuminating Engineering Society of North America (IESNA) is a non-profit organisation that works on lighting standards. According to IESNA, there are two key metrics to identify flickers: percent flicker and flicker index. Both metrics are defined in Eqs. (1) and (2) using parameters from Fig. 2. They are older but better known and more commonly used [7].
\[ PF=\frac{A-B}{A+B}\times100 \% \]
where PF is the percent flicker, expressed in (%), A is the maximum amplitude value of lighting waveform (max luminance), and B is the minimum amplitude value of lighting waveform (min luminance).
\[ FI=\frac{A_1}{A_1+A_2} \]
where FI is the flicker index, A1 is Area 1, the area above the average value as per Fig. 2, and A2 is Area 2, the area below the average value as per Fig. 2.
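Both metrics are straightforward to evaluate on a sampled waveform. The sketch below (hypothetical code, not the authors') assumes uniform sampling over whole periods, so the sample spacing cancels out of the flicker-index ratio; A2 is read, as in Fig. 2, as the remaining area under the curve below the average line.

```python
import numpy as np


def percent_flicker(waveform):
    """PF = 100 * (A - B) / (A + B), with A/B the max/min luminance (Eq. 1)."""
    a, b = float(np.max(waveform)), float(np.min(waveform))
    return 100.0 * (a - b) / (a + b)


def flicker_index(waveform):
    """FI = A1 / (A1 + A2) (Eq. 2): area above the average line over the
    total area under the curve, for uniform sampling of whole periods."""
    w = np.asarray(waveform, dtype=float)
    area_above = np.clip(w - w.mean(), 0.0, None).sum()   # A1
    return area_above / w.sum()                           # A1 + A2 = total area


# Hypothetical waveform: rectified sine riding on a DC offset.
t = np.linspace(0.0, 0.01, 1000, endpoint=False)
w = 0.6 + 0.4 * np.abs(np.sin(2 * np.pi * 100 * t))
print(percent_flicker(w), flicker_index(w))   # PF = 25.0 %
```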
Fig. 2. Flicker waveform.
Percent flicker is measured on a 0 to 100 % scale by considering the average and peak-to-peak amplitude measurements. Amplitude refers to the lighting intensity (luminance) of the waveform. They do, however, ignore the shape, duty cycle, and frequency of the lighting waveform [4,7].
On the other hand, the flicker index is calculated on a scale of 0 to 1.0. It is a more recent formulation, but the formula is less known and used. Waveform average, peak-to-peak amplitude, shape, and duty-cycle are taken into account by the flicker index calculation as per Eq. (2). The formula, however, does not take frequency into account [4,7].
2.4. Standards on safe lighting flicker ratings
There are several newer standards and guidelines associated with flicker ratings. Amongst the properties emphasised by the newer standards are the waveform's modulation frequency and amplitude, its DC component, and its duty cycle.
The IEEE PAR 1789-2015 standard is the primary source of reference in this article because it breaks down the flicker ratings into three levels of risk. Having ranges of risk, particularly high-risk, low-risk, and no-risk, enables this research to generate the refined risk-level factor, quantifying marginality.
2.4.1. IEEE PAR 1789-2015
In 2008, IEEE established the PAR 1789 technical committee to assess and address solid-state lighting (SSL) flicker risk issues; LED lighting is a form of SSL. In 2012, the committee released a paper outlining a Risk Assessment protocol as a best practice [4].
The risk evaluation matrix, shown in Fig. 3, segments the effects of flickers into several risk levels. Table 1 tabulates the degree of certainty for risk level with colour saturation ranging from green to red.
Fig. 3. IEEE 1789-2015 Risk Assessment Matrix (RAM).
Table 1. Colour codes from IEEE 1789-2015 RAM.
Potential significant adverse health effects of flicker are mapped into the risk matrix using various sources, including reliable evidence and field expert opinions. They are represented in Fig. 3 by the oval shapes.
A selection of recommended practices is adopted based on the risk analysis performed by mapping the risk assessment matrix. Maximum flicker ratings are described mathematically using the boundary conditions between the Low-Risk and Medium-Risk zones. The boundary's mathematical modelling is expressed in Eq. 3 [4].
\[ Max\ PF \le f_0 \times 0.08 \tag{3} \]
where Max PF is the maximum percent flicker, expressed in (%) and f0 is the operating frequency of the lighting waveform (dominant fundamental frequency).
The operating frequency of an SSL product must be greater than 100 Hz to use the IEEE 1789-2015 risk assessment chart, so the product must be reviewed to ensure it meets the application's requirements. The maximum permissible percent flicker is calculated by multiplying the operating frequency by 0.08 and rounding to the nearest integer [4]. The SSL product is acceptable if its percent flicker is below this permitted value; the requirement covers the general public except for the most susceptible individuals. If an SSL product's operating frequency cannot be obtained, the percent flicker must not exceed 10 %.
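As a minimal sketch of this screening rule (the function name and structure are illustrative; the 10 % fallback follows the prose above), the Eq. (3) check can be coded directly:

```python
def ieee1789_low_risk(percent_flicker, f0=None):
    """Check a waveform against the IEEE 1789-2015 low-risk limit of Eq. (3).

    percent_flicker: measured percent flicker in %.
    f0: dominant/operating frequency in Hz, or None if it cannot be obtained.
    """
    if f0 is None:
        # Without a known operating frequency, percent flicker must not exceed 10 %.
        return percent_flicker <= 10.0
    max_pf = round(f0 * 0.08)   # maximum permissible percent flicker
    return percent_flicker <= max_pf

# Example with the room waveform measured later in this paper (99 Hz, 2.0 %):
print(ieee1789_low_risk(2.0, f0=99))   # True: 2.0 % <= round(99 * 0.08) = 8 %
```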
A lighting waveform is further classified into three risk sub-levels, with Eq. (3) defining the maximum permissible percent flicker. Table 2 lists them in order of risk, from no risk to low risk to high risk, and Fig. 4 shows the criteria in graphical form.
Table 2. Risk level of flicker in IEEE 1789-2015.
Fig. 4. IEEE 1789-2015 risk zones.
The borderline between Low-Risk and Medium-Risk in Fig. 3 corresponds to the borderline between Low-Risk and No-Risk in Fig. 4. The IEEE 1789-2015 working group mapped this boundary into a modulation (%) versus frequency graph; the percent flicker of a waveform is also known as its modulation (%).
2.4.2. California joint appendix 8 (JA-8.4.6 and Table-JA-8)
California Joint Appendix 8 is another standard that categorises the flicker rating of a waveform as either PASS (acceptable) or FAIL (not acceptable) [13]. In summary, JA 8.4.6 (Table-JA-8) states that flicker is acceptable to the general public for waveform frequencies greater than 200 Hz; waveforms with a percent flicker below 30 % are likewise accepted. If neither condition is met, the waveform is deemed unacceptable or unsafe. Table 3 summarises the criteria of California Joint Appendix 8.
Table 3. Risk classification of JA8.
2.4.3. WELL building standard (L07 Part 2)
The flicker criteria in the WELL Building Standard resemble those in California Joint Appendix 8 [14]. A waveform is deemed PASS if a minimum frequency of 90 Hz is met at every 10 % lighting-output interval from 10 % to 100 % light. LED products operated below 90 Hz are also classified as PASS if their percent flicker stays below the low-risk level of 5 %. Any other non-conforming waveform is considered unsafe, or FAIL. Table 4 summarises the criteria.
Table 4. Risk classification of WELL L07 Part 2.
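As a rough sketch following only the summaries in Tables 3 and 4 above (not the full text of either standard; function names are illustrative), these pass/fail rules can be expressed as:

```python
def ja8_pass(f0, percent_flicker):
    """California JA8.4.6 check per the Table 3 summary: waveforms above
    200 Hz are acceptable; otherwise percent flicker must stay below 30 %."""
    return f0 > 200 or percent_flicker < 30.0

def well_l07_pass(f0, percent_flicker):
    """WELL L07 Part 2 check per the Table 4 summary: a minimum frequency
    of 90 Hz passes; below 90 Hz, percent flicker must stay below 5 %."""
    return f0 >= 90 or percent_flicker < 5.0
```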
2.4.4. Refined flicker risk level
The IEEE 1789-2015 standard divides the flicker rating of lighting into three levels: no risk, low risk, and high risk, graded in Fig. 4. By assigning decimal ranges within the low-risk level of the IEEE 1789-2015 chart, building management teams can make more precise decisions. Such precision is essential in facility management because those figures affect capital expenditure.
The risk level factor could be incorporated into other risk assessments by facility management teams. Risk evaluation in facility engineering can include safety, system failure, patient recovery in the healthcare industry, and many other things. A risk assessment's findings are typically presented to management and finance teams to obtain funding. These findings are necessary because the finance department needs concrete reasons for facility overhaul plans to be approved.
If risks are present at a facility, they affect its budgetary needs. For example, if the annual capital expenditure for building lighting maintenance is set at $1,000, multiplying by the flicker risk level factor (say, 0.321) awards the maintenance team an additional fund: $1,000 + (0.321 × $1,000) = $1,321. Annual capital costs for large buildings, such as factories and healthcare operations, run into the millions; thus, the few decimal figures introduced by the flicker risk level can represent thousands of dollars.
In most safety risk assessments, the likelihood (probability) and consequence (severity) variables are loosely set based on past experience and data clustering [15]. One benefit of a refined flicker risk level is that it helps break down the likelihood axis of the risk assessment matrix objectively. For example, a study rating the risk level of a spacecraft orientation subsystem used an objective function to map the probability scale, defining it from spacecraft system parameters [16]. Similarly, in other research aiming to quantify the probability scale of the risk matrix, the authors devised clear and continuous probability ranges by running Monte Carlo simulations of single indicators and then applying a copula model to calculate the joint risk probability of multiple indicators [17]. Finally, in a building fire risk assessment, researchers used event-tree analysis to make the probability scale definite rather than estimated [18]. Therefore, the refined risk level factor proposed in this paper further solidifies risk assessments for built environments where lighting flicker is a concern, by objectively scaling the probability axis.
3. Methods
The lighting flicker monitoring system is made up of three modules, shown in Fig. 5: a sensor module, a data IO (input/output) processing module, and an output module. A low-cost TCS34725 RGB colour sensor makes up the sensor module, and a Raspberry Pi 4 (RPi4) minicomputer processes the waveform data captured by the sensor. Finally, the flicker performance of a lighting space is displayed on a monitoring screen, and the results may be sent to building owners to alert them to non-conforming flicker performance. For this paper, the TCS34725 sensor and Raspberry Pi 4 were used; however, the procedures in this methods section can equally be replicated with other devices.
Fig. 5. Block diagram of modules for lighting control system.
Figure 6 depicts the flowchart of the measurement and simulation. In general, the sensor detects and measures the lighting intensity variations of the lighting space. Next, the data is used to generate and analyse waveforms, and flicker performance is assessed by comparison against industry standards. Finally, multiple statistical regression is used to evaluate the flicker level factor from the waveform-analysis data.
Fig. 6. Flowchart of lighting-space flicker risk determination.
3.1. TCS34725 sensor
Various light sensors on the electronics market today can detect lighting intensity fluctuations, ranging from low-cost to high-cost units. Sensors may come as standalone photodiode chips or as integrated circuits combining photodiodes with an analogue-to-digital converter (ADC). One commonly available low-cost sensor with a built-in ADC is the TCS34725 RGB colour sensor [19]. Online community support, in the form of application notes and driver libraries, is abundant. Compared with illuminance or colour sensors, direct luminance sensors are more expensive: a high-performance luminance meter costs over USD 300, while a TCS34725 sensor costs around USD 3.00 [9]. Despite its low cost, the sensor has been used successfully in robotics and other colour-detection applications and research [20,21].
3.1.1. Characteristics
Table 5 tabulates the characteristics of the TCS34725 RGB colour sensor. The sensor could read the intensity of lighting via photodiodes on four different channels. In addition, it has an integrated ADC that converts lighting intensity to digital values [19,22,23].
Also, lighting space illuminances (lx) and correlated colour temperatures (CCT) can be calculated using data from each channel. The sensor has a 400 kHz clock frequency. The clock frequency is an important consideration when choosing a sensor because low clock frequencies cannot sample high-frequency flickers.
3.1.2. Sensor application
Data from one channel is deemed sufficient for lighting flicker measurements, and the "clear" channel is chosen for the system. As Fig. 7 shows, the clear channel responds across the spectrum, effectively summing the red, green, and blue components. The other channels are filtered to output intensities at the dominant red, green, and blue wavelengths of the lighting spectrum; they are not used because they could distort measurements under lighting that uses a variety of CCTs. The clear-channel data therefore represent the spectral addition of the red, green, and blue components and are better suited to luminance and illuminance measurements, similar to the human perception of visible light.
Fig. 7. Photodiode spectral responsivity [19].
Lighting intensity data are stored in memory registers in the range 0 to 65,535 (16 bits, 2 bytes); only the lower 8-bit byte (256 decimal values) is of interest. The memory address for the clear channel is 0x14 (hex format). The measurement system uses a sensor driver library provided by Dexter Industries [23], and the code is written in Python. The driver library allows the minicomputer (RPi4) to access and control the sensor. Figure 8 depicts the flow for sensor initialisation, data retrieval, and offline storage. The loop in the flow keeps measuring and recording data for 2 seconds. In an actual built environment, the loop can be set to run every alternate minute to send flicker-rating reports to building owners; for the testing purposes of this research, it is run on demand.
Fig. 8. Sensor operation flow.
The temporary data stored in the memory array is sent to a comma-separated value (CSV) file once the loop in Fig. 8 is completed. Waveforms are processed and generated using data from the CSV file.
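Since the Dexter Industries driver's internal API is not reproduced here, the sketch below instead reads the clear channel directly over I2C with the smbus2 library; the device address (0x29), enable register (0x00), clear-data register (0x14), and command bits follow the TCS34725 datasheet [19], and the file name and 2-second window are illustrative:

```python
import csv
import time
from smbus2 import SMBus

I2C_ADDR = 0x29      # TCS34725 default I2C address
CMD = 0xA0           # command bit (0x80) with auto-increment protocol (0x20)
REG_ENABLE = 0x00
REG_CDATAL = 0x14    # clear-channel data, low byte

with SMBus(1) as bus:
    bus.write_byte_data(I2C_ADDR, CMD | REG_ENABLE, 0x01)  # PON: power on
    time.sleep(0.003)
    bus.write_byte_data(I2C_ADDR, CMD | REG_ENABLE, 0x03)  # PON | AEN: start the ADC

    samples = []
    t_end = time.time() + 2.0                  # measure for 2 seconds
    while time.time() < t_end:
        word = bus.read_word_data(I2C_ADDR, CMD | REG_CDATAL)  # 16-bit clear value
        samples.append((time.time(), word & 0xFF))             # keep the lower byte

with open("clear_channel.csv", "w", newline="") as f:
    csv.writer(f).writerows(samples)
```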
3.2. Measurements & data processing
3.2.1. Raspberry Pi 4 minicomputer
An RPi4 minicomputer is chosen because of its versatile size: it is about the size of a credit card and can be placed anywhere in a lighting room. The RPi4 runs the Raspbian operating system. The normalised intensity values collected from the sensor are stored in the RPi4's memory, then processed and analysed for further action.
The RPi4 has digital input/output pins that can receive signals from third-party sensors such as the TCS34725 [24-26]. The RPi4 and the TCS34725 communicate through the I2C protocol, a bit-by-bit serial communication interface that sends and receives data [25]. A large amount of data can take longer to transfer between the host (RPi4) and the client (sensor); therefore, care is taken to code the data retrieval optimally, since inefficient retrieval code may delay the read/write period and affect the sampling time. Figure 9 shows the schematic diagram of the RPi4 connected to the sensor, and Fig. 10 shows the breadboard-level connections of both units.
Fig. 9. Schematic diagram of RPi4 and TCS34725.
Fig. 10. Breadboard level connections of RPi4 and TCS34725.
3.2.2. Measurement procedure
Any lighting room contains a variety of lighting devices with various properties. For example, one bulb can have a higher lumen output than others, and the CCT ratings of different bulbs can differ too. Therefore, this article focuses on measuring the average lighting intensity of a room through the sensor's clear-channel photodiode (Fig. 7).
Two units of 6-inch circular LED lighting have been installed in the room. They are each rated 15 watts at 3000 K CCT. Their operating voltages range from 220 to 240 volts, and their maximum lumen output is 1200 lm at 50 Hz. An electronic driver is included with each LED fixture.
Figure 11 depicts the room layout and the LED lighting placements. The room has a large window and a door. The existence of windows or doors does not matter, as the measurement detects ambient lighting fluctuations in the room. The room's ambient lighting is a mixture of artificial lighting and natural daylighting during morning hours. If there is flicker from the LED lighting, it will still show in the lighting waveform through intensity changes, and the ambient flicker rating is later determined by waveform analysis through the percent flicker or flicker index. However, if severe flicker appears during measurement and analysis, it would have to be root-caused by building owners; for example, it may be due to external lighting introduced to the room through the open door. Occurrences such as this give building owners the opportunity to take corrective action when flicker non-conformance is reported.
Fig. 11. Room layout.
As daylight intensity fluctuations are negligible when there are no external stroboscopic influences, the measurements for this research were done at nighttime. Note that ceiling fans or other moving objects that obstruct lighting sources may cause flicker; the proposed flicker detection system is robust in that it will also detect flicker caused by such obstructions. Nighttime measurements put the focus on the installed LED lighting. For this experiment, the measuring equipment was positioned in the middle of the room.
When all is in place, the sensor measures the room's lighting intensity fluctuations and stores the information in the minicomputer. A time-series light source waveform is formed by taking continuous measurements for 2 seconds. It is possible to populate data for longer than 2 seconds, but this may add more noise. If much noise is present, determining the frequency of the waveform would be difficult.
3.2.3. Luminance and relative intensity
In most rooms, different surfaces have different colours and material properties. As a result, these surfaces reflect varying amounts of light, altering the lighting distribution in the room. The reflectance index of a lighting space affects its brightness (luminance) [8,27]; mathematically, the reflectance index is the ratio of reflected to incident light. Illuminance, on the other hand, is the measure of light falling onto a surface area; the illuminance on wall surfaces or tabletops, for example, results from luminances reflected off objects. For a fully diffusely-reflecting (Lambertian) surface, illuminance and luminance are linked as shown in Eq. (4) [28,29]. In contrast, typical lighting spaces need the reflectance index to balance the equation, due to the existence of non-Lambertian surfaces, as in Eq. (5).
\[ E=L\pi \tag{4} \]
\[ E\rho=L\pi \tag{5} \]
where E is the illuminance (lux), L is the luminance (cd/m²), ρ is the reflectance coefficient/index, and π is the mathematical constant pi (≈3.142).
For flicker measurements of discrete light sources, luminance, rather than illuminance, is the accurate parameter for determining flicker modulation [7]. This research nevertheless uses illuminance as the source for flicker ratings, for two reasons. First, the brightness perceived by occupants of a lighting space is illuminance; luminance decreases with distance from the light source (inverse-square law). Second, differences in the sensor's channel data (photodiode voltage fluctuations) are directly proportional to changes in lighting intensity and luminance. Since the room's reflectance index and π are constants in Eq. (5), the sensor channel's 8-bit data parameter is equal in magnitude and variance for flicker measurements. Thus, this research assumes the relative magnitude of luminance to be equivalent to illuminance: luminance is not measured directly but is normalised between 0 and 1 and used accordingly for flicker calculations. The same holds for changes in voltages or currents caused by a shift in the lighting waveform amplitude. In LED lighting technology, the LED driver can be either a constant-voltage (c.v.) or constant-current (c.c.) type; as such, Eq. (6) applies to AC or DC LED drivers in tandem with this assumption [4,7].
\[ \Delta E \propto \Delta A \propto \Delta V_{cc} \propto \Delta I_{cc} \tag{6} \]
where ∆E is the change in illuminance, ∆A is the change in lighting waveform amplitude, ∆Vcc is the change in the sensor's photodiode voltage (constant-current drivers), and ∆Icc is the change in the sensor's photodiode current (constant-voltage drivers).
3.3. Waveform generation & standard compliance check
The RPi4's memory storage is accessed to retrieve the digital data measurements corresponding to light intensity. The stored values are "words" (16-bit) rather than "bytes" (8-bit). A lighting waveform is generated from this data, with the amplitude variance plotted from minimum to maximum values. To express the extracted values as relative light intensity, they are first normalised between 0 (0 decimal) and 1 (255 decimal). Hence, a time-domain relative lighting-intensity waveform is generated.
All data processing and analysis use the Python scripting tool. Python has a large number of open-source libraries with data crunching and calculation features. The libraries used in this article are mainly from Python's ecosystem, specifically, NumPy and SciPy, which support mathematical and signal processing work, respectively [30,31]. In addition, Python's Matplotlib library is used to build graphical plots [32].
3.3.1. Sampling time
The sampling time must be determined before the waveforms can be produced. Sampling time refers to the interval between successive measurements. For the RPi4, the measurement interval is not constant, due to intrinsic noise in the RPi4 and sensor electronic circuitry. The sampling time is therefore determined by taking the average of the time differences between consecutive readings of the clear channel's memory register; Table 6 tabulates an example. The mean value is taken as the sampling time.
Table 6. Determination of sampling time.
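As a minimal sketch of this averaging step (assuming the CSV from the measurement loop stores a timestamp in the first column), the sampling time reduces to a mean of time differences:

```python
import numpy as np

data = np.loadtxt("clear_channel.csv", delimiter=",")
t = data[:, 0]                  # timestamps recorded by the measurement loop
dt = np.diff(t).mean()          # mean interval between consecutive readings
print(f"sampling time = {dt:.7f} s  ->  sampling rate = {1.0 / dt:.0f} Hz")
```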
3.3.2. Signal noise filtration
Noises from the waveform should be removed to achieve a smoother waveform shape. A Savitzky-Golay filtering technique is used to achieve this (Fig. 12) [30]. A Savitzky–Golay filter smooths the data by applying a digital filter to a series of data points. It also improves data accuracy without distorting the signal's properties. Curve smoothing is accomplished by mathematical function convolution. It involves fitting successive subsets of adjacent data points with a low-degree polynomial using the linear least-squares method. Python's SciPy library has a tool to automate this procedure.
Fig. 12. Savitzky-Golay filtering to remove noise.
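With SciPy, the filtering step reduces to a single call; the window length and polynomial order below are illustrative tuning choices, not prescribed values:

```python
import numpy as np
from scipy.signal import savgol_filter

data = np.loadtxt("clear_channel.csv", delimiter=",")
y = data[:, 1] / 255.0            # normalise the 8-bit values to 0..1
# Fit a cubic polynomial over a sliding 31-sample window (window length
# must be odd); this smooths the curve without distorting its shape.
y_smooth = savgol_filter(y, window_length=31, polyorder=3)
```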
3.3.3. Signal frequency
The waveform frequency is one of the most critical parameters to determine. It is found using the zero-crossing method (ZCM), and the ZCM estimate is verified for accuracy with the Fast Fourier Transform (FFT) method [30].
The ZCM detects the locations where the waveform crosses the zero-axis by scanning the amplitudes consecutively; what matters is whether each sample lies above or below the zero-axis, which is first normalised to the average of all amplitudes. Sign changes between consecutive amplitudes mark the crossings. Since a full cycle contains two zero crossings, dividing the crossing count by a constant factor of two gives the number of cycles in the record, and dividing by the record duration scales this to the actual frequency. Throughout, the sampling rate must satisfy the Nyquist frequency theorem, which states that the sampling rate must be at least twice the waveform's maximum frequency to digitise the waveform without aliasing.
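As a minimal sketch of this counting procedure (assuming the normalised samples and sampling rate from the previous steps; the function name is illustrative):

```python
import numpy as np

def zcm_frequency(y, fs):
    """Estimate the waveform frequency by counting zero crossings.

    y: relative-intensity samples; fs: sampling rate in Hz.
    """
    y0 = y - y.mean()                             # normalise the zero-axis to the mean
    crossings = np.count_nonzero(np.diff(np.sign(y0)))  # sign changes = crossings
    cycles = crossings / 2.0                      # two crossings per full cycle
    duration = len(y) / fs                        # record length in seconds
    return cycles / duration
```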
The FFT is a complementary method for determining the waveform frequency. A typical lighting-space flicker waveform contains many frequency components, including noise. The FFT transforms the time-domain signal into a frequency-domain signal, populating all the frequency contents of the signal via Fourier analysis. The relative intensity of each frequency component is known from the FFT and can be plotted in a graph; the most dominant frequency, with the highest intensity, usually represents the nominal propagating frequency of the waveform.
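The dominant-frequency search via the FFT can be sketched in a few lines with NumPy (the function name is illustrative):

```python
import numpy as np

def fft_dominant_frequency(y, fs):
    """Return the frequency with the highest spectral intensity."""
    spectrum = np.abs(np.fft.rfft(y - y.mean()))   # remove the DC component first
    freqs = np.fft.rfftfreq(len(y), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```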
3.3.4. Flicker from signal
The following formulas calculate the percent flicker and flicker index from the waveform generated [31,33].
\[ V_{pp}=V_{max}-V_{min} \tag{7} \]
\[ PF=\left(\frac{V_{pp}}{V_{max}+V_{min}}\right)\times 100\% \tag{8} \]
where Vpp is the peak-to-peak voltage of the waveform, Vmin is the minimum voltage of the waveform, Vmax is the maximum voltage of the waveform, and PF is the percent flicker (%).
Furthermore, because the measurements obtained by the sensor are voltage correlated, the Voltage (V) sign can be interchanged with Luminance (L), Illuminance (E), or Amplitude (A).
To evaluate the flicker index, the waveform is divided into a top half and a bottom half around the average value, similar to Fig. 2. The average value is the statistical mean of the amplitude values of the data points. The areas of the top and bottom halves are then calculated by integrating their data points over the time interval. These steps are applied to one cycle only.
\[ FI=\left(\frac{A_{top}}{A_{top}+A_{bottom}}\right) \tag{9} \]
where FI is the flicker Index, Atop is the area of the top half of the waveform above the average value, and Abottom is the area of the bottom half of the waveform below the average value
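Both metrics can be computed directly from the normalised samples; the sketch below (assuming NumPy arrays, the sampling rate fs, and the frequency f0 found earlier) uses trapezoidal integration for the two areas:

```python
import numpy as np

def percent_flicker(y):
    """Percent flicker from the waveform extremes, as per Eqs. (7)-(8)."""
    return (y.max() - y.min()) / (y.max() + y.min()) * 100.0

def flicker_index(y, fs, f0):
    """Flicker index over a single cycle, as per Eq. (9)."""
    n = int(round(fs / f0))                  # number of samples in one cycle
    cycle = y[:n]
    avg = cycle.mean()                       # the average-value axis
    t = np.arange(n) / fs
    area_top = np.trapz(np.clip(cycle - avg, 0, None), t)     # area above average
    area_bottom = np.trapz(np.clip(avg - cycle, 0, None), t)  # area below average
    return area_top / (area_top + area_bottom)
```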
The flicker percent and index are used to assess the standard compliance based on the criteria from Tables 2, 3 and 4.
3.4. Refining the low-risk region of the IEEE 1789-2015 chart
Figure 13 displays a continuum of minimum and maximum modulation values for all frequencies in the low-risk region, shaded yellow. A correlation is applied, through polynomial regressions on these minimum and maximum border values, to determine a precise risk level factor ranging from 0 to 1 [30]. The highest risk level (1) lies near the top zone, shaded red, and the lowest (0) near the bottom zone, shaded green. Waveforms that fall into the green or red zones are categorised as "no risk" or "high risk", respectively. Further correlations could be drawn within the red and green areas, but they are insignificant for this paper: flicker-free lighting must be maintained in all facilities, and if the output falls into the red zone, an urgent corrective action plan with emergency funds must be implemented to eliminate the health risks associated with lighting flicker.
Fig. 13. Maximum and minimum correlation.
Therefore, the yellow zone in Fig. 13 is normalised between 0 and 1, as per Eq. (10). The 0-to-1 normalisation output can be adjusted to accommodate various facility risk-assessment rating scales.
\[ \frac{b-a}{c-a}=\frac{x-0}{1-0}\Longrightarrow x=\frac{b-a}{c-a} \tag{10} \]
where a is the percent flicker (modulation) at the intersection of the dominant/operating frequency f with the borderline between the low-risk (yellow) and no-risk (green) zones; b is the calculated percent flicker (modulation) of the lighting space at the dominant frequency f; c is the percent flicker (modulation) at the intersection of f with the borderline between the low-risk (yellow) and high-risk (red) zones; and x is the refined flicker risk factor (in the marginal yellow zone).
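A minimal sketch of Eq. (10), assuming border models have already been fitted; the linear borders in the usage example are illustrative stand-ins (the paper instead fits polynomials to the digitised chart, Table 8), so the resulting factor will differ from the regression-based value reported in Table 9:

```python
import numpy as np

def refined_risk_factor(f, pf, bottom_border, top_border):
    """Normalise a low-risk-zone point to 0..1 via Eq. (10).

    bottom_border, top_border: callables giving the modulation (%) of the
    no-risk/low-risk and low-risk/high-risk borders at frequency f, e.g.
    np.poly1d objects fitted to the digitised borders of Fig. 13.
    """
    a = bottom_border(f)          # border between no-risk and low-risk (a)
    c = top_border(f)             # border between low-risk and high-risk (c)
    x = (pf - a) / (c - a)
    return float(np.clip(x, 0.0, 1.0))   # clamp points outside the yellow zone

# Hypothetical usage with simple linear borders (an assumption, for illustration):
bottom = np.poly1d([0.0333, 0.0])   # approximate no-risk border: 0.0333 * f
top = np.poly1d([0.08, 0.0])        # approximate low-risk border: 0.08 * f
print(refined_risk_factor(99.0, 2.0, bottom, top))
```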
4. Results
4.1. Hardware
Figures 14 and 15 show pictures of the measurement apparatus built. It is placed in the middle of the empty room, as described in Fig. 11.
Fig. 14. Room setup & measurement apparatus.
Fig. 15. Raspberry Pi4 minicomputer & TCS34725 colour sensor.
4.2. Measurements
Table 7 tabulates the first and last five data points extracted from the CSV file. Based on the time value in row 2, the sampling time is estimated at about 0.000514 seconds, corresponding to a frequency of 1945 Hz. However, the "Time" column in Table 7 shows that the sampling time is not constant between measurements. Therefore, the procedure of Table 6 is used to calculate an average value of 2014 Hz (0.0004965 seconds). The sampling time is determined by the sensor's and minicomputer's efficiency and is affected by a variety of factors such as temperature, CPU degradation, and others.
Table 7. Raw data measurements.
In Table 7, the "Value" column contains values in the range 0 to 255 (digitised values); specifically, they are the 8-bit data encoded during the analogue-to-digital conversion at the sensor. In other words, they represent the instantaneous lighting intensity. The readings show only small intensity differences, meaning the lighting waveform is relatively smooth.
4.3. Waveform outcome
Figure 16 (noise unfiltered) and Fig. 17 (denoised) show the waveform of the room lighting. For the digitised values, denoising the raw waveform is not recommended, since it raises the computed percent flicker by 0.3 percentage points, making the waveform appear more harmful when plotted on the IEEE 1789-2015 chart. Therefore, the unfiltered waveform is used in the subsequent steps.
Fig. 16. Room flicker waveform unfiltered for noise.
Fig. 17. Room flicker waveform (denoised).
Figure 16 shows a summary of the waveform in its lower-left corner. The zero-crossing method yields a frequency of about 99 Hz, consistent with a 50 Hz mains supply, in which flicker typically appears at twice the mains frequency. The percent flicker is calculated to be 2.0 %, and the flicker index is very low, rounding to zero. The pass/fail results for the flicker standards are shown at the lower right of Fig. 16. The FFT method confirms the waveform frequency at about 100 Hz, with the most dominant frequency determined as 99.96 Hz; Fig. 18 shows the frequency-domain analysis of the waveform.
Fig. 18. FFT of room lighting waveform.
In a nutshell, the room lighting waveform meets all three standard requirements, as summarised in the lower-right text of Figs. 16 and 17. Where a flicker rating fails a specific standard, especially the IEEE 1789-2015 criteria, building owners should undertake corrective action plans to rectify the flicker issues in the lighting space. Corrective actions range from changing the luminaires to safer (safety-certified) ones to vacating occupants from the room; these actions fall to the building management and safety teams, who assess and devise a strategy together with management. Where proposals must be put to finance or management teams, results such as the above may be used to substantiate budget requests for retrofitting lamps. In a later section, the refined risk level results provide a precise figure that can further validate budget requests when flicker ratings are marginal.
4.4. IEEE 1789-2015 Chart
Figure 19 shows the room flicker rating of 99 Hz at 2.0 % flicker (modulation) plotted on the IEEE 1789-2015 chart. Notably, the point lies on the dividing line between the low-risk and no-risk zones. This presents an opportunity to fine-tune the risk rating to decimal values.
Fig. 19. IEEE 1789-2015 flicker risk zones for room lighting.
4.5. Refined risk level output
Table 8 and Fig. 20 show how the borderline between the high-risk and low-risk zones (top border) is interpolated using regression to derive a polynomial function. Similarly, Fig. 21 shows the graphs for the bottom border.
Table 8. Regression parameters for border lines.
Fig. 20. Regression of top border of low-risk zone.
Fig. 21. Regression of bottom border of low-risk zone.
Therefore, the refined risk level factor for the room lighting, which was at 99 Hz, is determined using Eq. 10. Table 9 presents the outcome.
When looking at the point "room" in Fig. 19, the point appears to be in the middle of the yellow zone (low-risk region). However, the risk level factor is less than half because the chart is on a logarithmic scale.
4.6. Application of refined risk level factor as likelihood in risk assessment
The refined risk level factor may be used in many scenarios. As mentioned in Section: Refined Flicker Risk Level, one of the uses of the factor is to substantiate a budgetary request for corrective action plans. A test simulation using the refined risk level factor in a risk matrix assessment is presented in this section.
After measuring the flicker ratings and calculating the refined risk level factor in a built environment, the refined risk level factor is used as a likelihood variable in a risk assessment matrix, as shown in Table 10.
Table 10. Sample risk matrix for test simulation.
In Table 10, the red, yellow, and green cells represent the high-, low-, and no-risk levels, respectively.
The consequences (flicker effects on humans) of the hazards are tabulated in Table 11 (adapted from [4]).
Table 11. Hazard classification for test risk matrix simulation.
For the "room" point in Fig. 19, the refined risk level, calculated to be 0.1369, falls in the lowest category of the defined likelihoods in Table 10. By matching the symptoms that could occur to occupants in a building, as per Table 11, the risk level can be pinpointed and followed by corrective action plans if needed. For example, if an occupant complains of vomiting symptoms, the room lighting is rated low-risk; although "low-risk" may sound mild for a vomiting case, the flicker rating is still within the acceptable range for lighting flicker assessments, and other non-lighting factors may need to be root-caused by the building's facility and safety teams. On the same note, the low-risk rating lets facility teams start replacing the room's lamps in phases by requesting replacement budgets from management. The refined risk level factor helps put forward a monetary figure as substantiation for the replacement works. Table 12 summarises an example of a budget request.
Table 12. Budget request to include lighting replacement works in phases.
For high-risk ratings, however, immediate and urgent corrective action plans should be taken. When the rating is no-risk, continuous monitoring is recommended.
5. Discussions
5.1. Sampling time of the TCS34725
A few sensor driver libraries are available online, most notably from Adafruit Industries and Dexter Industries. The drivers' function for accessing the sensor memory to obtain the digital lighting-intensity data takes about 0.0025 seconds (400 Hz). The rate is low because the function performs read/write operations through the I2C protocol four times, collecting data for all four channels (red, green, blue, and clear), and reads other memory registers as well. For flicker measurements, only the "clear" channel data is required, so the driver library code was modified offline to access only that channel. After modification, the sampling time improves to 0.0004965 seconds (~2000 Hz). A higher sampling rate gives a more accurate rendering of the flicker waveform; a slower rate may distort the generated waveform because it represents the actual signal less faithfully, and some peaks and troughs may be missed.
The driver library also introduces a delay of 2.4 ms between consecutive readings. This delay, referred to as the integration time in the library, omits redundant measurements, and longer integration times increase sensitivity at low light levels. However, the room is not dark, and flicker measurement needs high-resolution sampling, so the delay was removed.
5.2. Sensor gain
The gain parameter is another sensor configuration. Gain amplifies the signal to a level the A/D converter can accurately scale. Higher gain settings amplify noise too, but without gain the sensor would be unable to differentiate small signals from ambient noise. The sensor has a 3.8-million-to-one dynamic range for A/D conversion resolution, and signals with small amplitude changes would use only a few bits of that conversion range; amplifying the signal therefore increases the precision of the measured values.
5.3. Building monitoring system (BMS) /building automation system (BAS) and internet of things (IoT)
Facility maintenance teams monitor and control building engineering parameters through BMS, BAS, and the like. Most monitoring systems deal with the building's air-conditioning system, fire protection system, electrical power status, process parameters, and others. Sensors or actuators attached to an engineering system communicate between the host (main server) and the client (engineering system) through various communication protocols, such as BACnet (Building Automation and Control networks), which operates on serial communication and data exchange over a local network. The flicker monitoring system is similar: the sensor and host (RPi4) communicate through the I2C protocol (serial communication). The RPi4 has network connectivity capabilities, and there are home automation libraries and software that use the RPi4 as a host, so residential application is possible, similar to the industry-standard BACnet. Therefore, by linking the RPi4 to the BMS/BAS server, data could be exchanged. This article, however, focuses only on the flicker measurement and simulation procedures.
With the advent of automation technologies and Internet accessibility, measurements and sensor data could be monitored remotely, where in some cases, actions could be taken. These are the concept of the Internet of Things (IoT), in which a flicker monitoring system could be integrated for occupant's well-being.
6. Conclusion
In summary, this paper aimed to detect lighting-space flicker. It described a lighting-space flicker measurement and monitoring system using a TCS34725 RGB colour sensor and an RPi4 minicomputer, which acts as a building automation server. Flicker measurement is achieved with the TCS34725 colour sensor, which detects lighting intensity variations through its photodiodes; changes in the photodiode readings are digitised by the sensor's internal ADC, and the minicomputer (RPi4) is used for waveform generation and analysis. The outcome of the system is a risk-level analysis of lighting flicker, presented in graphical form. Upon analysis, the test room lighting was found to comply with all three flicker standards, namely IEEE 1789-2015, JA8, and WELL. In addition, the room lighting fell in the marginal low-risk zone of the IEEE 1789-2015 standard, which, when further refined through polynomial regressions, produced a precise risk level factor. This factor may be used for budgetary needs or as a likelihood variable in typical risk matrices. The system can be integrated into smart-building lighting applications, where building owners are alerted to non-conforming flicker parameters.
The authors would like to state their gratitude to the Malaysian Ministry of Health and University Sains Malaysia for providing research funding, data, equipment, and support to publish this article.
S. R. Perumal suggested the concept, tested it, gathered data, and oversaw the article's writing. F. Baharum supervised the experiment and provided data analysis, debate, and assistance in cross-checking perspectives to develop this article.
This paper did not disclose any private, personal, and confidential data of anyone or organisations. The images are self-developed for educational purposes only by the authors or from the stated sources (if any).
L. L. A. Price, M. Khazova, and J. B. O'Hagan, Human responses to lighting based on LED lighting solutions, CIBSE-Public Health of United Kingdom Report, United Kingdom, 2016.
P. Boyce, Human Factors in Lighting, 2nd Edition, Taylor & Francis, London, 2003. https://doi.org/10.1201/9780203426340
M. Rossi, Circadian Lighting Design in the LED Era, Springer International Publishing, Italy, 2019. https://doi.org/10.1007/978-3-030-11087-1
IEEE Power Electronics Society Standards Committee, IEEE Standard 1789-2015, IEEE Recommended Practices for Modulating Current in High-Brightness LEDs for Mitigating Health Risks to Viewers Sponsored by the Standards Committee, United States, 2015. https://doi.org/10.1109/ieeestd.2015.7118618
I. Chew, V. Kalavally, N. W. Oo, and J. Parkkinen, Design of an energy-saving controller for an intelligent LED lighting system, Energy & Building 120 (2016) 1–9. https://doi.org/10.1016/j.enbuild.2016.03.041
J. Bullough, K. Sweater Hickcox, T. Klein, and N. Narendran, Effects of flicker characteristics from solid-state lighting on detection, acceptability and comfort, Lighting Research & Technology 43 (2011) 337-348. https://doi.org/10.1177/1477153511401983
N. M. Miller, Flicker in Solid-State Lighting: Measurement Techniques, and Proposed Reporting and Application Criteria (2013).
S. Kitsinelis, G. Zissis, and L. Arexis, A study on the flicker of commercial lamps, Light and Engineering 20-3 (2012) 58-64.
H. Salama and F. Bendary, Light flicker Performance of Low power LED Units, in: 25th International Conference on Electricity Distribution, 2019, p 960 Spain. https://doi.org/10.34890/428
I. Azcarate, J. J. Gutierrez, P. Saiz, A. Lazkano, L. A. Leturiondo, and K. Redondo, Flicker characteristics of efficient lighting assessed by the IEC flickermeter, Electric Power Systems Research 107 (2014) 21-27. https://doi.org/10.1016/j.epsr.2013.09.005
The Illuminating Engineering Society of North America (IESNA) Light Sources Committee, IESNA TM-16-05 Technical Memorandum on Light Emitting Diode (LED) Sources and Systems, United States, 2005.
P. Iacomussi, M. Radis, G. Rossi, and L. Rossi, Visual Comfort with LED Lighting, Energy Procedia 78 (2015) 729-734. https://doi.org/10.1016/j.egypro.2015.11.082
The California Energy Commission, Appendix JA8 – Qualification Requirements for High Efficacy Light Sources, Title 24, Part 6, Build. Energy Efficient Standard 17-BSTD-02, no. 223245–9, United States, 2018.
WELL Building Institute Pbc, WELL Building Standard L07 Part2, 2018. https://v2.wellcertified.com/v/en/light/feature/7.
N. J. Duijm, Recommendations on the use and design of risk matrices, Safety Science 76 (2015) 21-31. https://doi.org/10.1016/j.ssci.2015.02.014
K. G. Lough, R. Stone, and I. Y. Tumer, Function Based Risk Assessment: Mapping Function to Likelihood, in: 17th International Conference on Design Theory and Methodology, 2005. https://doi.org/10.1115/detc2005-85053
C. Liu, W. Chen, Y. Hou, and L. Ma, A new risk probability calculation method for urban ecological risk assessment, Environmental Research Letters 15 (2020) 24016. https://doi.org/10.1088/1748-9326/ab6667
C. Guanquan and W. Jinhui, Study on probability distribution of fire scenarios in risk assessment to emergency evacuation, Reliability Engineering & System Safety 99 (2012) 24-32. https://doi.org/10.1016/j.ress.2011.10.014
TAOS Incorporated, Datasheet - TCS34725 - Color Light-to-Digital Converter with IR Filter, United States, 2012.
H. Surana, N. Agarwal, A. Udaykumar, and R. Darekar, Blackbox-Based Night Vision Camouflage Robot for Defence Applications, Advances in Intelligent Systems and Computing 810 (2018) 631-637. https://doi.org/10.1007/978-981-13-1513-8_64
Z. Zou, Y. Wang, and M. Zhou, Design and testing of an apple grading control system, in: 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC), 2017, pp. 839–842. https://doi.org/10.1109/itoec.2017.8122471
T. DiCola and C. Nelson, TCS 34725 Driver Library, Software Module, United States, 2017, Available: https://github.com/adafruit/Adafruit_CircuitPython_TCS34725.
Dexter Industries, Python drivers for the TCS34725 light colour sensor, Software Module, 2017, available: https://github.com/DexterInd/DI_Sensors.
D. J. Norris, Beginning Artificial Intelligence with the Raspberry Pi. Apress, United States, 2017. https://doi.org/10.1007/978-1-4842-2743-5
RS-Components, Datasheet Raspberry Pi Model B, Raspberrypi.Org, United States, 2019.
A. Kurniawan, Smart Internet of Things Projects. Packt Publishing Limited, England, 2016, 9781786466518.
A. Peña-García and F. Salata, Indoor Lighting Customization Based on Effective Reflectance Coefficients: A Methodology to Optimise Visual Performance and Decrease Consumption in Educative Workplaces, Sustainability 13-1 (2020) 119. https://doi.org/10.3390/su13010119
P. Boyce and P. Raynham, The SLL Lighting Handbook, vol. 44, The Society of Light and Lighting, England, 2009.
A. E. F. Taylor, Illumination Fundamentals, Rensselaer Polytechnic Institute, United States, 2000.
P. Virtanen et al., SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17 (2020) 261–272. https://doi.org/10.1038/s41592-019-0686-2
C. R. Harris, K. J. Millman, S. J. van der Walt, et al., Array programming with NumPy, Nature 585 (2020) 357–362. https://doi.org/10.1038/s41586-020-2649-2
J. D. Hunter, Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering 9 (2007) 90–95. https://doi.org/10.1109/MCSE.2007.55
G. Yeutter, Beautiful Flicker, Software Module in GitHub repository, United States, 2019, Available: https://github.com/yeutterg/beautiful-flicker.
Copyright © 2021 The Author(s). Published by solarlits.com.
Double Integral Calculator
Example full solution steps:
$$\begin{align}& \hspace{2ex} \text{Solution Steps:} \hspace{55ex} \\ \\ & \hspace{2ex} \text{Solve the inputted double integral, given as:} \\ \\ & \hspace{5ex} \int \limits_{3}^{4} \int \limits_{1}^{2} \left(xy\right) \; dx\hspace{1pt} dy\\ \\ & \hspace{2ex} \text{To do this, we will:} \\ \\ & \hspace{5ex} \text{1) Solve just the inner portion of the integral, which is given as:} \\ \\ & \hspace{11ex} \int \limits_{1}^{2} \left(xy\right) \; dx\\ \\ & \hspace{5ex} \text{2) Insert the result of solving step 1 into the outer portion of the} \\ & \hspace{7ex} \text{integral (and solve this for the final answer), which is given as:} \\ \\ & \hspace{11ex} \int \limits_{3}^{4} \left( \text{Step 1 Result} \right) \; dy\\ \\ \\ & \hspace{2ex} \text{1) First, let's solve just the inner portion of the integral.}\\ \\ & \hspace{5ex} \text{1.1) Our full integral (with the inner portion shown inside the box)} \\ & \hspace{9ex} \text{is given as:} \\ \\ & \hspace{9ex} \int \limits_{3}^{4} \boxed{\int \limits_{1}^{2} \left(xy\right) \; dx}\hspace{1pt} dy\\ \\ & \hspace{9ex} \text{By removing the outer portion of the integral, we are left} \\ & \hspace{9ex} \text{with just the inner portion, which is given as:} \\ \\ & \hspace{12ex} \boxed{\int \limits_{1}^{2} \left(xy\right) \; dx}\\ \\ & \hspace{5ex} \text{1.2) Now we will take the antiderivative (also called an indefinite integral)} \\ & \hspace{9ex} \text{of this inner portion, but we will treat the variable }y\text{ as a constant } \\ & \hspace{9ex} \text{and take the antiderivative with respect to }x\text{. Doing so, we get:} \\ \\ & \hspace{9ex} \int_{}^{} \left(xy\right) \; dx\; = \; \boxed{\left(\frac{1}{2}\right) {x}^{2} y}\\ \\ & \hspace{5ex} \text{1.3) We can now finish evaluating this inner portion by plugging} \\ & \hspace{9ex} \text{in the limits of integration }x_{1} \text{ and }x_{2}\text{, and then simplifying} \\ & \hspace{9ex} \text{the resulting expression. Doing so, we get:} \\ \\ & \hspace{9ex} \left.\left(\frac{1}{2}\right) {x}^{2} y\: \right|_{1}^{2} \; = \;\left(\frac{1}{2}\right) {\left(2\right)}^{2} y - \left(\left(\frac{1}{2}\right) {\left(1\right)}^{2} y\right)\\ \\ & \hspace{12ex}\left(\frac{1}{2}\right) {\left(2\right)}^{2} y - \left(\left(\frac{1}{2}\right) {\left(1\right)}^{2} y\right)\; = \;2 y-\left(\frac{1}{2}\right) y\\ \\ & \hspace{15ex}2 y-\left(\frac{1}{2}\right) y\; = \; \boxed{\left(\frac{3}{2}\right) y}\\ \\ \\ & \hspace{2ex} \text{2) Now that we have solved the inner portion of the integral,} \\ & \hspace{4ex} \text{we can plug its result into the outer portion of the integral} \\ & \hspace{4ex} \text{and then solve that for the final answer.}\\ \\ & \hspace{5ex} \text{2.1) Our original, full integral (with the outer portion boxed)} \\ & \hspace{9ex} \text{is given as:} \\ \\ & \hspace{9ex} \boxed{\int \limits_{3}^{4}} \int \limits_{1}^{2} \left(xy\right) \; dx\hspace{1pt} \boxed{dy} \\ \\ & \hspace{9ex} \text{By removing the inner portion of the integral to isolate} \\ & \hspace{9ex} \text{the outer portion, and plugging in the evaluated} \\ & \hspace{9ex} \text{result of the inner portion, we end up with:} \\ \\ & \hspace{12ex} \boxed{\int \limits_{3}^{4} \left(\left(\frac{3}{2}\right) y\right) \; dy}\\ \\ & \hspace{5ex} \text{2.2) Now we will take the antiderivative (also called an indefinite} \\ & \hspace{9ex} \text{integral) of this outer portion. 
Doing so, we get:} \\ \\ & \hspace{9ex} \int_{}^{} \left(\left(\frac{3}{2}\right) y\right) \; dy\; = \; \boxed{\left(\frac{3}{4}\right) {y}^{2}}\\ \\ & \hspace{5ex} \text{2.3) We can now finish solving for the final answer by plugging} \\ & \hspace{9ex} \text{in the limits of integration }y_{1} \text{ and }y_{2}\text{, and then simplifying} \\ & \hspace{9ex} \text{the resulting expression. Doing so, we get:} \\ \\ & \hspace{9ex} \left.\left(\frac{3}{4}\right) {y}^{2}\: \right|_{3}^{4} \; = \;\left(\frac{3}{4}\right) {\left(4\right)}^{2} - \left(\left(\frac{3}{4}\right) {\left(3\right)}^{2}\right)\\ \\ & \hspace{12ex}\left(\frac{3}{4}\right) {\left(4\right)}^{2} - \left(\left(\frac{3}{4}\right) {\left(3\right)}^{2}\right)\; = \;12-\frac{27}{4}\\ \\ & \hspace{15ex}12-\frac{27}{4}\; = \; \boxed{\frac{21}{4} = 5.2500} \\ \\ & \hspace{9ex} \boxed{\boxed{ \int \limits_{3}^{4} \int \limits_{1}^{2} \left(xy\right) \; dx\hspace{1pt} dy\; = \;\frac{21}{4} = 5.2500}}\end{align}$$
Double Integral Lesson
What is a Double Integral?
A double integral is a multiple integral of a function of two variables. It is called a double integral because we must perform a definite integral two times (one for each of the two variables).
To further our understanding, let's compare single integrals and double integrals:
The single integral of a function of one variable such as y = f(x) solves for the area under the function's 2-dimensional curve.
A double integral of a function of two variables such as z = f(x, y) solves for the volume under the function's 3-dimensional surface.
The double integral of the surface z = f(x, y) is the volume that extends from the surface to the x-y plane. The rectangular cross-section of the volume (seen from the top) is defined by the limits of integration x1, x2, y1, and y2.
A double integral for a function f(x, y) may be notated as:
$$\begin{align} & \int \limits_{y_{1}}^{y_{2}} \int \limits_{x_{1}}^{x_{2}} f(x, y) \; dx\hspace{1pt} dy \end{align}$$
Where x1 is the lower x limit of integration, x2 is the upper x limit of integration, y1 is the lower y limit of integration, y2 is the upper y limit of integration, f(x, y) is a function of x and y, dx indicates integration of the variable x, and dy indicates integration of the variable y.
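As a quick numerical cross-check of this notation, the example integral shown at the top of this page can be evaluated in a few lines; this is a sketch in Python with SciPy (an assumption for illustration only, since the calculator itself is implemented in JavaScript, as described below):

```python
from scipy import integrate

# Volume under z = x*y over the rectangle 1 <= x <= 2, 3 <= y <= 4.
# SciPy's dblquad passes the inner variable as the first lambda argument:
# here the inner variable is x (limits 1..2) and the outer is y (limits 3..4).
volume, abserr = integrate.dblquad(lambda x, y: x * y, 3, 4, 1, 2)

area = (2 - 1) * (4 - 3)            # area of the rectangular region
print(volume)                        # 5.25, i.e. 21/4
print(volume / area)                 # average value of f over the region
```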
Why do we Learn How to do Double Integrals?
Double integrals have many, many applications in the world of engineering, science, and statistics. But, let's hone in on just one of these applications: we can use a double integral to optimize the efficiency of a production car's engine so it can use less fuel during operation.
Most modern cars use computers to run their engines. This computer, often called an ECU (engine control unit), controls the airflow into the engine, injection of fuel into the engine, and the ignition of the air-fuel mixture inside the combustion chamber.
The exact timing of igniting the air-fuel mixture is extremely critical to combustion efficiency. If ignited too early in an engine cycle, the air-fuel mixture won't be compressed enough to fully burn. If ignited too late, the air-fuel mixture won't have enough time to fully burn.
We can run tests on an engine and collect data for combustion efficiency being a function of engine RPM (revolutions per minute) and ignition timing. If we plot the data, we will see a two-variable function's surface plot like the one below:
A surface plot of an engine's combustion efficiency function.
Since we have modeled the engine's efficiency as a function of two variables (engine RPM and ignition timing), we can easily calculate a double integral over a rectangular region on the engine RPM – ignition timing plane.
By calculating this double integral, we find the total volume between the surface and the engine RPM – ignition timing plane bound by the rectangular region we chose.
We can divide this volume by the area of our rectangular region. By doing so, we will end up with the average value of the combustion efficiency function over that rectangular region.
By knowing the average combustion efficiency of the engine over various engine operation parameters via our double integral, we can program the ECU to maximize power while minimizing fuel consumption!
How the Calculator Works
The Double Integral Calculator is coded in HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and JS (JavaScript).
The HTML constructs the calculator's architecture. The calculator's frame, buttons, text, and other entities are all defined by the HTML code.
The CSS provides the graphical element of the calculator. The specific colors, shapes, and animations of the calculator's components are created and defined with the CSS code.
JS is what makes the calculator tick. When we click any of the buttons, the JS code handles the response to that click. Also, the mathematic procedure and solution steps creation are all performed by the JS code.
Applied Water Science
September 2015, Volume 5, Issue 3, pp 221–227
Ultrasonic removal of pyridine from wastewater: optimization of the operating conditions
M. A. Elsayed
In this study, fundamental research was carried out to explore the removal of pyridine from wastewater by ultrasound radiation. The effects of initial pyridine concentration, radiation time, pH, aeration, and reaction temperature on the pyridine removal efficiency were investigated. The removal rate of pyridine at 180 min sonication time was found to decrease from 53 to 15 % as the initial concentration increased from 10 to 100 mg/L. However, the total amount of pyridine degraded after 60 min at 100 mg/L was as much as three times larger than that degraded at 10 mg/L. The optimal pH was found to be 9, which resulted in 25 % pyridine removal after 180 min of ultrasound radiation. Following the pH of the sonicated pyridine mixture over 60 min of sonication showed a decrease from 9.2 to 6.2 during irradiation, which may be attributed to the formation of peroxy radicals in the solution and the subsequent formation of oxygen free radicals. Simultaneous aeration improved the pyridine removal efficiency of ultrasound irradiation by 24 %. The removal efficiency of pyridine also increased with increasing media temperature over the temperature range of this study. In conclusion, ultrasound radiation is an effective method for the removal of pyridine from wastewater.
Keywords: Pyridine · Ultrasound radiation · Wastewater treatment · pH · Degradation
The ever-increasing use of chemicals in industry and households has resulted in the growing release of organic pollutants into effluents. These pollutants are potential health hazards. Enormous quantities of aromatic compounds are released into the environment by various industries (Jain et al. 2004), because aromatic compounds rank among the chemicals most widely used in industry (Gogate and Pandit 2004). Among these, aromatic heterocyclic compounds such as pyridine and its derivatives are of major concern as environmental pollutants due to their recalcitrant, toxic, and teratogenic nature (Stapleton et al. 2006; Karthikeyan et al. 2012).
Pyridine was originally produced from coal tar and as a by-product of coal gasification. However, increased demand led to the development of more economical synthesis routes from acetaldehyde and ammonia, and more than 20,000 tons per year are now manufactured worldwide. Researchers have therefore long sought effective, economically feasible techniques for removing wastes such as pyridine from the environment (Gupta et al. 2007a, b; Mittal et al. 2008; Saleh and Gupta 2012).
Various physicochemical methods for wastewater treatment have been investigated (Goel et al. 2004; Mittal et al. 2010; Gupta et al. 2012). These include adsorption (Gupta et al. 2006, 2009, 2011), electrochemical treatment (Gupta et al. 2007c), sorption using waste materials (Gupta and Sharma 2003; Gupta et al. 2006, 2010; Mittal et al. 2010), biodegradation (Padoley et al. 2006; Mathur et al. 2008), and ozonation combined with biodegradation (Ince and Tezcanlí 2001; Agustina et al. 2005). An extensive search, however, revealed a lack of literature reports on the removal of pyridine from wastewater by ultrasound radiation.
Nowadays, ultrasonic irradiation has received considerable interest as an advanced oxidation process because it leads to rapid degradation of chemical contaminants in water (Abbasi and Asl 2008; Yang et al. 2009). Ultrasound can enhance or promote chemical reactions and mass transfer and offer the potential for shorter reaction cycles, cheaper reagents, and less extreme physical conditions (Gong and Hart 1998; Goel et al. 2004). So far, ultrasound has been applied in studies of cleaning, organic synthesis, catalysis, extraction, emulsification, material processing, food processing, and wastewater treatment (Vinodgopal et al. 1998; Naffrechoux et al. 2000; Saleh and Gupta 2012).
In this study, an investigation was focused on the ultrasonic degradation of pyridine in aqueous media. A variety of different operating conditions were examined. The effects of different initial concentration, pH, aeration, and reaction temperatures on the degradation efficiency were investigated.
Pyridine standard solution was supplied by Fluka with a purity better than 98.0 % and was used to prepare synthetic wastewater. Aqueous solutions were made using deionized water prepared by an Elga B114 Deionizer with C114 cartridges (EC 5 μS cm−1 and TDS 3.5 ppm). All other reagents were of reagent grade, obtained from Fluka, and used as received.
Ultrasonic reactor setup
The degradation experiments were carried out in an ultrasonic cleaner bath (Honda Electronics PS-60, capacity 15 L). The bath operates at 360 W and 40 kHz. An Erlenmeyer flask was used as the reaction vessel, with a solution volume of 100 mL. The bath temperature was maintained by proper recirculation of water, and the solution temperature was also monitored regularly. The efficiency of a reaction vessel placed in an ultrasonic bath depends strongly on the distance between the bottom of the reaction vessel and the bottom of the water bath. This distance was carefully determined through preliminary experiments so that the ultrasonic intensity reached its maximum at the bottom of the flask; for an ultrasonic frequency of 40 kHz, this distance was 1 cm. The reactor was sealed with a silicone stopper wrapped in aluminium foil to minimize evaporative loss of the volatile compounds, and a syringe needle was pierced through the septum of the stopper for sampling. In the present study, the ultrasonic device provides indirect sonication, which inevitably causes energy loss: only a limited quantity of energy is transmitted into the reaction vessel. It should therefore be kept in mind that the stated power is not the real ultrasonic power transmitted into the reaction mixture. All sonochemical experiments were conducted twice in parallel; the averages of the parallel experimental data were used in the analyses of the sonochemical kinetics, and the error between parallel experiments was under 5 %.
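For orientation only, the nominal volumetric power of the bath follows directly from the stated specifications; as noted above, the acoustic power actually delivered to the 100 mL sample is lower than this nominal figure:

$$\frac{360\ \text{W}}{15\ \text{L}} = 24\ \text{W L}^{-1}$$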
Quantitative analysis experiments
Pyridine concentration was determined quantitatively by measuring its absorbance with a Shimadzu UV–Visible spectrophotometer. Initially, scans from 200 to 500 nm were carried out to locate the absorption maximum of the pyridine molecule.
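The paper does not spell out the calibration, but absorbance-based quantification of this kind conventionally rests on the Beer–Lambert law, shown here only as the assumed working relation:

$$A = \varepsilon\,\ell\,C \quad \Rightarrow \quad C = \frac{A}{\varepsilon\,\ell}$$

where A is the absorbance at the analytical wavelength, ε the molar absorptivity of pyridine, ℓ the optical path length, and C the concentration; a calibration curve from standards of known concentration fixes the product εℓ.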
Study and optimization of the operating conditions
Effect of initial pyridine concentration
The effect of solute concentration on the degradation of pyridine was investigated at pH 6.7, 40 kHz, 20 ± 1 °C, 360 W, and initial concentrations of 10, 20, 60, 80, and 100 mg/L. The data in Fig. 1 show that the degradation of pyridine depends on sonication time. The removal rates of pyridine after 180 min of sonication were found to decrease from 53 to 15 % as the initial concentration increased from 10 to 100 mg/L, suggesting that increasing the initial concentration of the solution decreases the removal rate. This is because increasing the initial concentration of volatile solutes weakens the cavitation reactions (Stapleton et al. 2007). However, the total amount of pyridine degraded after 60 min at 100 mg/L was as much as three times larger than that degraded at 10 mg/L.
Fig. 1 Pyridine removal efficiencies versus time by ultrasound at various initial concentrations
The degradation rates can be expressed by the following equation:
$$\ln\frac{C_t}{C_0} = -kt \tag{1}$$
where C0 and Ct are the initial and remaining concentrations of pyridine, respectively, k is the degradation rate constant, and t is the sonication time.
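As an illustrative back-calculation from the removal figures reported above (53 % removal of the 10 mg/L solution after 180 min of sonication), Eq. (1) gives an apparent rate constant of roughly:

$$k = -\frac{1}{t}\,\ln\frac{C_t}{C_0} = -\frac{\ln(0.47)}{180\ \text{min}} \approx 4.2 \times 10^{-3}\ \text{min}^{-1}$$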
Typical ln Ct/C0 vs. t plots (Fig. 2) show that the degradation kinetics followed a first-order rate law (R2 > 0.93) at all initial concentrations. In addition, as seen in Fig. 3, the apparent first-order rate constants decreased with increasing initial pyridine concentration, indicating the non-elementary nature of the sonochemical reactions. This dependence of the rate constants on initial concentration compares well with the existing literature (Zechmeister and Magoon 1956; Stapleton et al. 2006, 2007).
Fig. 2 First-order kinetics of pyridine degradation (40 kHz, 20 ± 1 °C and 360 W)
Fig. 3 Apparent first-order rate constants vs. initial concentration of pyridine (40 kHz, 20 ± 1 °C and 360 W)
Sonochemical degradation is based on physicochemical processes that produce powerful free-radical species in situ, principally hydroxyl radicals (HO·), using chemical and/or other forms of energy, and these species oxidize organic matter with high efficiency. The hydroxyl radical is a highly powerful oxidizing agent with an oxidation potential of 2.33 V, and it undergoes rapid, non-selective reactions with most organic and many inorganic pollutants. Hydroxyl radicals exhibit faster oxidation rates than conventional oxidants such as H2O2 or KMnO4. Once generated, hydroxyl radicals can attack organic chemicals by radical addition. The reaction conditions vary as the concentration of monocyclic aromatic compounds in aqueous solution changes. In our case, the products formed by the degradation of pyridine should affect the reaction rate through their influence on the cavitation temperature; however, this is difficult to evaluate and establish (De Visscher et al. 1996).
It seems that the major route for degradation of pyridine during ultrasonic irradiation alone without any additives is by pyrolytic reactions in the gas phase, and thus it shows a greater dependence on initial concentration. Ultrasonication not only promotes oxidative degradation of pyridine by hydroxyl radicals, but also provides a possible route for thermal decomposition in the gas phase (Naffrechoux et al. 2000).
Effect of initial pyridine solution pH
In this part of the study, sonication experiments were repeated with 100 ppm pyridine solutions to study the effect of pH on the degradation reaction. The initial pH of the solution was adjusted by adding 1–3 drops of NaOH (0.1 M) or HCl (0.1 M). Figure 4 illustrates the removal of pyridine at different initial pH values. It is seen that pH greatly affects the removal efficiency of pyridine. The optimal pH was found to be 9, which resulted in 25 % pyridine removal after 180 min of ultrasound (US) radiation. No significant increase in pyridine removal was observed when the solution pH was raised above 9. However, in acidic medium (pH < 6.5) the removal efficiency decreased considerably. This can be attributed to the fact that pyridine is a heterocyclic nitrogenous compound: during its degradation, the N atom in the pyridine ring is released upon mineralization as ammonia, easily observed by its unpleasant odor. Because pyridine contains an N atom, which is more electronegative than an sp2-hybridized C atom, it is suggested that at higher acidity the formation of pyridinium salt predominates, and this salt is more stable in solution. In addition, in alkaline medium the anionic state of the compound favors ultrasonic absorption and the production of more hydroxyl radicals from hydroxide ions (OH− → OH·), which enhances the degradation efficiency.
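The pyridinium argument can be made concrete. Pyridine's conjugate acid has a pKa of about 5.2 (a standard reference value, not reported in this study), so below roughly pH 5 the protonated, non-volatile form dominates and is largely excluded from the cavitation bubbles:

$$\mathrm{C_5H_5N} + \mathrm{H^+} \rightleftharpoons \mathrm{C_5H_5NH^+}, \qquad \mathrm{p}K_a \approx 5.2$$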
Fig. 4 Effect of pH on the removal efficiency of pyridine by ultrasound (40 kHz, 20 ± 1 °C and 360 W)
On the other hand, by following the pH over the first 60 min of sonication, it was observed that the pH of the sonicated pyridine mixture decreased from 9.2 to 6.2 during irradiation (Fig. 5). Initially, at zero time, the pH of the aqueous mixture was 9.2, which dropped to 6.2 within the first 25 min of sonication. This decrease in pH may be attributed to the formation of peroxy radicals in the solution, with the subsequent formation of oxygen free radicals and H+ ions (Sistla 2005). Several researchers have reported the release of H+ ions after 10 min of irradiation in water; after 15 min of sonication, pH values approaching 7.2 have been obtained (Stapleton et al. 2007; Sistla and Chintalapati 2008).
Fig. 5 Variation of pyridine solution pH with sonication time (40 kHz, 20 ± 1 °C and 360 W)
The effect of pH can be interpreted as follows: pH affects the distribution of the existing states of pyridine and of all kinds of organic compounds in the wastewater. It is well known that pH affects the physicochemical properties of substances in aqueous solution, and thus it is expected that pH affects their ultrasonic decomposition rates (Xu et al. 2005). This dependence of degradation on pH compares well with the existing literature. Kotronarou et al. (1991) and Jain et al. (2004) investigated the influence of changes in the initial pH of p-nitrophenol (PNP) solutions on the decay of PNP. PNP decayed exponentially with time at all pH values. The pseudo-first-order rate constant decreased with increasing pH (pH 5 to 8) and remained constant up to pH 10. At pH > 10, the pseudo-first-order rate constant increased slightly because of the slow thermal reaction between PNP and OH radicals/H2O2.
Furthermore, Drijvers et al. (1996) found that the degradation of trichloroethylene is fastest in basic solutions. However, no influence of the pH value of the aqueous solutions on the sonolysis of chlorobenzene was found (Drijvers et al. 1998).
In conclusion, in acidic medium the formation of salts reduces the vapor pressure of the reactants to such an extent that they are unable to enter the cavitation bubbles and are, hence, unaffected by the ultrasonic waves (Currell et al. 1963).
Effect of ultrasound (US) radiation with and without aeration
The cavitational effect of ultrasound causes the degassing of liquids. Therefore, many researchers deliberately bubble gas through a sonochemical reaction to facilitate uniform cavitation (Kotronarou et al. 1991). To determine the effect of dissolved gas, experiments were carried out in air-saturated solutions: the test gas was introduced into the reactor during sonication at 1.5 L min−1. Figure 6 illustrates the removal of pyridine with and without aeration at a pyridine concentration of 100 ppm and pH 9.1. The removal was enhanced by aeration to some extent. Aeration brought many air bubbles into the solution, producing turbulence and agitation; the mass transfer in the solution was thereby enhanced, which favored the volatilization of molecular decomposition products. Simultaneous aeration improved the pyridine removal efficiency of ultrasound irradiation by 24 %. Although the oxidation of organic pollutants by air alone is minimal, aeration agitates the solution strongly and may even break the continuity of its surface, resulting in more cavitation nuclei, much more efficient mass transfer, and hence more degradation of organic pollutants. When ultrasound irradiation is combined with aeration, the aeration increases the concentration of cavitation bubbles in solution, enhancing the effective utilization of ultrasound energy and accelerating cavitation and pyrolysis. At the same time, the cavitation bubbles are broken into 'mini-bubbles', whose total surface area is 103–104 times higher than that of the original cavitation bubbles (Xu et al. 2005). The interfacial area between air and water is therefore increased, and pyridine removal increases.
Fig. 6 Effect of dissolved gas on the degradation of 100 ppm pyridine solution (40 kHz, 20 ± 1 °C, pH 9.1 and 360 W)
On the other hand, if the solution is saturated with oxygen, additional reactions occur as a consequence of the combination of molecular oxygen with hydrogen atoms and the thermal decomposition of oxygen in the gas phase, according to Eqs. (2)–(8) (Pang et al. 2011). In these reactions, ")))" denotes US irradiation. These reactions produce a higher hydroxyl radical concentration, which causes more degradation of pyridine.
$$\mathrm{H_2O} + ))) \rightarrow \cdot\mathrm{OH} + \cdot\mathrm{H} \tag{2}$$
$$\mathrm{O_2\,(dissolved)} + ))) \rightarrow 2\,\cdot\mathrm{O} \tag{3}$$
$$\mathrm{O_2} + \cdot\mathrm{H} \rightarrow \cdot\mathrm{O_2H} \tag{4}$$
$$\mathrm{O} + \cdot\mathrm{O_2H} \rightarrow \cdot\mathrm{OH} + \mathrm{O_2} \tag{5}$$
$$\mathrm{O_2} + \mathrm{O} \rightarrow \mathrm{O_3} \tag{6}$$
$$\mathrm{O} + \mathrm{H_2O} \rightarrow 2\,\cdot\mathrm{OH} \tag{7}$$
$$\cdot\mathrm{O_2H} + \cdot\mathrm{O_2H} \rightarrow \mathrm{H_2O_2} + \mathrm{O_2} \tag{8}$$
Furthermore, many investigators have stated that dissolved gases are essential for the sonochemical reaction. Griffing (2004) noted that the rate of hydrolysis of CCl4 is strongly dependent on the dissolved gas. Weissler et al. (1950) confirmed the effect of dissolved gases (O2, N2, He, CO2, vacuum) on iodine yields during the ultrasonic irradiation of KI solutions, both in the absence of CCl4 and with a large excess of CCl4.
Effect of sonochemical reaction temperature
The effect of media temperature on pyridine degradation was investigated at an initial pyridine concentration of 100 ppm, pH 9.1, an air flow of 1.5 L min−1, and a power of 360 W. Three different temperatures were used to investigate the influence of operating temperature on the cavitation reactions. The media temperature gradually increases once ultrasound starts to operate, owing to the thermal energy from the irradiation; a water circulating system was therefore needed to keep the operating temperature stable. The actual temperatures measured inside the reaction container were 20, 30, and 40 °C, held within ±2 °C. The results are illustrated in Fig. 7: within the temperature range of this study, the removal efficiency of pyridine increased with increasing media temperature. Generally speaking, chemical reaction rates increase as temperature rises, and the liquid temperature and applied pressure dramatically affect sonochemical reaction rates. The bulk temperature and static pressure first affect the vapor pressure, gas solubility, cavity contents, and thermal activation, thereby influencing the intensity of collapse and the secondary reaction rates (Naffrechoux et al. 2000).
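The bulk-kinetics contribution alluded to here is conventionally summarized by the Arrhenius relation; the study reports no activation energy for pyridine sonolysis, so the expression is given only to frame the discussion:

$$k(T) = A\,\exp\!\left(-\frac{E_a}{RT}\right)$$

where A is the pre-exponential factor, Ea the activation energy, R the gas constant, and T the absolute temperature. As the following paragraph explains, in sonochemistry this simple increase of k with T competes with the vapor-pressure effect on bubble collapse.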
Fig. 7 Effect of media temperature on the degradation of pyridine (pyridine concentration 100 ppm, air flow 1.5 L min−1, pH 9.1, 40 kHz and 360 W)
On the other hand, the effective maximum temperature generated during cavitational collapse is inversely proportional to the vapor pressure. A bubble contains not only the gas dissolved in the liquid but also vapor from the liquid itself; the amount of vapor in the bubble depends on the vapor pressure of the liquid, which in turn is strongly dependent on the bulk temperature. Taken together, the effect of temperature on the sonochemical degradation rate is complicated, and there is no consistent report in the literature on the impact of temperature on the degradation of organic compounds. Bhatnagar and Cheung (1994) reported that the degradation of trichloroethylene and carbon tetrachloride remained constant between −7 to 20 °C and 20–60 °C, respectively. In contrast, Destaillats et al. (2000) indicated that the sonochemical degradation of chlorobenzene and TCE increased with increasing temperature. In this study, slightly higher degradation was achieved at higher temperatures.
Conclusions
Ultrasonic irradiation has potential for environmental decontamination owing to the production of high concentrations of oxidizing species, such as ·OH and H2O2, in solution, together with localized transient high temperatures and pressures. It does not require chemical additives to achieve viable degradation rates, yet by carefully adjusting the operating conditions the degradation efficiency can be significantly increased. In this study, the sonochemical degradation of pyridine was examined under different process variables: initial concentration, pH, aeration, and media temperature. The reaction rate was found to be a function of the initial pyridine concentration, decreasing as the initial concentration increased. The degradation efficiency of pyridine increased with increasing reaction temperature, initial pH, and aeration. The research has shown that it is technically feasible to decompose pyridine by sonolysis. The advantage of ultrasonic degradation lies in the amount of energy stored in the microbubbles; properly utilized, it can be a truly useful technology for large-scale water treatment. The process is easy to operate, and there are practically no hazards associated with it.
References
Abbasi M, Asl NR (2008) Sonochemical degradation of Basic Blue 41 dye assisted by nanoTiO2 and H2O2. J Hazard Mater 153(3):942–947. doi: 10.1016/j.jhazmat.2007.09.045
Agustina TE, Ang HM, Vareek VK (2005) A review of synergistic effect of photocatalysis and ozonation on wastewater treatment. J Photochem Photobiol C Photochem Rev 6(4):264–273. doi: 10.1016/j.jphotochemrev.2005.12.003
Bhatnagar A, Cheung HM (1994) Sonochemical destruction of chlorinated C1 and C2 volatile organic compounds in dilute aqueous solution. Environ Sci Technol 28(8):1481–1486
Currell DL, Wilheim G, Nagy S (1963) The effect of certain variables on the ultrasonic cleavage of phenol and of pyridine. J Am Chem Soc 85(2):127–130. doi: 10.1021/ja00885a002
De Visscher A, Van Eenoo P, Drijvers D, Van Langenhove H (1996) Kinetic model for the sonochemical degradation of monocyclic aromatic compounds in aqueous solution. J Phys Chem 100(28):11636–11642. doi: 10.1021/jp953688o
Destaillats H, Colussi A, Joseph JM, Hoffmann MR (2000) Synergistic effects of sonolysis combined with ozonolysis for the oxidation of azobenzene and methyl orange. J Phys Chem A 104(39):8930–8935
Drijvers D, De Baets R, De Visscher A, Van Langenhove H (1996) Sonolysis of trichloroethylene in aqueous solution: volatile organic intermediates. Ultrason Sonochem 3(2):S83–S90
Drijvers D, Van Langenhove H, Vervaet K (1998) Sonolysis of chlorobenzene in aqueous solution: organic intermediates. Ultrason Sonochem 5(1):13–19
Goel M, Hongqiang H, Mujumdar AS, Ray MB (2004) Sonochemical decomposition of volatile and non-volatile organic compounds: a comparative study. Water Res 38(19):4247–4261. doi: 10.1016/j.watres.2004.08.008
Gogate PR, Pandit AB (2004) A review of imperative technologies for wastewater treatment I: oxidation technologies at ambient conditions. Adv Environ Res 8(3–4):501–551. doi: 10.1016/S1093-0191(03)00032-7
Gong C, Hart DP (1998) Ultrasound induced cavitation and sonochemical yields. J Acoust Soc Am 104(5):2675–2682. doi: 10.1121/1.423851
Griffing V (2004) The chemical effects of ultrasonics. J Chem Phys 20(6):939–942
Gupta VK, Sharma S (2003) Removal of zinc from aqueous solutions using bagasse fly ash-a low cost adsorbent. Ind Eng Chem Res 42(25):6619–6624
Gupta VK, Mittal A, Kurup L, Mittal J (2006) Adsorption of a hazardous dye, erythrosine, over hen feathers. J Colloid Interface Sci 304(1):52–57
Gupta V, Jain R, Mittal A, Mathur M, Sikarwar S (2007a) Photochemical degradation of the hazardous dye Safranin-T using TiO2 catalyst. J Colloid Interface Sci 309(2):464–469
Gupta VK, Ali I, Saini VK (2007b) Defluoridation of wastewaters using waste carbon slurry. Water Res 41(15):3307–3316
Gupta VK, Jain R, Varshney S (2007c) Electrochemical removal of the hazardous dye Reactofix Red 3 BFN from industrial effluents. J Colloid Interface Sci 312(2):292–296
Gupta VK, Mittal A, Malviya A, Mittal J (2009) Adsorption of carmoisine A from wastewater using waste materials—bottom ash and deoiled soya. J Colloid Interface Sci 335(1):24–33
Gupta VK, Rastogi A, Nayak A (2010) Adsorption studies on the removal of hexavalent chromium from aqueous solution using a low cost fertilizer industry waste material. J Colloid Interface Sci 342(1):135–141
Gupta V, Gupta B, Rastogi A, Agarwal S, Nayak A (2011) A comparative investigation on adsorption performances of mesoporous activated carbon prepared from waste rubber tire and activated carbon for a hazardous azo dye—Acid Blue 113. J Hazard Mater 186(1):891–901
Gupta VK, Mittal A, Jhare D, Mittal J (2012) Batch and bulk removal of hazardous colouring agent Rose Bengal by adsorption techniques using bottom ash as adsorbent. RSC Adv 2(22):8381–8389
Ince NH, Tezcanlí G (2001) Reactive dyestuff degradation by combined sonolysis and ozonation. Dyes Pigments 49(3):145–153. doi: 10.1016/S0143-7208(01)00019-5
Jain AK, Gupta VK, Jain S, Suhas (2004) Removal of chlorophenols using industrial wastes. Environ Sci Technol 38(4):1195–1200
Karthikeyan S, Gupta V, Boopathy R, Titus A, Sekaran G (2012) A new approach for the degradation of high concentration of aromatic amine by heterocatalytic Fenton oxidation: kinetic and spectroscopic studies. J Mol Liq 173:153–163
Kotronarou A, Mills G, Hoffmann MR (1991) Ultrasonic irradiation of p-nitrophenol in aqueous solution. J Phys Chem 95(9):3630–3638
Mathur AK, Majumder CB, Chatterjee S, Roy P (2008) Biodegradation of pyridine by the new bacterial isolates S. putrefaciens and B. sphaericus. J Hazard Mater 157(2–3):335–343. doi: 10.1016/j.jhazmat.2007.12.112
Mittal A, Gupta V, Malviya A, Mittal J (2008) Process development for the batch and bulk removal and recovery of a hazardous, water-soluble azo dye (Metanil Yellow) by adsorption over waste materials (Bottom Ash and De-Oiled Soya). J Hazard Mater 151(2):821–832
Mittal A, Mittal J, Malviya A, Kaur D, Gupta V (2010) Adsorption of hazardous dye crystal violet from wastewater by waste materials. J Colloid Interface Sci 343(2):463–473
Naffrechoux E, Chanoux S, Petrier C, Suptil J (2000) Sonochemical and photochemical oxidation of organic matter. Ultrason Sonochem 7(4):255–259. doi: 10.1016/S1350-4177(00)00054-7
Padoley KV, Rajvaidya AS, Subbarao TV, Pandey RA (2006) Biodegradation of pyridine in a completely mixed activated sludge process. Bioresour Technol 97(10):1225–1236. doi: 10.1016/j.biortech.2005.05.020
Pang YL, Abdullah AZ, Bhatia S (2011) Review on sonochemical methods in the presence of catalysts and chemical additives for treatment of organic pollutants in wastewater. Desalination 277(1–3):1–14. doi: 10.1016/j.desal.2011.04.049
Saleh TA, Gupta VK (2012) Column with CNT/magnesium oxide composite for lead (II) removal from water. Environ Sci Pollut Res 19(4):1224–1228
Sistla S (2005) Degradation of Pyridine by ultrasound: a common refractory pollutant in wastewater effluents. Asian J Water Environ Pollut 2(2):89–93
Sistla S, Chintalapati S (2008) Sonochemical degradation of Congo red. Int J Environ Waste Manage 2(3):309–319
Stapleton DR, Emery RJ, Mantzavinos D, Papadaki M (2006) Photolytic destruction of halogenated pyridines in wastewaters. Process Saf Environ Protect 84(4):313–316. doi: 10.1205/psep.05164
Stapleton DR, Mantzavinos D, Papadaki M (2007) Photolytic (UVC) and photocatalytic (UVC/TiO2) decomposition of pyridines. J Hazard Mater 146(3):640–645. doi: 10.1016/j.jhazmat.2007.04.067
Vinodgopal K, Peller J, Makogon O, Kamat PV (1998) Ultrasonic mineralization of a reactive textile azo dye, remazol black B. Water Res 32(12):3646–3650. doi: 10.1016/S0043-1354(98)00154-7
Weissler A, Cooper HW, Snyder S (1950) Chemical effect of ultrasonic waves: oxidation of potassium iodide solution by carbon tetrachloride. J Am Chem Soc 72(4):1769–1775
Xu J, Jia J, Wang J (2005) Ultrasonic decomposition of ammonia–nitrogen and organic compounds in coke plant wastewater. J Chin Chem Soc 52(1):59–65
Yang S, Wang P, Yang X, Wei G, Zhang W, Shan L (2009) A novel advanced oxidation process to degrade organic pollutants in wastewater: Microwave-activated persulfate oxidation. J Environ Sci 21(9):1175–1180. doi: 10.1016/S1001-0742(08)62399-2
Zechmeister L, Magoon EF (1956) On the ultrasonic cleavage of the pyridine ring. J Am Chem Soc 78(10):2149–2150. doi: 10.1021/ja01591a031
This article is published under license to BioMed Central Ltd. Open Access: this article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
1. Egyptian Armed Forces, Cairo, Egypt
Elsayed, M.A. Appl Water Sci (2015) 5: 221. https://doi.org/10.1007/s13201-014-0182-x
Publisher: Springer Berlin Heidelberg