"Cavin has done an amazing job in all aspects of his life. Overcoming the horrific life threatening accident, and then going on to do whatever he can to help others with his contagious wonderful attitude. This book is an easy to understand fact filled manual for anyone, but especially those who are or are caregivers for a loved one with tbi. I also highly recommend his podcast series."
"In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them!
At small effects like d=0.07, a nontrivial chance of negative effects, and an unknown level of placebo effects (this was non-blinded, which could account for any residual effects), this strongly implies that LLLT is not doing anything for me worth bothering with. I was pretty skeptical of LLLT in the first place, and if 167 days can't turn up anything noticeable, I don't think I'll be continuing with LLLT usage and will be giving away my LED set. (Should any experimental studies of LLLT for cognitive enhancement in healthy people surface with large quantitative effects - as opposed to a handful of qualitative case studies about brain-damaged people - and I decide to give LLLT another try, I can always just buy another set of LEDs: it's only ~$15, after all.)
Some work has been done on estimating the value of IQ, both as net benefits to the possessor (including all zero-sum or negative-sum aspects) and as net positive externalities to the rest of society. The estimates are substantial: in the thousands of dollars per IQ point. But since increasing IQ post-childhood is almost impossible barring disease or similar deficits, and even increasing childhood IQs is very challenging, many of these estimates are merely correlations or regressions, and the experimental childhood estimates must be weakened considerably for any adult - since so much time and so many opportunities have been lost. A wild guess: $1000 net present value per IQ point. The range for severely deficient children was 10-15 points, so any normal (somewhat deficient) adult gain must be much smaller and consistent with Fitzgerald 2012's ceiling on possible effect sizes (small).
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function, Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics"
The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer.
As for newer nootropic drugs, there are unknown risks. "Piracetam has been studied for decades," says cognitive neuroscientist Andrew Hill, the founder of a neurofeedback company in Los Angeles called Peak Brain Institute. But "some of [the newer] compounds are things that some random editor found in a scientific article, copied the formula down and sent it to China and had a bulk powder developed three months later that they're selling. Please don't take it, people!"
Similar to the way in which some athletes used anabolic steroids (muscle-building hormones) to artificially enhance their physique, some students turned to smart drugs, particularly Ritalin and Adderall, to heighten their intellectual abilities. A 2005 study reported that, at some universities in the United States, as many as 7 percent of respondents had used smart drugs at least once in their lifetime and 2.1 percent had used smart drugs in the past month. Modafinil was used increasingly by persons who sought to recover quickly from jet lag and who were under heavy work demands. Military personnel were given the same drug when sent on missions with extended flight times.
Regarding other methods of cognitive enhancement, little systematic research has been done on their prevalence among healthy people for the purpose of cognitive enhancement. One exploratory survey found evidence of modafinil use by people seeking cognitive enhancement (Maher, 2008), and anecdotal reports of this can be found online (e.g., Arrington, 2008; Madrigal, 2008). Whereas TMS requires expensive equipment, tDCS can be implemented with inexpensive and widely available materials, and online chatter indicates that some are experimenting with this method.
Disclaimer: None of the statements made on this website have been reviewed by the Food and Drug Administration. The products and supplements mentioned on this site are not intended to diagnose, treat, cure, alleviate or prevent any diseases. All articles on this website are the opinions of their respective authors, who do not claim or profess to be medical professionals providing medical advice. This website is strictly for the purpose of providing opinions of the author. You should consult with your doctor or another qualified health care professional before you start taking any dietary supplements or engage in mental health programs. Any and all trademarks, logos, brand names and service marks displayed on this website are the registered or unregistered trademarks of their respective owners.
Lebowitz says that if you're purchasing supplements to improve your brain power, you're probably wasting your money. "There is nothing you can buy at your local health food store that will improve your thinking skills," Lebowitz says. So that turmeric latte you've been drinking every day has no additional brain benefits compared to a regular cup of java.
Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription anti-depression drugs are based on serotonin reuptake inhibitors that slow the absorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain.
Even if you eat foods that contain these nutrients, Hogan says their beneficial effects are in many ways cumulative—meaning the brain perks don't emerge unless you've been eating them for long periods of time. Swallowing more of these brain-enhancing compounds at or after middle-age "may be beyond the critical period" when they're able to confer cognitive enhancements, he says.
He used to get his edge from Adderall, but after moving from New Jersey to San Francisco, he says, he couldn't find a doctor who would write him a prescription. Driven to the Internet, he discovered a world of cognition-enhancing drugs known as nootropics — some prescription, some over-the-counter, others available on a worldwide gray market of private sellers — said to improve memory, attention, creativity and motivation.
Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007).
The Stroop task tests the ability to inhibit the overlearned process of reading by presenting color names in colored ink and instructing subjects to either read the word (low need for cognitive control because this is the habitual response to printed words) or name the ink color (high need for cognitive control). Barch and Carter (2005) administered this task to normal control subjects on placebo and d-AMP and found speeding of responses with the drug. However, the speeding was roughly equivalent for the conditions with low and high cognitive control demands, suggesting that the observed facilitation may not have been specific to cognitive control.
CDP-Choline is also known as Citicoline or Cytidine Diphosphocholine. It has been enhanced to allow improved crossing of the blood-brain barrier. Your body converts it to Choline and Cytidine. The latter is then converted to Uridine (which crosses the blood-brain barrier). CDP-Choline is found in meats (liver), eggs (yolk), fish, and vegetables (broccoli, Brussels sprouts).
It looks like the overall picture is that nicotine is absorbed well in the intestines and the colon, but not so well in the stomach; this might be the explanation for the lack of effect, except on the other hand, the specific estimates I see are that 10-20% of the nicotine will be bioavailable in the stomach (as compared to 50%+ for mouth or lungs)… so any of my doses of >5ml should have overcome the poorer bioavailability! But on the gripping hand, these papers are mentioning something about the liver metabolizing nicotine when absorbed through the stomach, so…
The stop-signal task has been used in a number of laboratories to study the effects of stimulants on cognitive control. In this task, subjects are instructed to respond as quickly as possible by button press to target stimuli except on certain trials, when the target is followed by a stop signal. On those trials, they must try to avoid responding. The stop signal can follow the target stimulus almost immediately, in which case it is fairly easy for subjects to cancel their response, or it can come later, in which case subjects may fail to inhibit their response. The main dependent measure for stop-signal task performance is the stop time, which is the average go reaction time minus the interval between the target and stop signal at which subjects inhibit 50% of their responses. De Wit and colleagues have published two studies of the effects of d-AMP on this task. De Wit, Crean, and Richards (2000) reported no significant effect of the drug on stop time for their subjects overall but a significant effect on the half of the subjects who were slowest in stopping on the baseline trials. De Wit et al. (2002) found an overall improvement in stop time in addition to replicating their earlier finding that this was primarily the result of enhancement for the subjects who were initially the slowest stoppers. In contrast, Filmore, Kelly, and Martin (2005) used a different measure of cognitive control in this task, simply the number of failures to stop, and reported no effects of d-AMP.
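The stop time described here is mechanically simple to compute once the go reaction times and the 50%-inhibition stop-signal delay are known. A minimal sketch in Python (the numbers are invented purely for illustration):

```python
import statistics

def stop_time(go_rts_ms, ssd_50_ms):
    """Stop time (stop-signal reaction time): mean go reaction time
    minus the target-to-stop-signal interval at which the subject
    inhibits 50% of responses."""
    return statistics.mean(go_rts_ms) - ssd_50_ms

# Hypothetical data: go-trial reaction times (ms) and a staircase-estimated
# delay at which inhibition succeeded on half the stop trials.
go_rts = [412, 398, 455, 430, 401, 388, 467, 440]
ssd_50 = 180

print(f"stop time = {stop_time(go_rts, ssd_50):.1f} ms")  # ~243.9 ms
```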
The truth is that, almost 20 years ago when my brain was failing and I was fat and tired, I did not know to follow this advice. I bought $1000 worth of smart drugs from Europe, took them all at once out of desperation, and got enough cognitive function to save my career and tackle my metabolic problems. With the information we have now, you don't need to do that. Please learn from my mistakes!
The majority of smart pills target a limited number of cognitive functions, which is why a group of experts gathered to discover a formula which will empower the entire brain and satisfy the needs of students, athletes, and professionals. Mind Lab Pro® combines 11 natural nootropics to affect all 4 areas of mental performance, unlocking the full potential of your brain. Its carefully designed formula will provide an instant boost, while also delivering long-term benefits.
It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it can be as mild as forgetting the names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as the late twenties or early thirties.
Neuroplasticity, or the brain's ability to change and reorganize itself in response to intrinsic and extrinsic factors, indicates great potential for us to enhance brain function by medical or other interventions. Psychotherapy has been shown to induce structural changes in the brain. Other interventions that positively influence neuroplasticity include meditation, mindfulness, and compassion.
MPH was developed more recently and marketed primarily for ADHD, although it is sometimes prescribed off label or used nonmedically to increase alertness, energy, or concentration in conditions other than ADHD. Both MPH and AMP are on the list of substances banned from sports competitions by the World Anti-Doping Agency (Docherty, 2008). Both also have the potential for abuse and dependence, which detracts from their usefulness and is the reason for their classification as Schedule II controlled substances. Although the risk of developing dependence on these drugs is believed to be low for individuals taking them for ADHD, the Schedule II classification indicates that these drugs have a high potential for abuse and that abuse may lead to severe dependence.
Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth, or ~$10 a year, for an NPV cost of $\frac{10}{\ln 1.05} \approx \$205$, versus a 20% chance of $2000, i.e. an expected $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine.
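A quick way to check that arithmetic is to compute the perpetuity NPV and the expected value directly; the following sketch just reproduces the figures above (the 5% discount rate and the 20%/$2000 payoff are the assumptions stated in the text):

```python
import math

annual_cost = 10       # ~$10/year of iodine
discount_rate = 0.05   # 5% annual discounting

# NPV of paying $10/year indefinitely: 10 / ln(1.05) ~= $205
npv_cost = annual_cost / math.log(1 + discount_rate)

# 20% chance of a $2000 gain -> expected benefit of $400
expected_benefit = 0.20 * 2000

print(f"NPV cost      ~= ${npv_cost:.0f}")            # $205
print(f"Expected gain  = ${expected_benefit:.0f}")    # $400
print("Worth taking?", expected_benefit > npv_cost)   # True
```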
One might suggest just going to the gym or doing other activities which may increase endogenous testosterone secretion. This would be unsatisfying to me as it introduces confounds: the exercise may be doing all the work in any observed effect, and certainly can't be blinded. And blinding is especially important because the 2011 review discusses how some studies report that the famed influence of testosterone on aggression (e.g. Wedrifid's anecdote above) is a placebo effect caused by the folk wisdom that testosterone causes aggression & rage!
Another interpretation of the mixed results in the literature is that, in some cases at least, individual differences in response to stimulants have led to null results when some participants in the sample are in fact enhanced and others are not. This possibility is not inconsistent with the previously mentioned ones; both could be at work. Evidence has already been reviewed that ability level, personality, and COMT genotype modulate the effect of stimulants, although most studies in the literature have not broken their samples down along these dimensions. There may well be other as-yet-unexamined individual characteristics that determine drug response. The equivocal nature of the current literature may reflect a mixture of substantial cognitive-enhancement effects for some individuals, diluted by null effects or even counteracted by impairment in others.
So what about the flip side: a drug to erase bad memories? It may have failed Jim Carrey in Eternal Sunshine of the Spotless Mind, but neuroscientists have now discovered an amnesia drug that can dull the pain of traumatic events. The drug, propranolol, was originally used to treat high blood pressure and heart disease. Doctors noticed that patients given the drug suffered fewer signs of stress when recalling a trauma.
A television advertisement goes: "It's time to let Focus Factor be your memory-fog lifter." But is this supplement up to the task? Focus Factor wastes no time, whether in paid airtime or free online presence: it claims to be America's #1 selling brain health supplement, with more than 4 million bottles sold and millions across the country actively caring for their brain health. It deems itself instrumental in helping anyone stay focused and on top of his game at home, work, or school.
Theanine can also be combined with caffeine, as the two work in synergy to increase memory, reaction time, and mental endurance. The best part about Theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand whose tea is grown in the shade, because then Theanine would be abundantly present in it.
Phenotropil (Phenylpiracetam) is an over-the-counter supplement similar in structure to Piracetam (and Noopept). This synthetic smart drug has been used in the treatment of stroke and epilepsy and in trauma recovery. A 2005 research paper also demonstrated that patients diagnosed with natural lesions or brain tumours saw improvements in cognition. Phenylpiracetam intake can also result in minimised feelings of anxiety and depression. This is one of the more powerful unscheduled nootropics available.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use 00 gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)
1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. Some natural nootropics include Ginkgo biloba and ginseng. One benefit of using natural nootropics is that they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropic blends contain Rhodiola rosea, Panax ginseng, and more.
Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity. It comes in a natural base containing a potent standardized extract of 24% flavonoid glycosides. Fast-acting, super-potent formula. A unique formulation containing a blend of essential nutrients, herbs and co-factors.
Too much caffeine may be bad for bone health because it can deplete calcium. Overdoing the caffeine also may affect the vitamin D in your body, which plays a critical role in your body's bone metabolism. However, the roles of vitamin D as well as caffeine in the development of osteoporosis continue to be a source of debate. Caffeine may interfere with your body's metabolism of vitamin D, according to a 2007 Journal of Steroid Biochemistry & Molecular Biology study. You have vitamin D receptors, or VDRs, in your osteoblast cells. These large cells are responsible for the mineralization and synthesis of bone in your body. They create a sheet on the surface of your bones. The D receptors are nuclear hormone receptors that control the action of vitamin D-3 by controlling hormone-sensitive gene expression. These receptors are critical to good bone health. For example, a vitamin D metabolism disorder in which these receptors don't work properly causes rickets.
Increasing incidences of chronic diseases such as diabetes and cancer are also driving growth in the global smart pills market. The above-mentioned factors have increased the need for on-site diagnosis, which can be achieved by smart pills. Moreover, the expanding geriatric population and the resulting increase in degenerative diseases have increased demand for smart pills.
December 2019, 24(12): 6367-6385. doi: 10.3934/dcdsb.2019143
Dynamics of a stochastic hepatitis C virus system with host immunity
Tao Feng 1, Zhipeng Qiu 1,* and Xinzhu Meng 2
Department of Applied Mathematics, Nanjing University of Science and Technology, Nanjing 210094, China
College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
* Corresponding author: Zhipeng Qiu
Received: September 2018; Revised: December 2018; Published: December 2019; Early access: July 2019
Fund Project: T. Feng is supported by the Scholarship Foundation of China Scholarship Council grant No. 201806840120, the Postgraduate Research & Practice Innovation Program of Jiangsu Province grant No. KYCX18_0370 and the Fundamental Research Funds for the Central Universities grant No. 30918011339, Z. Qiu is supported by the National Natural Science Foundation of China (NSFC) grant No. 11671206, X. Meng is supported by the Research Fund for the Taishan Scholar Project of Shandong Province of China and the SDUST Research Fund (2014TDJH102).
Figures: 7 / Tables: 1
In this paper, stochastic differential equations that model the dynamics of a hepatitis C virus are derived from a system of ordinary differential equations. The stochastic model incorporates host immunity. Firstly, the existence of a unique ergodic stationary distribution is derived by using the theory of Khasminskii. Secondly, sufficient conditions are obtained for the destruction of hepatocytes and the convergence of target cells. Moreover, based on realistic parameters, numerical simulations are carried out to illustrate the analytical results. These results highlight the role of environmental noise in the spread of hepatitis C viruses. The theoretical work extends the results of the corresponding deterministic system.
Keywords: Hepatitis C virus model, stochastic noise, stationary distribution and ergodicity, extinction, invariant measure.
Mathematics Subject Classification: Primary: 92B05, 92D30; Secondary: 60H10.
Citation: Tao Feng, Zhipeng Qiu, Xinzhu Meng. Dynamics of a stochastic hepatitis C virus system with host immunity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6367-6385. doi: 10.3934/dcdsb.2019143
L. Allen, B. Bolker, Y. Lou and A. Nevai, Asymptotic profiles of the steady states for an SIS epidemic patch model, SIAM Journal on Applied Mathematics, 67 (2007), 1283-1309. doi: 10.1137/060672522.
S. Banerjee, R. Keval and S. Gakkhar, Modeling the dynamics of hepatitis C virus with combined antiviral drug therapy: Interferon and ribavirin, Mathematical Biosciences, 245 (2013), 235-248. doi: 10.1016/j.mbs.2013.07.005.
M. Barczy and G. Pap, Portmanteau theorem for unbounded measures, Statistics and Probability Letters, 76 (2006), 1831-1835. doi: 10.1016/j.spl.2006.04.025.
B. Berrhazi, M. E. Fatini, T. Caraballo and R. Pettersson, A stochastic SIRI epidemic model with Lévy noise, Discrete and Continuous Dynamical Systems-B, 23 (2018), 2415-2431. doi: 10.3934/dcdsb.2018057.
G. Blé, L. Esteva and A. Peregrino, Global analysis of a mathematical model for hepatitis C considering the host immune system, Journal of Mathematical Analysis and Applications, 461 (2018), 1378-1390. doi: 10.1016/j.jmaa.2018.01.050.
T. Britton and A. Traoré, A stochastic vector-borne epidemic model: Quasi-stationarity and extinction, Mathematical Biosciences, 289 (2017), 89-95. doi: 10.1016/j.mbs.2017.05.004.
Y. Cai, Y. Kang, M. Banerjee and W. Wang, A stochastic SIRS epidemic model with infectious force under intervention strategies, Journal of Differential Equations, 259 (2015), 7463-7502. doi: 10.1016/j.jde.2015.08.024.
T. Caraballo, M. E. Fatini, R. Pettersson and R. Taki, A stochastic SIRI epidemic model with relapse and media coverage, Discrete and Continuous Dynamical Systems-B, 23 (2018), 3483-3501. doi: 10.3934/dcdsb.2018250.
Z. Chang, X. Meng and T. Zhang, A new way of investigating the asymptotic behaviour of a stochastic SIS system with multiplicative noise, Applied Mathematics Letters, 87 (2019), 80-86. doi: 10.1016/j.aml.2018.07.014.
N. Dalal, D. Greenhalgh and X. Mao, A stochastic model for internal HIV dynamics, Journal of Mathematical Analysis and Applications, 341 (2008), 1084-1101. doi: 10.1016/j.jmaa.2007.11.005.
N. T. Dieu, D. H. Nguyen, N. H. Du and G. Yin, Classification of asymptotic behavior in a stochastic SIR model, SIAM Journal on Applied Dynamical Systems, 15 (2016), 1062-1084. doi: 10.1137/15M1043315.
N. M. Dixit, J. E. Layden-Almer, T. J. Layden and A. S. Perelson, Modelling how ribavirin improves interferon response rates in hepatitis C virus infection, Nature, 432 (2004), 922-924. doi: 10.1038/nature03153.
T. Feng, Z. Qiu and X. Meng, Analysis of a stochastic recovery-relapse epidemic model with periodic parameters and media coverage, Journal of Applied Analysis and Computation, 9 (2019), 1-15. doi: 10.11948/2156-907X.20180231.
T. Feng and Z. Qiu, Global dynamics of deterministic and stochastic epidemic systems with nonmonotone incidence rate, International Journal of Biomathematics, 11 (2018), Paper No. 1850101, 24 pp. doi: 10.1142/S1793524518501012.
T. Feng and Z. Qiu, Global analysis of a stochastic TB model with vaccination and treatment, Discrete and Continuous Dynamical Systems-B, 24 (2019), 2923-2939. doi: 10.3934/dcdsb.2018292.
T. Feng, Z. Qiu, X. Meng and L. Rong, Analysis of a stochastic HIV-1 infection model with degenerate diffusion, Applied Mathematics and Computation, 348 (2019), 437-455. doi: 10.1016/j.amc.2018.12.007.
Z. Feng and H. Thieme, Endemic models with arbitrarily distributed periods of infection II: Fast disease dynamics and permanent recovery, SIAM Journal on Applied Mathematics, 61 (2000), 983-1012. doi: 10.1137/S0036139998347846.
D. J. Higham, An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM Review, 43 (2001), 525-546. doi: 10.1137/S0036144500378302.
S. Jerez, S. Díaz-Infante and B. Chen, Fluctuating periodic solutions and moment boundedness of a stochastic model for the bone remodeling process, Mathematical Biosciences, 299 (2018), 153-164. doi: 10.1016/j.mbs.2018.03.006.
J. Jiang and Z. Qiu, The complete classification for dynamics in a nine-dimensional West Nile virus model, SIAM Journal on Applied Mathematics, 69 (2009), 1205-1227. doi: 10.1137/070709438.
J. Jiang, Z. Qiu, J. Wu and H. Zhu, Threshold conditions for West Nile virus outbreaks, Bulletin of Mathematical Biology, 71 (2009), 627-647. doi: 10.1007/s11538-008-9374-6.
R. Khasminskii, Stochastic Stability of Differential Equations, Stochastic Modelling and Applied Probability, 66, Springer, Heidelberg, 2012. doi: 10.1007/978-3-642-23280-0.
D. Li, J. Cui, M. Liu and S. Liu, The evolutionary dynamics of stochastic epidemic model with nonlinear incidence rate, Bulletin of Mathematical Biology, 77 (2015), 1705-1743. doi: 10.1007/s11538-015-0101-9.
M. Liu and C. Bai, Analysis of a stochastic tri-trophic food-chain model with harvesting, Journal of Mathematical Biology, 73 (2016), 597-625. doi: 10.1007/s00285-016-0970-z.
X. Mao, G. Marion and E. Renshaw, Environmental Brownian noise suppresses explosions in population dynamics, Stochastic Processes and their Applications, 97 (2002), 95-110. doi: 10.1016/S0304-4149(01)00126-0.
X. Meng, S. Zhao, T. Feng and T. Zhang, Dynamics of a novel nonlinear stochastic SIS epidemic model with double epidemic hypothesis, Journal of Mathematical Analysis and Applications, 433 (2016), 227-242. doi: 10.1016/j.jmaa.2015.07.056.
A. U. Neumann, N. P. Lam, H. Dahari, D. R. Gretch, T. E. Wiley, T. J. Layden and A. S. Perelson, Hepatitis C viral dynamics in vivo and the antiviral efficacy of interferon-$\alpha$ therapy, Science, 282 (1998), 103-107.
Z. Qiu, M. Y. Li and Z. Shen, Global dynamics of an infinite dimensional epidemic model with nonlocal state structures, Journal of Differential Equations, 265 (2018), 5262-5296. doi: 10.1016/j.jde.2018.06.036.
L. Rong, R. M. Ribeiro and A. S. Perelson, Modeling quasispecies and drug resistance in hepatitis C patients treated with a protease inhibitor, Bulletin of Mathematical Biology, 74 (2012), 1789-1817. doi: 10.1007/s11538-012-9736-y.
I. Rusyn and S. M. Lemon, Mechanisms of HCV-induced liver cancer: What did we learn from in vitro and animal studies?, Cancer Letters, 345 (2014), 210-215. doi: 10.1016/j.canlet.2013.06.028.
S. Sengupta, P. Das and D. Mukherjee, Stochastic non-autonomous Holling type- prey-predator model with predator's intra-specific competition, Discrete and Continuous Dynamical Systems-B, 23 (2018), 3275-3296. doi: 10.3934/dcdsb.2018244.
C. W. Shepard, L. Finelli and M. J. Alter, Global epidemiology of hepatitis C virus infection, The Lancet Infectious Diseases, 5 (2005), 558-567. doi: 10.1016/S1473-3099(05)70216-4.
A. Skorokhod, Asymptotic Methods in the Theory of Stochastic Differential Equations, Translations of Mathematical Monographs, 78, American Mathematical Society, Providence, RI, 1989.
B. Stephenson, C. Lanzas, S. Lenhart and J. Day, Optimal control of vaccination rate in an epidemiological model of Clostridium difficile transmission, Journal of Mathematical Biology, 75 (2017), 1693-1713. doi: 10.1007/s00285-017-1133-6.
Q. Yang, D. Jiang, N. Shi and C. Ji, The ergodicity and extinction of stochastically perturbed SIR and SEIR epidemic models with saturated incidence, Journal of Mathematical Analysis and Applications, 388 (2012), 248-271. doi: 10.1016/j.jmaa.2011.11.072.
S. Zhang, X. Meng, T. Feng and T. Zhang, Dynamics analysis and numerical simulations of a stochastic non-autonomous predator-prey system with impulsive effects, Nonlinear Analysis: Hybrid Systems, 26 (2017), 19-37. doi: 10.1016/j.nahs.2017.04.003.
Y. Zhang, K. Fan, S. Gao and S. Chen, A remark on stationary distribution of a stochastic SIR epidemic model with double saturated rates, Applied Mathematics Letters, 76 (2018), 46-52. doi: 10.1016/j.aml.2017.08.002.
Y. Zhao, S. Yuan and J. Ma, Survival and stationary distribution analysis of a stochastic competitive model of three species in a polluted environment, Bulletin of Mathematical Biology, 77 (2015), 1285-1326. doi: 10.1007/s11538-015-0086-4.
C. Zhu and G. Yin, Asymptotic properties of hybrid diffusion systems, SIAM Journal on Control and Optimization, 46 (2007), 1155-1179. doi: 10.1137/060649343.
Figure 1. Trajectories of the system (2) and its deterministic system (1)
Figure 2. Density distribution of the system (2)
Figure 7. Trajectories of the solution of the system (2) and its deterministic system (1)
Table 1. Variables and parameters for HCV spread

| Symbol | Description | Value |
|---|---|---|
| $ H_s $ | concentration of target cells (initial value) | $ 1000\; \rm{mm}^{-3} $ |
| $ H_i $ | concentration of infected liver cells (initial value) | 0 |
| $ V $ | concentration of viral load (initial value) | $ 10^{-2}\; \rm{mm}^{-3} $ |
| $ T $ | concentration of T killer cells (initial value) | 0 |
| $ \beta_s $ | production rate of target cells | $ 20\; \rm{day}^{-1}\; \rm{mm}^{-3} $ |
| $ k $ | scaled transmission rate between target cells and infected liver cells | $ 3.5\times10^{-5}\; \rm{mm}^{3}\; \rm{day}^{-1} $ |
| $ \mu_s $ | death rate of target cells | $ 0.03\; \rm{day}^{-1} $ |
| $ \delta $ | rate at which T cells destroy infected liver cells | $ 2.5\times10^{-5}\; \rm{mm}^{3}\; \rm{day}^{-1} $ |
| $ \mu_i $ | death rate of infected liver cells | $ 0.02\; \rm{day}^{-1} $ |
| $ p $ | – | $ 0.003\; \rm{day}^{-1} $ |
| $ \beta_T $ | reproduction rate of T killer cells | $ 3.0\times10^{-5}\; \rm{mm}^{3}\; \rm{day}^{-1} $ |
| $ T_{max} $ | the maximum of T killer cells in the body | $ 2000\; \rm{mm}^{-3} $ |
| $ \mu_T $ | death rate of T killer cells | $ 0.01\; \rm{day}^{-1} $ |
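The reference list includes Higham (2001), the standard algorithmic introduction to simulating SDEs, which suggests the paper's sample paths were generated by an Euler-Maruyama scheme. The sketch below is a minimal illustration of that scheme using the Table 1 values. The drift is an assumed four-compartment HCV model written from the Table 1 symbols (the paper's actual systems (1) and (2) are not reproduced on this page), and the multiplicative noise intensities, the virus-loss term, and the nonzero initial T-cell level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Table 1 parameter values (day^-1 and mm^-3 units as listed above).
beta_s, k, mu_s = 20.0, 3.5e-5, 0.03
delta, mu_i, p = 2.5e-5, 0.02, 0.003
beta_T, T_max, mu_T = 3.0e-5, 2000.0, 0.01
sigma = np.array([0.05, 0.05, 0.05, 0.05])  # assumed noise intensities

def drift(x):
    """Assumed deterministic right-hand side built from the Table 1
    symbols; the paper's exact equations may differ."""
    Hs, Hi, V, T = x
    dHs = beta_s - mu_s * Hs - k * Hs * V
    dHi = k * Hs * V - mu_i * Hi - delta * Hi * T
    dV = p * Hi - k * Hs * V              # assumed viral production/loss
    dT = beta_T * Hi * T * (1 - T / T_max) - mu_T * T
    return np.array([dHs, dHi, dV, dT])

dt, n_steps = 0.01, 50_000                # 500 simulated days
x = np.array([1000.0, 0.0, 1e-2, 1.0])    # Table 1 initial values (T > 0 assumed)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=4)   # Brownian increments
    x = x + drift(x) * dt + sigma * x * dW      # Euler-Maruyama step
    x = np.maximum(x, 0.0)                      # crude positivity guard

print(x)  # endpoint of one path; repeated runs sample the stationary spread
```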
BMC Pregnancy and Childbirth
December 2019, 19:229
Nutritional service needs of pregnant and lactating adolescent girls in Trans-Mara East Sub-County, Narok County: focus on access and utilization of nutritional advice and services
David Omondi Okeyo
Sussy Gumo
Elly O. Munde
Charles O. Opiyo
Zablon O. Omungo
Maureen Olyaro
Rachel K. Ndirangu
Nanlop Ogbureke
Sophie Efange
Collins Ouma
Maternal health and pregnancy
Abstract

Background

An understanding of the association between adolescent nutrition, adolescent pregnancy and adolescents' quest for healthcare services may elucidate a basis for intervention and for the formulation of programs that improve post-partum outcomes, increase the lifespan of the newborn, improve quality of life and reduce morbidity, mortality and healthcare-associated costs. However, the nutritional needs of pregnant and lactating adolescent girls aged 10–19 years resident in Trans Mara East Sub-County, Kenya remained unestablished. The objective of this study was to assess the nutritional needs of pregnant and lactating adolescent girls (aged 10–19 years) when accessing and utilizing nutritional advice and services in Trans-Mara East Sub-County, Narok County.
Methods

The study adopted a cross-sectional approach that employed mixed methods with both quantitative and qualitative research techniques. The Cochran formula was applied to arrive at a minimum sample of 292 adolescents. Probability proportionate-to-size sampling techniques using cluster and simple random methods were used to practically access adolescents who were pregnant or lactating. Data was collected using questionnaires, in-depth interviews and Focus Group Discussions. Quantitative data was analyzed descriptively using frequencies and inferentially using odds ratios and the z-test. Framework analysis was employed to analyze qualitative data. p ≤ 0.05 was considered statistically significant.
Results

The study revealed that access to nutrition-related advice (67.8%) was significantly higher than the expected frequency of 50%. Nutrition supplementation, food fortification or blending, and complementary feeding were significantly below the expected frequency of 50% (p < 0.01). Nutrition service areas such as the provision and collection of vitamin A and IFAS were utilized at frequencies significantly lower than expected (p < 0.01).
Conclusions

The most widely utilized nutrition services were preventive-focused, followed by curative-focused services. Attention from a nutritionist or a nurse was more likely to increase overall utilization of nutrition services.
Keywords: Nutritional needs; Adolescent; Lactating; Pregnant; Kenya
Abbreviations

CHVs: Community Health Volunteers
FGDs: Focus Group Discussions
IFAS: Iron and Folic Acid Supplementation
ITNs: Insecticide-Treated Nets
KDHS: Kenya Demographic and Health Survey
KNDI: Kenya Nutritionists and Dieticians Institute
RUSFs: Ready-to-use Supplementary Foods
RUTFs: Ready-to-use Therapeutic Foods
Background

Adolescence is a stage when puberty sets in and a huge window of opportunity opens up, during which nutritional needs increase. There are hormonal changes due to the onset of puberty, and increased protein, energy, iron, and calcium requirements. Children normally gain up to 50% of their adult weight and skeletal mass, and acquire close to 20% of their height, during adolescence [1].
Optimal nutrition becomes essential to attain full growth potential. A nutritional deficiency at this formative stage of life can be detrimental to the individual's future health and even to that of the offspring. For instance, failure to consume a nutritious diet at this stage can lead to retarded sexual maturation and slowed physical growth [1, 2]. When adolescence is compounded by pregnancy, nutrition requirements become even more demanding. Pregnancy presents another special stage in life that has the potential to positively impact maternal health and that of the succeeding generation. Adequate nutrition is imperative to meet the added nutrient demands of the mother's body and of the growing fetus, and it instills a strong biological basis for the present and future health, productivity and well-being of the mother [3].
Other investigators further demonstrate the devastating effects of poor nutritional status carried from adolescence into motherhood: maternal body composition, altered metabolism and the supply of nutrients to the placenta can positively or negatively influence fetal development and growth, and are interrelated with pregnancy outcome. Moreover, the relationship between nutritional status and pregnancy in adolescence is complex and is attributable to different biological, economic, demographic and social factors, which vary widely depending on the population in question [3].
An understanding of the association between adolescent nutrition, adolescent pregnancy and adolescents' quest for healthcare services may elucidate a basis for intervention and for the formulation of programs that improve post-partum outcomes, increase the lifespan of the newborn, improve quality of life and reduce morbidity, mortality and healthcare-associated costs [4]. Whereas these windows of opportunity are critical, it turns out to be vital that adolescent girls be singled out in order to halt the cycle of malnutrition. This is extremely critical for the projected 10 million girls below 18 years who get married every year [5] and the 16 million adolescent girls who give birth each year [6]. In Kenya, the available data from the Kenya Demographic and Health Survey (KDHS) 2014 are not disaggregated and thus do not necessarily provide specific information on adolescents aged 10–19 years. Yet these adolescents face many risks and challenges pertaining to their nutrition, health and education status.
In Narok County, 40% of girls aged 15–19 years have begun child bearing, almost two times the Kenyan national level (18%), yet to date, no study has established the unmet needs of pregnant and lactating adolescent girls (aged 10–19 years) in accessing and utilizing nutritional advice and services in Trans Mara East Sub-County within Narok County, Kenya. In this region, malnutrition is increasing due to a rising population (attributable to high teenage pregnancies and low education levels) with minimal access to the existing nutritional services. The nutrition services and needs of adolescents have not yet been fully explored to lay a foundation for interventions. It is against this background that the current study focused on elucidating the needs of pregnant and lactating adolescent girls (aged 10–19 years) in accessing and utilizing nutritional advice and services in Trans Mara East Sub-County within Narok County, Kenya.
Study setting and research design
This study was conducted within Narok County, where 40% of girls aged 15–19 years have begun child bearing, a figure almost two times higher than the national level (18%). Specifically, 7.4% are pregnant with their first child and 33% have ever given birth, compared to the national levels of 3.4 and 14.7%, respectively. These statistics are consistent with the risks facing adolescents in Kenya, which include but are not limited to: high HIV infections, particularly among girls (16% of people living with HIV are aged 10–24 years); high teenage pregnancies (18%); early marriages (11%) among older adolescents (15–19 years); persistent female genital mutilation (11%); high rates of anaemia (41%) among pregnant adolescents; high numbers of adolescents exposed to sexual violence (11%) and physical violence (50%); as well as low secondary school attendance, with a net ratio of 47%. All these risks further perpetuate the vulnerability of this age group.
The study was carried out in Trans Mara East Sub-County within Narok County. Trans Mara East Sub-County was purposively selected since it is the smallest in size (275.4 km2), among the four sub-counties in Narok County and had the highest prevalence of teenage pregnancies based on previous survey (Christian Aid, 2018 unpublished data). To achieve the objectives of this formative study, a cross-sectional study employing concurrent mixed methods approaches with both quantitative and qualitative research techniques was applied.
Study population and sampling technique
Population of study
The primary study population comprised all pregnant and lactating adolescent girls (aged 10–19 years) resident in Trans Mara East Sub-County, assuming that the prevalence of pregnant and lactating mothers was 50% within the entire Sub-County, from which a sample was drawn. Pregnant adolescents were eligible for the study when they were three months pregnant, and lactating adolescent girls were eligible if they had children aged ≤24 months.
Sample size determination for quantitative approach
Sample size was determined using the Cochran formula [7], which allowed for calculation of an ideal sample size given a desired level of precision, desired confidence level, and the estimated proportion of the attribute present in the population. The following formula was applied;
$$ n=\frac{(pq){z}^2}{e^2} $$
Where: n = minimum sample size (for population > 10,000) required.
Z = the standard normal deviate at the required confidence level, (set at 1.96 corresponding to 95%, Confidence level adopted for this study).
p = the population proportion estimated to be pregnant/lactating, taken as 50%.
q = 1-p.
e = the degree of accuracy required (usually set at 0.05).
$$ n=\frac{\left(0.5\times 0.5\right){1.96}^2}{0.05^2} $$
n = 384 adolescents + 10% non-response.
However, since the Z-value was finally set at 1.645, corresponding to a 90% confidence level, the minimum sample size was:
$$ \frac{\left(0.5\times 0.5\right){1.645}^2}{0.05^2} $$
=292 adolescents.
The final sample size obtained was 337 adolescents who were either pregnant or lactating. Proportionate distribution was done across 25 clusters equivalent to villages and by adolescent status (i.e. pregnant or lactating).
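As a cross-check, the Cochran computation is easy to reproduce; this sketch uses the standard normal quantile and the 95%-confidence inputs stated above:

```python
def cochran_n(p, z, e):
    """Cochran's minimum sample size for estimating a proportion p
    with margin of error e at normal quantile z."""
    return p * (1 - p) * z**2 / e**2

n_95 = cochran_n(p=0.5, z=1.96, e=0.05)
print(round(n_95))         # 384
print(round(n_95 * 1.10))  # 384 plus a 10% non-response allowance: 423
```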
Test for sample size adequacy
Based on the above formula, the minimum sample size at 90% confidence was 292 pregnant and lactating adolescents. However, given the nature of the questionnaire, where 90% of key variable measures were based on 5-point Likert scales, descriptive tests for sample size adequacy using the Kaiser-Meyer-Olkin measure and Bartlett's test of sphericity, generated by principal axis factoring, were applied to test for sample size adequacy, giving way to subsequent statistical tests.
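In code, the two adequacy checks can be run before any factoring. A sketch assuming the third-party factor_analyzer package and a hypothetical 335 x 12 matrix of Likert responses (real item data would replace the random matrix):

```python
import numpy as np
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical data: 335 respondents x 12 five-point Likert items.
rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(335, 12)).astype(float)

chi2, p_value = calculate_bartlett_sphericity(items)  # H0: identity correlation matrix
kmo_per_item, kmo_total = calculate_kmo(items)

# Common rules of thumb: factoring is defensible when p_value <= 0.05
# and overall KMO >= 0.6 (i.e. the sample is adequate).
print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.3f}; KMO = {kmo_total:.2f}")
```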
Sampling procedure
Cluster sampling was appropriate given the existing wards and villages. Probability sampling techniques using cluster and simple random methods were used to practically access adolescents who were either pregnant or lactating. An enumerator covered at least one village in a day to administer at least 8 questionnaires at random. Each enumerator moved to the center of the village selected for the day and began by facing the North direction. After that, eight papers representing North, North East, North West, East, South East, South, South West and West were randomized, and one was picked to inform the direction to walk. Once a direction was picked, the enumerator walked in a straight line to the next household, and onwards until he/she reached a household with an eligible adolescent. Once the first adolescent was interviewed, the enumerator again stood at the door of the just-completed house facing North and picked a direction from the randomized pieces of paper. The enumerator then walked to the next household. This process was repeated throughout the day until all eight adolescents were interviewed. An enumerator who reached the end of the village before completing the required number would go back to the center of the village and randomly select a new direction to walk. In case an enumerator re-selected the previous household, that household was passed until another eligible adolescent was reached. Each time, an enumerator strove to interview adolescents who were pregnant or lactating in the ratio of 3:7, interchangeably along the walk.
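The direction-randomization walk amounts to a simple loop; the sketch below is a schematic of the protocol only (household layout and eligibility are simulated, and the helper names are invented):

```python
import random

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]  # the eight slips of paper

def walk_until_eligible(direction, visited):
    """Placeholder for the physical walk: in the field this means moving
    house to house along `direction`, passing already-visited households,
    until one with an eligible adolescent is found. Here we just fabricate
    household IDs."""
    while True:
        household = f"{direction}-{random.randint(1, 200)}"
        if household not in visited:
            return household

def households_for_day(n=8):
    """Schematic day: from the village centre (then from each completed
    household's door), draw a random direction and walk to the next
    household with an eligible pregnant or lactating adolescent."""
    visited = []
    for _ in range(n):
        direction = random.choice(DIRECTIONS)
        visited.append(walk_until_eligible(direction, visited))
    return visited

print(households_for_day())
```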
Methods of data collection tools and process
Quantitative data was collected using an adolescent questionnaire targeting critical indicators of access and utilization (see Additional file 1). The questionnaire included indicators of access such as advice on healthy diet/diet diversity; exclusive breastfeeding; nutrient supplementation; food fortification and blending; and appropriate complementary feeding. Services targeted in the questionnaire included: provision and collection of IFAS; vitamin A supplementation for the child; sexual and reproductive health sensitive to nutrition, e.g. family planning; basic environmental hygiene and disease prevention, e.g. provision of ITNs; basic personal hygiene; regular nutrition assessment both at antenatal and post-natal care; child growth monitoring at post-natal care; nutrition referral for critical malnutrition episodes; nutrition support, e.g. mother-to-mother support; nutrition supplements, e.g. ready-to-use therapeutic/supplementary foods (RUTFs/RUSFs); regular follow-ups on utilization of services, e.g. through community strategy programmes; and lactation management and processes, e.g. normally done using lactation chart pathways.
The questionnaire was administered to each respondent by an enumerator over a period of about 45 min. Both open-ended and closed-ended questions were used. The questionnaire was administered to adolescents aged 10–19 years who were either pregnant or lactating. The questionnaire interview method was employed to gather data on the needs of pregnant and lactating adolescent girls (aged 10–19 years) in Trans Mara East Sub-County in accessing and utilizing nutritional advice and services, and to map out their current needs and how these are currently met.
Qualitative data was collected using a Focus Group Discussion (FGD) guide and the in-depth interview method. Three focus groups, targeting Community Health Volunteers, parents and a Mother-to-Mother Support Group, were conducted to understand issues surrounding the nutrition needs of adolescent girls who are pregnant or lactating. Each target group was made up of 6–10 members engaged in free discussion. In-depth interviews were conducted to obtain detailed information from adolescents who were pregnant or lactating. This method assisted in gathering data on the needs of pregnant and lactating adolescent girls (aged 10–19 years) in Trans Mara East Sub-County, Narok County in accessing and utilizing nutritional advice and services, and in mapping out their current needs. In addition, information was collected on the intervention measures that need to be considered for these needs to be met within the context of the Sub-County. The approach was appropriate for eliciting information that might not be shared in groups. In-depth interviews were conducted until saturation. At least 6 lactating and 6 pregnant adolescents, selected by convenience from among the 25 clusters, were engaged for in-depth interview. Major questions of the FGDs and in-depth interviews included: who provides nutrition advice and services for pregnant/lactating adolescents at the health facility; a mention of some of the nutrition advice/services provided at the facilities for pregnant/lactating adolescent mothers; how accessible these facilities are and how nutrition advice is conveyed to pregnant/lactating adolescents whenever they visit a facility to seek services; and, finally, the level of satisfaction with the nutrition and health information provided to the adolescents.
Quantitative data analysis adopted the use of descriptive and inferential statistics. Descriptive statistics were used to characterize different frequencies. The z-test for single proportions was used to test for significant differences between the actual frequencies and the expected frequency. The expected frequency was set at 50% for dichotomized data and at 100/n percent for data that had more than two options. Principal axis factoring was used to establish the access pattern as well as to generate Bartlett factor scores for further modeling, especially for indicators that were fitted into access and utilization models to determine cause and effect. p ≤ 0.05 was considered statistically significant.
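For concreteness, the single-proportion z-test used throughout the Results compares an observed frequency with the expected one; for example, for the 67.8% of respondents (n = 335) who received at least one piece of advice, tested against the 50% benchmark:

```python
import math

def one_sample_z(p_hat, p0, n):
    """z-statistic for an observed proportion p_hat against an
    expected proportion p0 with sample size n."""
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

z = one_sample_z(p_hat=0.678, p0=0.50, n=335)
print(f"z = {z:.2f}")  # ~6.52, well beyond the 1.96 cutoff for p <= 0.05
```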
Qualitative data analysis, on the other hand, adopted the use of framework analysis for both in-depth interviews and Focus Group Discussions. One key advantage of framework analysis is that although it uses a thematic approach, it allows themes to develop both from the research questions and from the narratives of research participants. The process of data analysis began during data collection, by skillfully facilitating the discussion and generating rich data from the interviews and FGDs, while complementing them with the observational notes and typing up the recorded information. This stage was followed by familiarization with the data, which was achieved by listening to voice recordings, reading the transcripts in their entirety several times and reading the observational notes taken during and after the interviews and/or FGDs. The aim was to become immersed in the details and get a sense of each interview as a whole before breaking it into parts. The next stage involved identifying a thematic framework, by writing narrative memos in the margins of the text in the form of short phrases, ideas or concepts arising from the texts, and beginning to develop categories. At this stage, descriptive statements were formed and an analysis carried out on the data under the questioning route. The third stage, indexing, comprised sifting the data, highlighting and sorting out quotes and making comparisons both within and between cases. The fourth stage, charting, involved lifting the quotes from their original context and re-arranging them under the newly developed appropriate thematic content.
Results

Access and utilization of nutrition advice and services among adolescents who are lactating and pregnant
Prior to establishing access to and utilization of nutrition advice and services, access to pieces of nutrition-related advice was assessed among pregnant and lactating adolescents. Results showed that the majority received at least one piece of advice, at a frequency of 67.8%, significantly higher than the expected frequency of 50%. However, a focus on five specific advice domains revealed that advice to both pregnant and lactating adolescents mainly concerned healthy diet/diet diversity (48.4%) and exclusive breastfeeding (41.8%). These two domains were below the 50% expected frequency, though not significantly so (p > 0.05). Nutrition supplementation, food fortification or blending, and complementary feeding were significantly below the expected frequency of 50% (p < 0.01) (Table 1).
Table 1 Distribution of adolescents who are lactating and pregnant by frequency of nutrition advice received. Advice need indicators (n = 335): proportion who received nutrition advice in the past 3 months; advice domains of healthy diet/diet diversity, exclusive breastfeeding, nutrition supplementation, food fortification and blending, and complementary feeding. z-values computed against a 50% expected frequency. * p < 0.05; ** p < 0.01, based on z-test
A further screening of 14 regular services offered to pregnant and lactating adolescent mothers revealed that 77.0% received at least one service, including nutrition education and counseling (53.4%) and regular nutrition assessment (50%). Child growth monitoring (48.1%) was slightly below the expected frequency of 50%, but this was not statistically significant (p > 0.05). Other critical nutrition-specific service areas screened for utilization included provision and collection of IFAS (45.7%), vitamin A supplementation (31.6%), regular follow-ups on utilization of services (27.2%), deworming (22.7%), mother-to-mother support (22.4%), nutrition supplementation (11.6%), nutrition referral (5.1%), and lactation management (3.3%). These categories of services were significantly below the expected frequency (p < 0.01 in all cases) (Table 2).
Table 2 Distribution of adolescents who are lactating and pregnant by frequency of nutrition services received. Service need indicators (n = 335): proportion who received any nutrition service in the past 3 months; service domains of provision and collection of IFAS, nutrition education and counselling, vitamin A supplementation for the child, sexual and reproductive health sensitive to nutrition (e.g., family planning), basic personal hygiene, regular nutrition assessment at antenatal and postnatal care, child growth monitoring at postnatal care, nutrition referral for critical malnutrition episodes, basic environmental hygiene and disease prevention (e.g., provision of ITNs), nutrition support (e.g., mother-to-mother support), nutrition supplements (e.g., ready-to-use therapeutic/supplementary foods, RUTFs/RUSFs), regular follow-ups on utilization of services (e.g., through community strategy programmes), and lactation management and processes (e.g., using lactation chart pathways). z-values computed against a 50% expected frequency. * p < 0.05; ** p < 0.01; *** p < 0.001, based on z-test
Discussions with adolescent mothers, CHVs, and parents revealed ongoing nutrition advice and counseling within the community and at the family level. The CHVs are trained on how adolescents who are lactating and pregnant should be cared for nutritionally. It emerged from the discussions, in particular with parents, that participants only knew about medical doctors and CHVs: any health personnel who attended to the adolescents' needs were generally referred to as a doctor, implying that the roles of other cadres were not properly distinguished. There is thus a possibility that a nutritionist or a nurse could easily be referred to as a 'doctor', as captured in the following FGD quotes:
"R10-always when they come to hospital, we always do follow –up as CHVs……."
R3- as CHV we are always trained on how to help those adolescents lactating and pregnant …
R3- when we go to the hospital, we meet guiding and counselling doctor who assist us…
R1- I think the person who provides is 'Daktari', also CHVS always give them direction on where to access.
Facility-related factors in access to nutrition advice and utilization of nutrition services
The study further examined support systems for access to health services by exploring the service provider pattern, facility sources, distance to source, and methods of conveying information to adolescents (both pregnant and lactating). In Trans Mara East Sub-County, nurses were the main providers of nutrition advice (55.2%), followed by CHVs (24.2%), nutritionists (21.5%), clinical officers (17.6%), social workers (3.0%), and pharmacists (1.8%), in that order. A test of significance revealed that nutritionists, who are supposed to be the key providers, registered a frequency significantly below the expected 50% (p < 0.01) as a service provider frequently used by the adolescents. Nurses, on the other hand, registered a higher but non-significant proportion relative to the expected frequency of 50% (p > 0.05).
Among facility-based sources of information, nutrition advice and services were sought mainly at the dispensary level (67.2%), significantly higher than the expected frequency of 50% (p < 0.01), followed by schools (22.1%). Further z-tests indicated that, other than dispensaries, all facility types recorded frequencies significantly below the expected 50% as sources of nutrition information.
Regarding distance to facility, the majority accessed services within a 1–5 km distance (67.2%), significantly higher than the 25% expected frequency (p < 0.01). The adolescents who are pregnant or lactating predominantly received information on nutrition and health through face-to-face interaction with a service provider (87.2%). Radio broadcasting or TV was also a useful method through which information was relayed, at 32.8% (Table 3).
Table 3 Distribution of adolescents who are lactating and pregnant by providers, sources, distance, and methods of conveying information on nutrition advice or services received (n = 335). Nutrition advice and service providers: nurse, clinical officer, CHVs, nutritionist, community development social worker, pharmacist. Facility type as source: public dispensaries, private clinics, public health centres, CBO and NGO health projects, FBO projects, schools. Distance to source: < 1 km, 1–5 km, 5–10 km, > 10 km. Methods of conveying advice: face-to-face interaction, IEC materials (e.g., brochures, leaflets), internet links, social media (e.g., WhatsApp and Facebook pages), radio broadcasting or TV. * p < 0.05; ** p < 0.01, based on z-test; CBO Community-Based Organization, NGO Non-Governmental Organization, FBO Faith-Based Organization, IEC Information, Education and Communication materials, SMS short messaging systems
The discussants, especially the adolescent mothers, re-affirmed their frequent visits to clinics for their children to undergo growth monitoring. The CHVs also conduct follow-ups, especially during immunizations, to attend to adolescents who are mothers. Information relay was confirmed by CHVs as occurring through one-on-one follow-ups at the household level and through community forums, mainly described as 'matangazo' (frequent community forums on matters of health). This is further captured in the following FGD quotes:
R7: "we as volunteers we usually go for follow-up to ensure they come to last schedule for immunizations…"
R6: "I go to visit clinic and they see the presentation of the child…"
R7: "At hospital the doctor assists also in weighing children and attending to their complications like diseases."
R4: "the way they get information is through us CHVs because we always do in-depth follow-up in community and also the community forum is always given matangazo there."
Nutrition services utilization pattern
The study further assessed seven nutrition service areas [Iron and Folic Acid Supplementation (IFAS), regular nutrition assessment, practice of diet quality, use of Insecticide-Treated Nets (ITNs), regular visits for nutrition education and counseling, Ready-to-Use Therapeutic/Supplementary Foods (RUTF/RUSF), and vitamin A supplementation] using adherence ratings to explore the adherence pattern (Table 4). Based on principal axis factor loadings, the seven adherence items generated two factors together accounting for 42.2% of the variance within the service domain.
Table 4 Utilization of critical nutrition services based on factor loadings. Adherence ratings for: collection and use of IFAS, regular nutrition assessment, practice of quality diet, use of RUTF/RUSF, vitamin A supplementation for the child (applicable for lactating mothers), use of ITNs, and regular visits for nutrition education and counseling. Variance based on rotated sums of squared loadings; minimum communality at eigenvalue of 1 = 0.4
Factor 1, which accounted for 29.9% of the service domain, loaded five items in the rotated matrix based on an eigenvalue of 1. These included collection of IFAS (communality, cum = 0.60), regular nutrition assessment (cum = 0.82), practice of diet quality (cum = 0.53), use of ITNs (cum = 0.44), and regular visits for nutrition education and counseling (cum = 0.68). This category of service adherence was labelled "preventive-focused services", with nutrition assessment being the most utilized, followed by regular visits for nutrition education and counseling, based on the communality coefficients.
Factor 2, which accounted for 12.3% of the nutrition service adherence domain, loaded two items in the rotated matrix: use of RUTF/RUSF (cum = 0.40) and vitamin A supplementation (cum = 0.73). This category of service adherence was labelled "curative-focused services".
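To make the factor analysis step concrete, below is a minimal sketch (in R, using the psych package) of principal axis factoring with two factors and extraction of Bartlett factor scores; the data frame name and its contents are assumptions, not the study's actual variable names.

    library(psych)

    # 'adherence' is assumed to be a data frame of the 7 adherence rating items
    paf <- fa(adherence, nfactors = 2, fm = "pa", rotate = "varimax")

    print(paf$loadings, cutoff = 0.3)  # rotated factor loadings (items per factor)
    paf$communality                    # communalities (the "cum" values above)
    paf$Vaccounted                     # variance accounted for by each factor

    # Bartlett factor scores for use as predictors in downstream models
    scores <- factor.scores(adherence, paf, method = "Bartlett")$scores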
Access to and utilization of nutrition services among adolescents is a complex phenomenon affected by a number of factors. Studies carried out in developing countries underscore the importance of socio-economic factors and the service delivery environment as key determinants of utilization of nutrition services among adolescents [4, 8, 9]. Other factors, such as distance from the health facility, level of education, lack of autonomy and decision-making power, cultural norms, religion, and the quality of service delivered, have also been associated with the level of access and utilization of nutrition services among adolescents [10, 11, 12]. The current study assessed the nutritional needs of pregnant and lactating adolescent girls (aged 10–19 years) in accessing and utilizing nutritional advice and services in Trans Mara East Sub-County, Narok County, Kenya.
Access to nutrition advice and services in Trans Mara East Sub-County, Narok County
The study demonstrated low coverage of advice among the 335 adolescents for supplementation, exclusive breastfeeding, food fortification and blending, and complementary feeding. This implies that delivery of nutrition advice to pregnant and lactating adolescents is weak in Trans Mara East Sub-County, Kenya. As previously noted, this scenario may lead to poor nutrition status for the adolescents and could interfere with the productivity and well-being of the mother [3]. Inadequate access to good information on healthy diet for adolescents who are pregnant may also lead to preterm deliveries, low-birth-weight babies, and anemia [13].
The low coverage of advice focused on complementary feeding, clearly evident in this study, would pose a challenge to children during the early stages of life, especially within their first 1000 days [14]. This is because growth retardation may be experienced immediately after 6 months of exclusive breastfeeding and may continue for some time [15]. Advice on effective complementary feeding should be provided to all adolescents who are lactating to prevent any possible shock during the initiation of complementary feeding or weaning. Failure to mitigate this aspect would create an immense risk to adolescents' proper utilization of services.
Supplementation, food fortification, and blending also play a key role during pregnancy and lactation. This study revealed low coverage of advice given to promote supplements, fortification, and blended food products, which could pose a high risk of malnutrition for mother and child. Fortification of commonly used foods, which serve as vehicles, presents opportunities for increasing nutrient intake in infants [16]. Inadequate knowledge of this intervention mechanism may be a high risk for an adolescent who is pregnant or lactating, potentially perpetuating poor nutrition in children born to them. Increasing a mother's knowledge of supplementation programmes may therefore also contribute to child growth by proxy, whereas low coverage of advice leads to poor maternal knowledge and heightens children's risk of poor growth. Utilization of services that support optimal nutrition for adolescents also displayed low coverage within Trans Mara East Sub-County. The most affected service areas were vitamin A supplementation, nutrition referrals, nutrition support, nutrition supplements, community follow-ups, and lactation management and processes. The performance of these indicators of access to services is critical for the adolescent nutrition situation. High coverage is needed to be certain that adolescents receive adequate nutrition-specific care to enjoy quality of life [4].
Utilization of nutrition services in Trans Mara East Sub-County, Narok County
In the current study, utilization of services was based on common routines at the facility level. The two categories of utilization demonstrated that the preventive component had a higher weight than the curative component of the regular nutrition care process. This is consistent with the fact that other treatments are in most cases anchored on assessment outcomes. Nutrition assessments are often done to assess body composition measures or the adequacy of diet. The nutrition status of pregnant women is useful in determining pregnancy outcomes, and through this a mother can be attended to on the basis of informed evidence, with appropriate baseline information on nutritional needs.
Other preventive components, such as collection of IFAS, practice of quality diet, nutrition education and counseling, and disease prevention, follow from a clear assessment outcome. Provision of IFAS is a preventive component of nutrition interventions that can reduce the incidence of anemia in mothers and low birth weight in neonates [17]. Other service domains, such as regular nutrition education, practice of quality diet, and use of ITNs, are all enhanced by the outcome of assessment. A malnourished adolescent, or a child born to an adolescent, would necessitate serious intervention, which in many cases is tailored through nutrition education, counseling, and the practice of a quality diet. Every so often, disease prevention mechanisms are incorporated mainly to prevent infection, as malnutrition lowers immunity, consequently augmenting the risk of infections.
Vitamin A supplementation for children and provision of RUSF are targeted interventions for high-risk cases. These are therapeutic products that target curative measures. Ready-to-Use Therapeutic Foods (RUTFs), such as plumpy nut and therapeutic milks, are food alternatives for medical complications such as loss of appetite, severe dehydration, edema, high fever, and anorexia. These are common problems in rural regions, where RUTFs act as an immediate remedy. Availability, access, and knowledge of how to use these products can greatly reduce the severity of malnutrition [18]. Vitamin A supplementation is significant for eye health, immune function, and fetal growth and development. Vitamin A deficiency results in visual loss exhibited through night blindness and, in children, may intensify the danger of illness and death from childhood infections, including measles and diarrhea-causing pathogens. Pregnant women become susceptible to vitamin A deficiency especially during the third trimester. At this stage, it is paramount that pregnant girls and women are advised to consume an optimally nutritious diet to reduce the chance of deficiency [19].
This study established that adolescent needs were partially being met, with 67.8% reporting that they received nutrition advice in at least one area of service. However, coverage of critical service domains was still below average. This included advice on healthy diet/diet diversity, exclusive breastfeeding, nutrition supplementation, food fortification and blending, and complementary feeding; the last three domains were significantly below average and would require much attention. On matters of utilization, the study isolated two domains of utilization of nutrition services. The most widely utilized were nutrition services that fall within the preventive-focused services, such as collection of IFAS, regular nutrition visits for counseling and education, practice of quality diet, and use of ITNs. The second level of utilization was curative-focused services, characterized by vitamin A supplementation and use of RUTF/RUSF. Based on the qualitative data analyses, nutritionists and nurses were the providers most likely to increase overall utilization of nutrition services.
Since limited educational support emerged as a barrier to utilization of nutrition services, nutrition programmes targeting pregnant and lactating adolescents should be set up in schools to provide relevant nutrition education, healthy meals, and supportive environments.
There is a need to increase access to adolescents within the community through better use of multiple avenues to reach them, including school-based, health system-based, and community-based approaches; marriage registries (where available) could also be used to target newly-wed adolescents and improve the coverage of community-level nutrition information.
We are grateful to the Anglican Church of Kenya (ACK), Kilgoris, for support. We are indebted to the study participants who took part in the study. We are also grateful to Tiphaine Valois, Paula Plaza, and Laura Adams from Christian Aid-UK for their technical advice during the study.
DOO, SG, EOM, COO, ZOO, MO, RKN, NO, SE, and CO: designed, carried out the study in the rural population and participated in the drafting of the manuscript. COO, DOO, CO: performed statistical analyses and participated in the drafting of the manuscript. All authors read and approved the final manuscript.
Christian Aid-UK provided funds for logistical support as part of their community outreach programs in Kenya. However, the funding body did not participate in the design of the study; the collection, analysis, or interpretation of data; or the writing of the manuscript.
The study was approved by the Maseno University Ethics Review Committee. Informed written consent was obtained from all study participants prior to carrying out the study. Participants below the legal consenting age in Kenya (18 years) provided written assent, and their parents/guardians provided additional written informed consent.
Additional file 1: Dataset supporting the conclusions of this article (XLSX 166 kb).
1. Alam N, Roy SK, Ahmed T, Ahmed AM. Nutritional status, dietary intake, and relevant knowledge of adolescent girls in rural Bangladesh. J Health Popul Nutr. 2010;28(1):86–94.
2. Morris JL, Rushwan H. Adolescent sexual and reproductive health: the global challenges. Int J Gynaecol Obstet. 2015;131(Suppl 1):S40–2.
3. Catalano RF, Fagan AA, Gavin LE, Greenberg MT, Irwin CE Jr, Ross DA, Shek DT. Worldwide application of prevention science in adolescent health. Lancet. 2012;379(9826):1653–64.
4. Singh AS, Mulder C, Twisk JW, van Mechelen W, Chinapaw MJ. Tracking of childhood overweight into adulthood: a systematic review of the literature. Obes Rev. 2008;9(5):474–88.
5. WHO, The Partnership for Maternal, Newborn & Child Health. Reaching child brides. Knowledge Summary #22. Geneva: World Health Organization; 2012. www.who.int/pmnch/knowledge/publications/summaries/ks22.pdf (accessed 9 April 2019).
6. WHO. Every newborn: an action plan to end preventable deaths. 2014.
7. Cochran WG. Sampling techniques. New York: Wiley; 1963.
8. Peters DH, Garg A, Bloom G, Walker DG, Brieger WR, Hafizur Rahman M. Poverty and access to health care in developing countries. Ann N Y Acad Sci. 2008;1136:161–71.
9. Rurangirwa AA, Mogren I, Nyirazinyoye L, Ntaganira J, Krantz G. Determinants of poor utilization of antenatal care services among recently delivered women in Rwanda; a population-based study. BMC Pregnancy Childbirth. 2017;17(1).
10. Ahmed S, Creanga AA, Gillespie DG, Tsui AO. Economic status, education and empowerment: implications for maternal health service utilization in developing countries. PLoS One. 2010;5(6):e11190.
11. Singh L, Rai R, Singh P. Assessing the utilization of maternal and child health care among married adolescent women: evidence from India. J Biosoc Sci. 2012;44(1):1–26.
12. Tarekegn SM, Lieberman LS, Giedraitis V. Determinants of maternal health service utilization in Ethiopia: analysis of the 2011 Ethiopian demographic and health survey. BMC Pregnancy Childbirth. 2014;14(161).
13. UNICEF. Ending child marriage: progress and prospects. New York: UNICEF; 2014.
14. Omondi DO. Impact of food fortification on child growth and development during complementary feeding. Ann Nutr Metab. 2018;73(Suppl 1):6–13.
15. Waterlow JC. Post-neonatal mortality in the third world. Lancet. 1988;2(8623):1303.
16. Angeles-Agdeppa I, Magsadia CR, Capanzana MV. Fortified juice drink improved iron and zinc status of schoolchildren. Asia Pac J Clin Nutr. 2011;20(4):535–43.
17. Imdad A, Bhutta ZA. Routine iron/folate supplementation during pregnancy: effect on maternal anaemia and birth outcomes. Paediatr Perinat Epidemiol. 2012;26(Suppl 1):168–77.
18. Huybregts L, Houngbe F, Salpeteur C, Brown R, Roberfroid D, Ait-Aissa M, Kolsteren P. The effect of adding ready-to-use supplementary food to a general food distribution on child nutritional status and morbidity: a cluster-randomized controlled trial. PLoS Med. 2012;9(9):e1001313.
19. Osrin D, Vaidya A, Shrestha Y, Baniya RB, Manandhar DS, Adhikari RK, Filteau S, Tomkins A, Costello AM. Effects of antenatal multiple micronutrient supplementation on birthweight and gestational duration in Nepal: double-blind, randomised controlled trial. Lancet. 2005;365(9463):955–62.
1. Kenya Nutritionists and Dieticians Institute, Nairobi, Kenya
2. School of Arts and Social Science, Department of Religion, Theology and Philosophy, Maseno University, Maseno, Kenya
3. Christian Aid-UK, Nairobi, Kenya
4. Christian Aid-UK, London, UK
5. School of Public Health and Community Development, Department of Biomedical Science and Technology, Maseno University, Maseno, Kenya
Okeyo, D.O., Gumo, S., Munde, E.O. et al. BMC Pregnancy Childbirth (2019) 19: 229. https://doi.org/10.1186/s12884-019-2391-7
Received 14 January 2019
Trees around a lake
The trembling aspen in its autumn colours
Strangler fig tree in Costa Rica. Locally known as Guanacaste
...and this shows how the strangler fig grows
A tree is a tall plant with a trunk and branches made of wood. Trees can live for many years. The oldest tree ever discovered is approximately 5,000 years old, and the oldest tree in the UK is about 1,000 years old. The four main parts of a tree are the roots, the trunk, the branches, and the leaves.
The roots of a tree are usually under the ground. However, this is not always true. The roots of the mangrove tree are often under water or on the sides of cliffs.[1] A single tree has many roots. The roots carry nutrients and water from the ground through the trunk and branches to the leaves of the tree. They can also breathe in air.[1] Sometimes, roots are specialized into aerial roots, which can also provide support, as is the case with the banyan tree.
The trunk is the main body of the tree. The trunk is covered with bark which protects it from damage. Branches grow from the trunk. They spread out so that the leaves can get more sunlight. The trunk also sways slightly in the wind to prevent it from falling over.
The leaves of a tree are green most of the time, but they can come in many colors, shapes and sizes. The leaves take in sunlight and use water and food from the roots to make the tree grow, and to reproduce.
Trees and shrubs take in water and carbon dioxide and, using sunlight, form sugars and give out oxygen. This is the opposite of what animals do in respiration. Plants also do some respiration using oxygen, the way animals do. They need oxygen as well as carbon dioxide to live.
Parts of trees
Branches and twigs.
Beech leaves
Tree roots anchor the structure and provide water and nutrients. The ground has eroded away around the roots of this young pine tree
The dark lines between the centre and the bark are medullary rays, which allow nutrients to flow across the tree trunk
The parts of a tree are the roots, trunk(s), branches, twigs and leaves. Tree stems are mainly made of support and transport tissues (xylem and phloem). Wood consists of xylem cells, and bark is made of phloem and other tissues external to the vascular cambium.
Growth of the trunk
As a tree grows, it may produce growth rings as new wood is laid down around the old wood. In areas with seasonal climate, wood produced at different times of the year may alternate light and dark rings. In temperate climates, and tropical climates with a single wet-dry season alternation, the growth rings are annual, each pair of light and dark rings being one year of growth. In areas with two wet and dry seasons each year, there may be two pairs of light and dark rings each year; and in some (mainly semi-desert regions with irregular rainfall), there may be a new growth ring with each rainfall.[2]
In tropical rainforest regions, with constant year-round climate, growth is continuous. Growth rings are not visible and there is no change in the wood texture. In species with annual rings, these rings can be counted to find the age of the tree. This way, wood taken from trees in the past can be dated, because the patterns of ring thickness are very distinctive. This is dendrochronology. Very few tropical trees can be accurately dated in this manner.
Roots
The roots of a tree are almost always underground, usually in a ball-shaped region centered under the trunk, and extending no deeper than the tree is high. Roots can also be above ground, or deep underground. Some roots are short, some are meters long.
Roots provide support for the parts above ground, holding the tree upright, and keeping it from falling over in high wind.
Roots take in water, and nutrients, from the soil. Without help from fungus for better uptake of nutrients, trees would be small or would die. Most trees have a favorite species of fungus that they associate with for this purpose.
Branches
Above ground, the trunk gives height to the leaf-bearing branches, competing with other plant species for sunlight. In all trees the shape of the branches improves the exposure of the leaves to sunlight. Branches start at the trunk, big and thick, and get progressively smaller the farther they grow from the trunk. Branches themselves split into smaller branches, sometimes many times, until at the end they are quite small. The small ends are called twigs.
Leaves
The leaves of a tree are held by the branches. Leaves are usually held at the ends of the branches, although some trees have leaves along the branches. The main functions of leaves are photosynthesis and gas exchange. A leaf is often flat, so it absorbs the most light, and thin, so that the sunlight can get to the green parts in the cells, which convert sunlight, carbon dioxide from the atmosphere, and water from the roots, into glucose and oxygen. Most of a tree's biomass comes from this process.
Most leaves have stomata, which open and close, and regulate carbon dioxide, oxygen, and water vapour exchange with the atmosphere.
Trees with leaves all year round are evergreens, and those that shed their leaves are deciduous. Deciduous trees and shrubs generally lose their leaves in autumn as it gets cold. Before this happens, the leaves change colour. The leaves will grow back in spring.
Exceptions
The word "tree" in English means a long lived plant having obvious main stem, and growing to a considerable height and size. Thus not all trees have all the organs or parts as mentioned above. For example, most (tree-like) palms are not branched, and tree ferns do not produce bark. There are also more exceptions.
Based on their general shape and size, all of these are nonetheless generally regarded as trees. Trees can vary a lot. A plant that is similar to a tree, but generally smaller, and may have multiple trunks, or have branches that arise near the ground, is called a "shrub", or a "bush". Since these are common English words there is no precise differentiation between shrubs and trees. Given their small size, bonsai plants would not technically be "trees": do not confuse the use of tree for a species of plant with the size or shape of individual specimens. A spruce seedling does not fit the definition of a tree, but all spruces are trees.
Classification
A sweet chestnut tree in Ticino, Switzerland
A tree is a plant form that can be found in many different orders and families of plants. Trees show many growth forms, leaf type and shape, bark traits and organs.
The tree form has evolved separately in classes of plants that are not related, in response to similar problems (for the tree). With about 100,000 types of trees, the number of tree types in the whole world might be one fourth of all living plant types.[3] Most tree species grow in tropical parts of the world, and many of these areas have not yet been surveyed by botanists (scientists who study plants), so species differences and ranges are not well understood.[4]
The earliest trees were tree ferns, horsetails and lycophytes, which grew in forests in the Carboniferous period; tree ferns still survive, but the only surviving horsetails and lycophytes are not of tree form. Later, in the Triassic Period, conifers, ginkgos, cycads and other gymnosperms appeared, and subsequently flowering plants in the Cretaceous period. Most species of trees today are flowering plants (Angiosperms) and conifers.
A small group of trees growing together is called a grove or copse, and a landscape covered by a dense growth of trees is called a forest. Several biotopes are defined largely by the trees that inhabit them; examples are rainforest and taiga (see ecozones). A landscape of trees scattered or spaced across grassland (usually grazed or burned over periodically) is called a savanna. A forest of great age is called old growth forest or ancient woodland (in the UK). A very young tree is called a sapling.
Records
Height
Scientists in the UK and Malaysia say they have discovered the world's tallest tropical tree measuring more than 100m (328ft) high.[5]
A coast redwood: 115.85 metres (380.1 feet), in Redwood National Park, California had been measured as tallest, but may no longer be standing.[6]
The tallest trees in Australia are all eucalypts, of which there are more than 700 species. The so-called 'mountain ash', with a slim, straight trunk, grows to over 300 feet.
Stoutest trees
The stoutest living single-trunk species in diameter is the African baobab: 15.9 m (52 ft), Glencoe baobab (measured near the ground), Limpopo Province, South Africa.[7] This tree split up in November 2009 and now the stoutest baobab could be Sunland Baobab (South Africa) with diameter 10.64 m and circumference of 33.4 m.
Some trees develop multiple trunks (whether from an individual tree or multiple trees) which grow together. The sacred fig is a notable example of this, forming additional 'trunks' by growing adventitious roots down from the branches, which then thicken up when the root reaches the ground to form new trunks; a single sacred fig tree can have hundreds of such trunks.
Age of trees
The life-span of trees is determined by growth rings. These can be seen if the tree is cut down or in cores taken from the edge to the center of the tree. Correct determination is only possible for trees which make growth rings, generally those which occur in seasonal climates. Trees in uniform non-seasonal tropical climates are always growing and do not have distinct growth rings. It is also only possible for trees which are solid to the center of the tree; many very old trees become hollow as the dead heartwood decays away. For some of these species, age estimates have been made on the basis of extrapolating current growth rates, but the results are usually little better than guesses or speculation. White proposed a method of estimating the age of large and veteran trees in the United Kingdom by correlation between a tree's stem diameter, growth character and age.[8]
The verified oldest measured ages are:
Great Basin bristlecone pine (Methuselah) Pinus longaeva: 4,844 years[9]
Alerce: 3,622 years[9]
Giant sequoia: 3,266 years[9]
Sugi: 3,000 years[10]
Huon-pine: 2,500 years[9]
Other species suspected of reaching exceptional age include European Yew Taxus baccata (probably over 2,000 years[11][12]) and western redcedar Thuja plicata. The oldest known European yew is the Llangernyw yew in the Churchyard of Llangernyw village in North Wales which is estimated to be between 4,000 and 5,000 years old.
The oldest reported age for an angiosperm tree is 2293 years for the Sri Maha Bodhi sacred fig (Ficus religiosa) planted in 288 BC at Anuradhapura, Sri Lanka; this is said to be the oldest human-planted tree with a known planting date.
Tree value estimation
Studies have shown that trees contribute as much as 27% of the appraised land value in certain markets.[13]
Basic tree values by diameter (these vary by region), in 1985 US$.[14]
These most likely use diameter measured at breast height (dbh), 4.5 feet (140 cm) above ground—not the larger base diameter. A general model for any year and diameter is:
Value = 17.27939 × (diameter)² × 1.022^(year − 1985)
assuming 2.2% inflation per year.[15]
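As a quick illustration, here is the model as a small R function; the diameter is presumably the dbh in inches, and the example values are illustrative only.

    # Basic tree value model: 1985 base value inflated at 2.2% per year.
    tree_value <- function(diameter, year) {
      17.27939 * diameter^2 * 1.022^(year - 1985)
    }

    tree_value(10, 1985)  # a 10-inch-dbh tree in 1985: about $1,728
    tree_value(10, 2009)  # the same tree valued in 2009 dollars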
Tree climbing
Tree climbing is an activity where one moves around in the crown of trees.[16]
A tree climber
Use of a rope, helmet, and harness are the minimum requirements to ensure the safety of the climber. Other equipment can also be used depending on the experience and skill of the tree climber. Some tree climbers take special hammocks called "Treeboats" and Portaledges with them into the tree canopies where they can enjoy a picnic or nap, or spend the night.
Tree climbing is an "on rope" activity that puts together many different tricks and gear originally derived from rock climbing and caving. These techniques are used to climb trees for many purposes, including tree care (arborists), animal rescue, recreation, sport, research, and activism.
Damage
El Grande, about 280 feet high and the most massive (though not the tallest) Eucalyptus regnans, was accidentally killed by loggers burning off the remains of legally loggable trees (less than 280 ft) that had been felled all around it
The three big sources of tree damage are biotic (from living sources), abiotic (from non-living sources) and deforestation (cutting trees down). Biotic sources would include insects which might bore into the tree, deer which might rub bark off the trunk, or fungi, which might attach themselves to the tree.[17]
Abiotic sources include lightning, vehicle impacts, and construction activities. Construction activities can involve a number of damage sources, including grade changes that prevent aeration to roots, spills involving toxic chemicals such as cement or petroleum products, or severing of branches or roots. People can also damage trees directly.
These damage sources can result in trees becoming dangerous, and the term "hazard trees" is commonly used by arborists and industry groups such as power line operators. Hazard trees are trees which, due to disease or other factors, are more susceptible to falling during windstorms or having parts of the tree fall.
The process of finding the danger a tree presents is based on a process called the quantified tree risk assessment.[18]
Trees are similar to people in this respect. Both can take a lot of some types of damage and survive, but even small amounts of certain types of trauma can result in death. Arborists are very aware that established trees will not tolerate any appreciable disturbance of the root system.[19] Even so, most people and construction professionals do not realize how easily a tree can be killed.
One reason for confusion about tree damage from construction involves the dormancy of trees during winter. Another factor is that trees may not show symptoms of damage until 24 months or longer after damage has occurred. For that reason, persons who do not know about caring for trees may not link the actual cause with the later damaged effect.
Various organizations have long recognized the importance of construction activities that impact tree health. The impacts are important because they can result in monetary losses due to tree damage and resultant remediation or replacement costs, as well as violation of government ordinances or community or subdivision restrictions.
As a result, protocols (standard ways) for tree management prior to, during and after construction activities are well established, tested and refined (changed). These basic steps are involved:
Review of the construction plans
Development of the related tree inventory
Application of standard construction tree management protocols
Assessment of potential for expected tree damages
Development of a tree protection plan (providing for pre-, concurrent, and post construction damage prevention and remediation steps)
Development of a remediation plan
Implementation of tree protection zones (TPZs)
Assessment of construction tree damage, post-construction
Implementation of the remediation plan
Trees in culture
The tree has always been a cultural symbol. Common icons are the World tree, for instance Yggdrasil,[20] and the tree of life. The tree is often used to represent nature or the environment itself. A common mistake (wrong thing) is that trees get most of their mass from the ground.[21] In fact, 99% of a tree's mass comes from the air.[21]
Wishing trees
A Wish Tree (or wishing tree) is a single tree, usually distinguished by species, position or appearance, which is used as an object of wishes and offerings. Such trees are identified as possessing a special religious or spiritual value. By tradition, believers make votive offerings in order to gain from that nature spirit, saint or goddess fulfillment of a wish.
Tree worship
Tree worship refers to the tendency of many societies in all of history to worship or otherwise mythologize trees. Trees have played a very important role in many of the world's mythologies and religions, and have been given deep and sacred meanings throughout the ages. Human beings, seeing the growth and death of trees, the elasticity of their branches, the sensitiveness and the annual (every year) decay and revival of their foliage, see them as powerful symbols of growth, decay and resurrection. The most ancient cross-cultural symbolic representation of the universe's construction is the 'world tree'.
World tree
Yggdrasil, the World Ash (Norse)
The tree, with its branches reaching up into the sky, and roots deep into the earth, can be seen to dwell in three worlds - a link between heaven, the earth, and the underworld, uniting above and below. It is also both a feminine symbol, bearing sustenance; and a masculine, phallic symbol - another union.
For this reason, many mythologies around the world have the concept of the World tree, a great tree that acts as an Axis mundi, holding up the cosmos, and providing a link between the heavens, earth and underworld. In European mythology the best known example is the tree Yggdrasil from Norse mythology.[20]
The world tree is also an important part of Mesoamerican mythologies, where it represents the four cardinal directions (north, south, east, and west). The concept of the world tree is also closely linked to the motif of the Tree of life.
In literature
In literature, a mythology was notably developed by J.R.R. Tolkien, his Two Trees of Valinor playing a central role in his 1964 Tree and Leaf. William Butler Yeats describes a "holy tree" in his poem The Two Trees (1893).
List of trees
There are many types of trees. Here is a list of some of them:
Coconut palm
Wattieza is the earliest tree in the fossil record.
1. "Mangrove Trees". Naturia.per.sg.
2. Mirov, N.T. 1967. The genus Pinus. Ronald Press.
3. "TreeBOL project". Retrieved 2008-07-11.
4. Friis, Ib, and Henrik Balslev. 2005. Plant diversity and complexity patterns: local, regional, and global dimensions. Proceedings of an international symposium held at the Royal Danish Academy of Sciences and Letters in Copenhagen, Denmark, 25–28 May 2003. Biologiske skrifter, 55. Copenhagen: Royal Danish Academy of Sciences and Letters. pp. 57–59.
5. UK scientists discover world's tallest tropical tree. BBC News Science & Environment, 2019.
6. "Sequoia sempervirens". Gymnosperm Database. Retrieved 2007-06-10.
7. "List of Champion Trees published for comment, 2005". South African Department of Water Affairs and Forestry. Retrieved 2010-01-18.
8. White J. 1990. Estimating the age of large and veteran trees in Britain. Forestry Commission, Edinburgh.
9. Gymnosperm Database: How old is that tree? Retrieved 2008-04-17.
10. Suzuki E. 1997. The dynamics of old Cryptomeria japonica forest on Yakushima Island. Tropics 6(4): 421–428.
11. Harte J. 1996. How old is that old yew? At the Edge 4: 1–9.
12. Kinmonth F. 2006. Ageing the yew - no core, no curve? International Dendrology Society Yearbook 2005: 41–46. ISSN 0307-332X.
13. "Protecting Existing Trees on Building Sites", p. 4. City of Raleigh, North Carolina, March 1989; reprinted February 2000.
14. Moll, Gary. "How Valuable Are Your Trees". American Forests Magazine, April 1985.
15. Based on 1985 to 2009 values, using the NASA inflation calculator.
16. "Benefits of Tree Climbing".
17. Wiseman, P. Eric 2008. Integrated pest management tactics. Continuing Education Unit, International Arboricultural Society 17.
18. Ellison M.J. 2005. Quantified tree risk assessment used in the management of amenity trees. Journal of Arboriculture 31(2): 57–65.
19. Schoeneweiss D.F. Prevention and treatment of construction damage. Journal of Arboriculture 8: 169.
20. Mountfort, Paul Rhys (2003). Nordic runes: understanding, casting, and interpreting the ancient Viking oracle. Inner Traditions / Bear & Company. p. 279. ISBN 978-0-89281-093-2.
21. Jonathan Drori on what we think we know. Video on TED.com.
Global Trees Campaign website
Botanic Gardens Conservation International website
Statistical methods for testing X chromosome variant associations: application to sex-specific characteristics of bipolar disorder
William A. Jons 1, Colin L. Colby 1, Susan L. McElroy 2, Mark A. Frye 3, Joanna M. Biernacka 1,3 & Stacey J. Winham 1 (ORCID: orcid.org/0000-0002-8492-9102)
Bipolar disorder (BD) affects both sexes, but important sex differences exist with respect to its symptoms and comorbidities. For example, rapid cycling (RC) is more prevalent in females, and alcohol use disorder (AUD) is more prevalent in males. We hypothesize that X chromosome variants may be associated with sex-specific characteristics of BD. Few studies have explored the role of the X chromosome in BD, which is complicated by X chromosome inactivation (XCI). This process achieves "dosage compensation" for many X chromosome genes by silencing one of the two copies in females, and most statistical methods either ignore that XCI occurs or falsely assume that one copy is inactivated at all loci. We introduce new statistical methods that do not make these assumptions.
We investigated this hypothesis in 1001 BD patients from the Genetic Association Information Network (GAIN) and 957 BD patients from the Mayo Clinic Bipolar Disorder Biobank. We examined the association of over 14,000 X chromosome single nucleotide polymorphisms (SNPs) with sex-associated BD traits using two statistical approaches that account for whether a SNP may be undergoing or escaping XCI. In the "XCI-informed approach," we fit a sex-adjusted logistic regression model assuming additive genetic effects where we coded the SNP either assuming one copy is expressed or two copies are expressed based on prior knowledge about which regions are inactivated. In the "XCI-robust approach," we fit a logistic regression model with sex, SNP, and SNP-sex interaction effects that is flexible to whether the region is inactivated or escaping XCI.
Using the "XCI-informed approach," which considers only the main effect of SNP and does not allow the SNP effect to differ by sex, no significant associations were identified for any of the phenotypes. Using the "XCI-robust approach," intergenic SNP rs5932307 was associated with BD (P = 8.3 × 10−8), with a stronger effect in females (odds ratio in males (ORM) = 1.13, odds ratio in females for a change of two allele copies (ORW2) = 3.86).
X chromosome association studies should employ methods which account for its unique biology. Future work is needed to validate the identified associations with BD, to formally assess the performance of both approaches under different true genetic architectures, and to apply these approaches to study sex differences in other conditions.
Although multiple genome-wide association studies have examined the genetic contributions to the risk of bipolar disorder (BD) [1, 2], few studies have examined the genetics of specific symptoms or comorbidities of BD. We previously identified several symptoms and comorbidities of BD that differ in prevalence by sex [3]. We found that rapid cycling (RC) and a lifetime history of a suicide attempt were more common for women than men and that men more frequently had a substance use disorder. Women are also more likely to have a comorbid eating disorder, particularly binge eating behavior (BE) [4]. The reason for these sex-specific differences in BD characteristics is unclear. However, many biological sex differences are thought to arise from either hormonal differences or from genetic differences (e.g., sex chromosomes). Brain development and function as well as psychiatric traits are influenced by sex hormone levels [5] and genetic factors [2]. For example, expression of the gene BDNF is influenced by estradiol, and the Val66Met SNP within BDNF has been shown to be associated with BD and other psychiatric traits [6]. The X chromosome contains many sex and reproductive genes influencing hormone levels, such as the androgen receptor (AR) [7]. Patients with X chromosome aneuploidies experience higher rates of various psychiatric disorders, including mood disorders [8]. Furthermore, X chromosome dosage and dosage compensation may be relevant for polygenic complex traits, such as BD [9].
Because males and females have different numbers of copies of the X chromosome, we hypothesize that X chromosome genetics might play a role in observed sex differences in BD. In particular, females carry two X chromosomes, while males carry only one, and the X chromosome in females (but not males) undergoes a process called X chromosome inactivation (XCI). This is an epigenetic process initiated by the long non-coding RNA XIST that triggers silencing of the inactive X, which results in males and females expressing similar levels of many X chromosome genes [10, 11]. The identity of the inactive X is random in humans [12], and the process is also tissue- and cell-specific [13, 14]. Furthermore, XCI does not affect all loci on the X chromosome. In fact, approximately 15% of X chromosome loci escape from XCI and are expressed from both X chromosomes in females [15], although these genes are not fully expressed from the inactive X. Escape genes include genes in the pseudoautosomal regions at the ends of the chromosome (PAR1 and PAR2), as well as gametologs (genes with homologous copies on X and Y, for which females have two copies on the X and males have one copy on X and one copy on the Y), and other genes escape variably [10]. The unique biology of the X chromosome means that applying approaches for analyzing autosomal genetic variants is not appropriate.
In this work, we develop a new approach for analyzing X chromosome genetic variants, which incorporates prior biological information on XCI status of various genes, and apply the approach to examine the role of X chromosome genetic variation in sex-specific symptoms of BD. Our approach combines existing approaches for testing marginal genetic associations within a logistic regression framework. We also consider a test that accounts for single nucleotide polymorphism (SNP)-sex interactions to allow for different effects of X chromosome variants in males and females. We compare results across methods to enable assessment of potential strengths and limitations of each approach and report on our findings regarding the association of X chromosome variants with sex-specific symptoms and comorbidities of BD.
In this study, we examined whether X chromosome variants are associated with sex-associated symptoms and comorbidities of BD. We utilized two cohorts of individuals with BD, one from the Mayo Clinic Bipolar Disorder Biobank [16] and one from the Genetic Association Information Network (GAIN) Study of BD [17], and we employed two different X chromosome-specific statistical approaches to assess associations between SNP genotypes and phenotypes. Rather than using a discovery-validation approach, a meta-analysis was conducted in order to boost sample size and reproducibility by combining the results derived from both cohorts (GAIN and Mayo).
Mayo cohort
Individuals with BD (N = 969) from the Mayo Clinic Bipolar Disorder Biobank [16] (Mayo Bipolar Biobank) that had previously undergone genome-wide genotyping on the Illumina® Human OmniExpress BeadChip (Illumina®, Inc. San Diego, CA, USA) were included in this study. Control subjects (N = 777) that did not have BD or a psychiatric illness themselves or a first-degree relative with BD were selected from the Mayo Clinic Biobank [18]. This case/control set was previously analyzed [19] and was included in a large genome-wide association study conducted by the Psychiatric Genomics Consortium [2].
Phenotyping
Symptoms and comorbidities of BD were assessed through patient and clinical questionnaires [16]. Variables analyzed in this study included the symptom of rapid cycling (RC), comorbidities of binge eating behavior (BE), lifetime history of suicide attempt, and whether the individual had an alcohol use disorder (AUD), defined in the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition (DSMIV) as a diagnosis of alcohol dependence or abuse [20]. Rapid cycling was defined as having four or more mood episodes within a year. Binge eating behavior was defined as an affirmative response to questions 5 and 6 of the Eating Disorders Diagnostic Scale [21]. These questions read "During the past 6 months have there been times when you felt you have eaten what other people would regard as an unusually large amount of food (e.g. a quart of ice cream) given the circumstances?" and "During the times when you ate an unusually large amount of food, did you experience a loss of control (feel you couldn't stop eating or control what or how much you were eating)?" [21].
Quality control (QC) and imputation of genotyping data were performed using standard procedures as previously described [22]. Genetic ancestry was estimated with STRUCTURE [23, 24] using 1000 Genomes Project reference panels and used to exclude individuals of non-European ancestry. Genome-wide principal components were calculated to allow for adjustment for population substructure. X chromosome SNPs were imputed using IMPUTE 2.2.2 [25] with the 1000 Genomes Project reference panel (phase 1 data, all populations). Analyses were limited to X chromosome SNPs that had minor allele frequency above 0.05 and imputation R2 above 0.8. SNPs in the pseudoautosomal region (PAR) defined by GrCh37 were excluded due to low genotyping call rate.
GAIN cohort
Cases with BD and controls without BD were recruited to the GAIN study and underwent phenotyping and genotyping as previously described [17] with data deposited in dbGaP [26] (accession number: phs000017.v3.p1). We used data from the subjects of European ancestry that passed genetic data QC (N = 1001 cases and N = 1034 controls).
A history of BE, RC, suicide attempt, or an AUD was assessed in cases using the Diagnostic Interview of Genetic Studies (DIGS) (versions 2–4) [27]. Binge eating behavior was defined based on having affirmative responses to questions that addressed overeating and loss of control: "Has there ever been a time in your life when you went on food binges (i.e., rapid consumption of a large amount of food in a discrete period of time, usually less than two hours)?" and "During these binges were you afraid you could not stop eating, or that your eating was out of control?". The presence of AUD was determined from the presence of any ICD 9 codes indicating DSMIII-R or DSMIV diagnoses of alcohol abuse (305.00; ICD-10 = F10.10) or alcohol dependence (303.90; ICD-10 = F10.20). Rapid cycling was defined as the presence of at least four mood episodes in a year.
Genotyping was performed using an Affymetrix™ Genome-Wide Human SNP Array 6.0 (Thermo Fisher Scientific, Inc., Waltham, MA, USA). Quality control was performed as previously described [17]. Imputation was performed as previously described [28]. SNPs analyzed were limited to those with MAF above 0.05 and imputation R2 above 0.8. SNPs in the PAR (defined by GrCh37) were excluded.
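To make the filtering step concrete, a minimal sketch in R is shown below; the data frame 'snp_info' and its column names are hypothetical, and the PAR coordinates are the standard GRCh37 boundaries.

    # Keep X chromosome SNPs with MAF > 0.05 and imputation R^2 > 0.8,
    # excluding SNPs in the pseudoautosomal regions (GRCh37 coordinates).
    par1 <- c(60001, 2699520)           # PAR1 boundaries on chrX
    par2 <- c(154931044, 155260560)     # PAR2 boundaries on chrX

    in_par <- (snp_info$pos >= par1[1] & snp_info$pos <= par1[2]) |
              (snp_info$pos >= par2[1] & snp_info$pos <= par2[2])

    snps_qc <- snp_info[snp_info$maf > 0.05 & snp_info$info_r2 > 0.8 & !in_par, ]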
Because of the unique biology of the X chromosome, testing associations between X chromosome genetic variants and phenotypes requires different approaches than for autosomes. Previous work has used a logistic regression framework but coded the SNP variable differently depending upon the approach applied (Table 1). The coding approach historically implemented in the PLINK software [29] codes female genotypes as 0, 1, or 2 copies of the alternate allele and male genotypes as 0 or 1 copies of the alternate allele. This genotype coding ignores that XCI occurs and assumes that variants on both copies of the X chromosome are expressed in females (i.e., escape from XCI); this implicitly assumes that the effect of a change of a single allele has the same effect in females and males. As this is not true when a SNP is in a region that is inactivated, an alternate approach is to treat all SNPs as subject to XCI, using an approach originally proposed by Clayton [30]. Male genotypes are coded as 0 or 2 copies of the alternate allele, assuming that these male genotypes have the same effect as the respective homozygotes in females. Assuming that XCI is random across cells within a woman and random across women, female heterozygotes are viewed as an intermediate genotype, coded as 1. However, this also may not be optimal as 15% of X chromosome genes are expressed from both the active and inactive X chromosome. Given prior information regarding whether a region undergoes X chromosome inactivation, it is reasonable to consider this biological information when evaluating X chromosome associations.
Table 1 Different coding schemes for the SNP variable reflect different assumptions regarding XCI status
In this study, we employed two X chromosome-specific approaches that allow for modeling SNP effects depending on XCI status (inactivation vs. escape). In the first approach, we used biological data on which regions are likely to experience XCI to model SNP effects differently for regions subject to and escaping from XCI; this approach assumes that under a given coding scheme, the SNP effect is the same in males and females. Specifically, in regions believed to undergo XCI, we used the Clayton coding of male genotypes (0/2) and test the SNP effect while assuming that the minor allele in males has the same effect as two copies of the minor allele in females (ORM = ORW2). On the other hand, in regions believed to escape XCI, we used the PLINK coding of male genotypes (0/1) and test the SNP effect while assuming that the minor allele in males has the same effect as one copy of the minor allele in females (ORM = ORW1; Table 1). In the second approach, we fit a more flexible regression model that can model SNPs that are either subject to or escaping from XCI, without the need for prior biological knowledge of the XCI status. This approach also allows for SNP effects to differ in males and females. These approaches are compared in the context of investigating the genetics of BD-related traits.
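The two coding schemes in Table 1 can be written compactly; the sketch below (with illustrative function and variable names) codes male genotypes as 0/1 under the PLINK scheme and 0/2 under the Clayton scheme.

    # geno: minor allele count (females 0/1/2, males 0/1); sex: 1 = male, 0 = female.
    code_plink <- function(geno, sex) {
      geno                                # males 0/1: assumes escape (ORM = ORW1)
    }
    code_clayton <- function(geno, sex) {
      ifelse(sex == 1, 2 * geno, geno)    # males 0/2: assumes XCI (ORM = ORW2)
    }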
Approach 1: XCI-informed approach
Deriving a presumed XCI status for each X chromosome SNP
Previous work by Balaton et al. [31] derived a "consensus" inactivation status across multiple studies and multiple tissue types for approximately 400 genes on the X chromosome. To infer XCI status at a SNP level, we used the presumed XCI status for each gene (as given in "Additional file 1: Table S1." from Balaton et al. [31]). Start and stop positions of all genes are per the transcription start and stop sites. Any SNPs overlapped by only "subject" genes (category: Subject) or only "escape" genes (categories: PAR and escape) were assigned the corresponding XCI status ("subject" or "escape"); SNPs lying between genes of the same type were also assigned the corresponding XCI status. SNPs between "subject" and "escape" genes or overlapping both "subject" and "escape" genes were assigned an XCI status of "unknown."
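The assignment rule can be sketched as a simple interval lookup in R. The coordinates and statuses below are hypothetical placeholders (in the study they would come from Balaton et al. [31]), and the sketch simplifies the handling of SNPs between genes, which the study resolves by comparing the statuses of the flanking genes:

```r
# Gene-level consensus XCI calls (illustrative coordinates and statuses)
genes <- data.frame(start  = c(100000, 500000),
                    stop   = c(200000, 600000),
                    status = c("subject", "escape"),
                    stringsAsFactors = FALSE)

# Assign a SNP's XCI status from the genes overlapping its position;
# conflicting or absent overlaps fall through to "unknown" here.
assign_xci <- function(pos, genes) {
  hit <- genes$status[pos >= genes$start & pos <= genes$stop]
  if (length(unique(hit)) == 1) unique(hit) else "unknown"
}

sapply(c(150000, 550000, 300000), assign_xci, genes = genes)
# "subject" "escape"  "unknown"
```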
Using an XCI status informed approach for testing associations between X chromosome SNPs and phenotype
To test association with each phenotype, a sex-adjusted logistic regression model (Eq. 1) was used:
$$ \mathrm{logit}\left(\mathrm{pheno}\right) = \beta_0 + \beta_1\left(\mathrm{SNP}\right) + \beta_2\left(\mathrm{sex}\right) $$
Sex was coded as 0 for females and 1 for males. Irrespective of presumed XCI status, the SNP variable in females was set equal to the number of copies of the minor allele. In males, however, the coding of the SNP variable depended on the presumed XCI status and hence on the coding scheme chosen (Clayton or PLINK; Table 1). SNPs of unknown XCI status were modeled under both coding schemes, and the Akaike information criterion (AIC) was used to determine which XCI status led to the better-fitting model in each cohort (lower AIC indicates better model fit).
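A minimal sketch of this model comparison, using the "glm" function the authors report using later in the Methods (the data frame and variable names are hypothetical):

```r
# d: hypothetical data frame with pheno (0/1), sex (0 = female, 1 = male),
# and geno (raw minor-allele counts: females 0/1/2, males 0/1).
d$geno_clayton <- ifelse(d$sex == 1, 2 * d$geno, d$geno)

fit_plink   <- glm(pheno ~ geno + sex,         family = binomial, data = d)  # escape coding
fit_clayton <- glm(pheno ~ geno_clayton + sex, family = binomial, data = d)  # inactivation coding

c(plink = AIC(fit_plink), clayton = AIC(fit_clayton))  # lower AIC = better fit
```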
When XCI status at a SNP was unknown and the cohorts gave discordant presumed XCI statuses, the coding used for generating the cohort-specific summary statistics for the meta-analysis was the Clayton coding, since most of the X chromosome is subject to XCI.
Approach 2: XCI-robust approach
In this second approach, a logistic regression model with a SNP-sex interaction term (Eq. 2) was employed, where the SNP variable was the count of copies of the minor allele and sex was coded as 1 for males and 0 for females; a likelihood ratio test with two degrees-of-freedom (df) was used to jointly assess the significance of the SNP and SNP-sex interaction terms.
$$ \mathrm{logit}\left(\mathrm{pheno}\right) = \beta_0 + \beta_1\left(\mathrm{SNP}\right) + \beta_2\left(\mathrm{sex}\right) + \beta_3\left(\mathrm{SNP}\right)\left(\mathrm{sex}\right) $$
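A sketch of this 2df test in R (hypothetical data frame d as above, with SNP the minor-allele count):

```r
fit_null <- glm(pheno ~ sex,       family = binomial, data = d)
fit_full <- glm(pheno ~ SNP * sex, family = binomial, data = d)  # SNP + sex + SNP:sex
anova(fit_null, fit_full, test = "LRT")  # 2-df joint test of SNP and SNP:sex
```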
To facilitate the interpretation of the top SNP effects in males and females, sex-stratified logistic regression analyses were conducted in Mayo and GAIN.
For all analyses, a chromosome-wide Bonferroni-corrected significance threshold was set by dividing 0.05 by the number of SNPs passing QC in the GAIN set prior to imputation (P = 0.05/26,662 = 1.88 × 10⁻⁶). Regression analyses were performed in R using the "glm" function. Analyses incorporated additional covariates for genetic ancestry as assessed by principal components, DIGS questionnaire version (GAIN cohort only), and enrollment site (Mayo Clinic cohort only) when necessary. For the XCI-informed approach, meta-analysis of results from the Mayo and GAIN cohorts was conducted in METAL, weighting observations from each study inversely proportional to their standard errors [32]. For the XCI-robust approach, the P values from the 2df test in the Mayo and GAIN cohorts were combined by Fisher's method, implemented in R, to derive a joint P value [29]. Meta-analyses of sex-stratified results from the Mayo and GAIN cohorts were performed using inverse-variance weighting in METAL [32] to estimate SNP effects in men and women separately for each phenotype.
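For reference, Fisher's method for two cohorts reduces to a chi-squared tail probability on 4 df; a one-line R sketch (p_mayo and p_gain are placeholders for the cohort-specific 2df P values):

```r
fisher_p <- function(p) pchisq(-2 * sum(log(p)), df = 2 * length(p), lower.tail = FALSE)
fisher_p(c(p_mayo, p_gain))  # joint P value across the two cohorts
```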
Candidate SNP study
Previously, Jancic et al. [33] analyzed the association of X chromosome SNPs with risk of suicide attempt in individuals with BD (983 suicide attempters, 1143 non-attempters), a sample that included the individuals from the GAIN cohort analyzed here. We attempted to replicate the top ten SNPs from that paper in the independent Mayo sample. The original work used the PLINK coding and sex-adjusted logistic regression to identify top SNPs. We applied the two X chromosome-specific approaches described here to the Mayo data. As all ten SNPs reported by Jancic et al. lie in a region subject to XCI, the "XCI-informed approach" used Clayton coding for all of them.
Annotation of lead SNPs
All lead SNPs reported in this paper were annotated to the nearest gene (not counting pseudogenes or lncRNAs) using BioR [34] and GRCh37.p5, or by visual inspection in the University of California Santa Cruz (UCSC) Genome Browser. The GTEx database [35] was used to determine whether any of the top SNPs are expression quantitative trait loci (eQTLs) or splice quantitative trait loci (sQTLs) in any tissue (FDR < 0.05).
All characteristics of BD examined (RC, suicide attempt, BE, and AUD) were relatively common in both the Mayo Clinic and GAIN datasets (Table 2). In both cohorts, women were more likely than men to engage in BE or to have attempted suicide, and men were at greater risk of having an AUD. Additionally, RC was significantly more common among female cases in the Mayo cohort (P = 0.004), although this was not true for GAIN (P = 0.580).
Table 2 Characteristics of bipolar disorder cases
X chromosome-wide results for all phenotypes under both the "XCI-informed approach" and the "XCI-robust approach" are displayed in Fig. 1 and Additional files 1, 2, 3, and 4. Using the "XCI-informed approach," which examines marginal SNP effects, no SNPs were significantly associated with BD or any of its sex-specific symptoms and comorbidities (Additional file 5: Table S1). However, using the "XCI-robust approach," which considers SNP-sex interactions, the SNP rs5932307 was significantly associated with BD (P = 8.31 × 10⁻⁸; Table 3). The minor A allele was associated with higher odds of BD, with a stronger effect in females (ORW2 = 3.86 vs. ORM = 1.13). This SNP is downstream of the ACTRT1 gene, which is most highly expressed in the testis [35] and encodes a beta-actin-like protein suggested to be important for spermatid formation [36]. It has not been identified as an eQTL or sQTL in any tissue. However, we note that this SNP marginally deviates from Hardy-Weinberg equilibrium in female controls in the GAIN sample (P = 1.2 × 10⁻⁴), but not in the Mayo sample (P > 0.05).
Fig. 1 Association of X chromosome variants with BD. Top row denotes results from the XCI-informed approach. Bottom row denotes results from the XCI-robust approach. Green line denotes the study-wide significance threshold of 3.36 × 10⁻⁶. Domains shown in the colored bars beneath the Manhattan plots for the XCI-informed approach denote whether SNPs fall into regions subject to (red) or escaping from (blue) X chromosome inactivation. Grey denotes regions for which a domain (subject or escaping) could not be assigned based on Balaton et al. [31]. SNPs are colored by the chosen XCI status used in the meta-analysis.
Table 3 Top SNPs under "XCI-robust" approach
Top SNPs for suicide attempt and AUD under the "XCI-robust approach," although not significant after Bonferroni correction, were single-tissue eQTLs (Table 3). The SNP most strongly associated with suicide attempt was rs5975146, an eQTL of the gene X-prolyl aminopeptidase 2 (XPNPEP2) in both tibial nerve and adipose tissue. The meta-analysis of results from Mayo and GAIN under the "XCI-robust approach," which allows SNP effects to differ by sex, suggests that the minor A allele of rs5975146 may be associated with greater risk of suicide attempt, but only among females (ORW1 = 1.40, P2df = 1.5 × 10⁻⁵). Additionally, the SNP most associated with AUD (rs145649722) was an eQTL of CLCN5 in the skin. The results from the meta-analysis suggest that the minor G allele of rs145649722 may be associated with greater odds of AUD, primarily in males (ORM = 3.20, ORW2 = 0.55, P2df = 4.1 × 10⁻⁴).
We analyzed ten SNPs most strongly associated with suicide attempt in prior work [33] in the Mayo Clinic cohort. None of these SNPs was even nominally associated (P < 0.05) with the risk of suicide attempt in the independent Mayo Clinic sample (Additional file 6: Table S2). When the GAIN data was analyzed using the XCI-informed and XCI-robust methods, only two SNPs were nominally associated (rs5909133, Pinformed = 0.0037, Probust = 0.014; rs695214, Pinformed = 0.00052, Probust = 0.0013); this cannot be considered an independent replication, as the prior study included the GAIN data.
In this study, we examined the association of X chromosome SNPs with sex-associated characteristics of BD using two different X chromosome-specific analysis approaches. These approaches consider the sex-specific nature of the X chromosome and the process of XCI and allow for a more flexible interpretation of the findings.
The sex associations of the BD characteristics are as expected based on prior work, including higher rates of RC, lifetime history of suicide attempt, and greater prevalence of BE in women, as well as greater prevalence of AUDs in men.
The SNP rs5932307 was significantly associated with BD under the "XCI-robust approach" (P = 8.3 × 10⁻⁸), even with a conservative, Bonferroni-corrected significance threshold of P = 1.88 × 10⁻⁶. This contrasts with results from a recent GWAS that employed a two-stage methodology with independent discovery (7467 cases/27,303 controls) and replication samples (2313 cases/3489 controls); in that study, despite the larger sample size of the discovery cohort, no X chromosome SNPs passed the threshold (P = 1 × 10⁻⁶) to advance to testing in the replication sample [1]. However, this may be because different approaches to association testing were employed. In the previous study, the association test used the Clayton coding, which assumes that the minor allele in males has the same effect as two copies of the minor allele in females. In contrast, the approach that yielded the significant result in our analysis was the "XCI-robust approach," which allowed the effect of the SNP to differ by sex. The potential importance of allowing SNP effects to differ by sex is highlighted by the fact that, for this SNP, sex-stratified analyses suggest the minor allele is more strongly associated with BD in females (ORW2 = 3.86, 95% CI 2.19–6.78) than in males (ORM = 1.13, 95% CI 0.82–1.56). However, this result should be interpreted cautiously given that the SNP showed some deviation from Hardy-Weinberg equilibrium in one of the analyzed datasets.
Although not significant after multiple testing correction, the SNP most strongly associated with suicide attempt (rs5975146) was an eQTL of the X-prolyl aminopeptidase 2 gene (XPNPEP2) in both tibial nerve and adipose tissue, and the SNP most associated with AUD (rs145649722) was an eQTL of CLCN5 in the skin. CLCN5 encodes the chloride channel protein Clc-5, and one study found CLCN5 to be differentially methylated in brain tissue between obsessive-compulsive disorder subjects and controls [34].
Candidate SNPs most significantly associated with risk of suicide attempt in a prior study in a BD population of which the GAIN data was a subset [33] were not significantly associated with suicide attempt within our Mayo cohort, regardless of coding or approach, with most OR estimates close to one. This may have been due to differences in methodology, as most of these SNPs were also not associated in our analysis of the GAIN data, with the exception of rs695214.
Importantly, correct interpretation of X chromosome association results depends on the statistical model that was fit and genotype coding that was used, which reflect assumptions that were made. When interpreting effect size for X chromosome SNPs, multiple ORs are informative. Whereas for autosomes ORs are commonly reported for the change of one allele copy (assuming an additive model for allele effects), it is less clear what is most appropriate to report for X chromosome variants, because the effect of the SNP varies with sex. Under the "XCI-informed approach," for SNPs lying in regions that escape from XCI, the assumption is that the OR in males (ORM) is the same as females for a change of one allele copy (ORW1). However, for SNPs lying in regions experiencing XCI, the effect of a change of one allele copy in males (ORM) is expected to be comparable to a change of two copies in females (ORW2). These assumptions are implicit in the "XCI-informed approach," which assumes a log-additive effect of SNP in females.
While the "XCI-robust approach" that includes SNP-sex interactions also assumes that the effects of SNPs are log-additive in females, it is more flexible because the effect of a SNP can vary by sex. ORM is not constrained to equal the effect of the SNP in females (ORW1 or ORW2), which even allows for a SNP to exhibit a protective effect in one sex and to be a risk factor for the other sex. It is worth noting that the "XCI-informed approach" and the "XCI-robust approach" are designed to detect different genetic effects on the phenotype. The "XCI-informed approach" examines the main effect of the SNP variable on the phenotype, whereas the "XCI-robust approach" with the 2df test reflects the joint importance of the SNP and SNP-sex interaction terms, and hence is sensitive not only to the main effects but also to differences in the SNP effect between sexes.
The importance of allowing for this flexibility in the model can be seen by looking at top SNPs for each phenotype under the more restrictive "XCI-informed approach." All of these SNPs are in a region subject to XCI, which would lead one to predict those SNPs have the same effect for one allele in males as two copies in females (i.e., ORM = ORW2). However, examining the sex-stratified ORs for those SNPs (Additional file 5: Table S1) shows that many of those SNPs potentially have SNP effects that do not follow the expected theoretical pattern. For example, the top SNP for AUD under the "XCI-informed approach," rs62587381, has an estimated OR in males that is much greater than in females (ORM = 4.32 versus ORW2 = 1.85).
One might be concerned that the increased flexibility of the model comes at the expense of reduced power to detect genetic differences. However, this does not appear to be a major concern, at least in our study. For three of the five top SNPs per phenotype under the "XCI-informed approach," the "XCI-robust approach" gave a P value within an order of magnitude. Additionally, only the "XCI-robust approach" resulted in a significant finding for any of the phenotypes studied. However, a disadvantage of the "XCI-robust approach" when used across datasets that are subsequently meta-analyzed is that it relies on a two degrees-of-freedom likelihood ratio test statistic that does not retain the directionality of the SNP effect, which can lead to difficulties in interpreting meta-analysis results.
Selection of prior gene-level XCI states is necessary for the "XCI-informed approach." We used the XCI consensus states described in Balaton et al. [31], because they were assessed across multiple studies and multiple tissue types and could be considered generally applicable, and it is unclear which tissue type might best inform BD risk. Because XCI patterns are known to be tissue-specific, a tissue-specific XCI source could be used for conditions with clearly defined normal tissue types, if it exists [13]. Failing to properly account for tissue-specific patterns could possibly lead to a reduction in power for the "XCI-informed approach" if the wrong XCI state is modeled. An advantage of the "XCI-robust approach" is that it does not rely on specification of the tissue-specific XCI pattern. Furthermore, the "XCI-robust approach" can also accommodate the phenomenon of partial or incomplete escape from XCI, which is not accounted for in the "XCI-informed approach."
Neither the "XCI-informed" or the "XCI-robust" approaches directly account for genes that are homologous across the X and Y chromosome (gametologs), as they do not incorporate Y chromosome data from males. The "XCI-informed" approach treats SNPs within these genes as escaping from XCI, whereas the "XCI-robust" approach does not make any assumptions about XCI status. This suggests that development of methods that incorporate X and Y data for studying these regions would be valuable.
Strengths of our work include the investigation of the contribution of X chromosome genetic variants to multiple symptoms and comorbidities of BD with known sex differences in prevalence, and the use of two methods of analysis that can model the effect of SNPs both subject to and escaping from XCI. Importantly, we developed a new approach for analyzing X chromosome genetic variants that incorporates prior biological information on XCI status. However, our study also has limitations. The biological relevance of our observed associations is unclear, and the laboratory validation required to establish biological mechanisms is beyond the scope of this work, as is a comparison of genetic versus hormonal influences on sex differences in BD. The relatively small sample size limited statistical power and makes interpretation of the significance of our findings difficult. Additionally, our cohorts were composed solely of individuals of European ancestry. Future work in larger or more ethnically diverse cohorts, such as those of the Psychiatric Genomics Consortium, might allow discovery of new X chromosome genetic variants important to BD risk and yield findings with greater generalizability.
This work provides a basis for future methodological studies. Future work should extend both approaches to incorporate data from the Y chromosome in males for the XY gametolog genes. The relative merits of the two approaches should be more rigorously assessed by simulation studies assessing type I error and statistical power, as well as comparison to other existing approaches [37]. Alternate approaches could be explored, such as prioritizing SNPs in sex-biased genes or using Bayesian methods or model averaging [38], which could reflect the uncertainty that exists about a locus' XCI status. In addition, statistical approaches to determine the likely genetic architecture by which genotypes alter phenotypes (e.g., additivity vs. dominance of allelic effects) could also be pursued; additionally, information about the genetic architecture may also imply XCI status. Finally, the versatility and relative ease of implementation of our approach should encourage its broad application, particularly in conditions where X chromosome involvement is suggested, but few if any specific genes have been identified.
Perspectives and significance
In conclusion, we employed two different approaches to the analysis of X chromosome genetic variants that are able to model SNPs both subject to and escaping from XCI. In the "XCI-informed approach," we used biological information regarding what regions of the X chromosome undergo XCI to code the SNP variable differently for regions believed to undergo versus escape from inactivation. In the "XCI-robust approach," a more flexible model with a SNP-sex interaction term was fit that allowed for SNPs both in regions of inactivation and escape, without the need for prior knowledge as to the true XCI status. We also describe how the SNP effect sizes can be interpreted for each sex based on the model that was fit.
Neither approach identified SNPs that were significantly associated with sex-specific symptoms of BD, although the interaction approach identified a SNP (rs5932307) associated with risk of BD (P = 8.31 × 10⁻⁸). Future work in larger, independent cohorts is needed to replicate this finding, but our work highlights the importance of applying X chromosome-specific methods, and of careful interpretation of the results, when analyzing phenotypes with known sex differences.
The datasets generated and/or analyzed for the GAIN cohort during the current study were collected in previous work [17] and are available in the dbGaP repository [26] (accession #: phs000017.v3.p1). Datasets generated and/or analyzed for the Mayo cohort contain protected health information and will not be shared, to protect patient privacy.
AIC: Akaike information criterion
AUD: Alcohol use disorder
BE: Binge eating behavior
df: Degrees of freedom
DIGS: Diagnostic Interview for Genetic Studies
eQTL: Expression quantitative trait locus
GAIN: Genetic Association Information Network
MAF: Minor allele frequency
ORM: Odds ratio in males for a change of 1 allele copy
ORW1: Odds ratio in females for a change of 1 allele copy
ORW2: Odds ratio in females for a change of 2 allele copies
PAR: Pseudoautosomal region
QC: Quality control
QTL: Quantitative trait locus
RC: Rapid cycling
SNP: Single nucleotide polymorphism
Hou L, Bergen SE, Akula N, et al. Genome-wide association study of 40,000 individuals identifies two novel loci associated with bipolar disorder. Hum Mol Genet. 2016;25(15):3383–94.
Stahl EA, Breen G, Forstner AJ, et al. Genome-wide association study identifies 30 loci associated with bipolar disorder. Nat Genet. 2019;51(5):793–803.
Erol A, Winham SJ, McElroy SL, et al. Sex differences in the risk of rapid cycling and other indicators of adverse illness course in patients with bipolar I and II disorder. Bipolar Disord. 2015;17(6):670–6.
McElroy SL, Crow S, Blom TJ, et al. Clinical features of bipolar spectrum with binge eating behaviour. J Affect Disord. 2016;201:95–8.
Marrocco J, McEwen BS. Sex in the brain: hormones and sex differences. Dialogues Clin Neurosci. 2016;18(4):373–83.
Munkholm K, Vinberg M, Kessing LV. Peripheral blood brain-derived neurotrophic factor in bipolar disorder: a comprehensive systematic review and meta-analysis. Mol Psychiatry. 2016;21(2):216–28.
Saifi GM, Chandra HS. An apparent excess of sex- and reproduction-related genes on the human X chromosome. Proc Biol Sci. 1999;266(1415):203–9.
Green T, Flash S, Reiss AL. Sex differences in psychiatric disorders: what we can learn from sex chromosome aneuploidies. Neuropsychopharmacology. 2019;44(1):9–21.
Sidorenko J, Kassam I, Kemper KE, et al. The effect of X-linked dosage compensation on complex trait variation. Nat Commun. 2019;10(1):3009.
Ross MT, Grafham DV, Coffey AJ, et al. The DNA sequence of the human X chromosome. Nature. 2005;434(7031):325–37.
Brown CJ, Ballabio A, Rupert JL, et al. A gene from the region of the human X inactivation centre is expressed exclusively from the inactive X chromosome. Nature. 1991;349(6304):38–44.
Heard E, Disteche CM. Dosage compensation in mammals: fine-tuning the expression of the X chromosome. Genes Dev. 2006;20(14):1848–67.
Tukiainen T, Villani AC, Yen A, et al. Landscape of X chromosome inactivation across human tissues. Nature. 2017;550(7675):244–8.
Cotton AM, Price EM, Jones MJ, et al. Landscape of DNA methylation on the X chromosome reflects CpG density, functional chromatin state and X-chromosome inactivation. Hum Mol Genet. 2015;24(6):1528–39.
Carrel L, Willard HF. Heterogeneous gene expression from the inactive X chromosome: an X-linked gene that escapes X inactivation in some human cell lines but is inactivated in others. Proc Natl Acad Sci U S A. 1999;96(13):7364–9.
Frye MA, McElroy SL, Fuentes M, et al. Development of a bipolar disorder biobank: differential phenotyping for subsequent biomarker analyses. Int J Bipolar Disord. 2015;3(1):30.
Smith EN, Bloss CS, Badner JA, et al. Genome-wide association study of bipolar disorder in European American and African American individuals. Mol Psychiatry. 2009;14(8):755–63.
Olson JE, Ryu E, Johnson KJ, et al. The Mayo Clinic Biobank: a building block for individualized medicine. Mayo Clin Proc. 2013;88(9):952–62.
Cuellar-Barboza AB, Winham SJ, McElroy SL, et al. Accumulating evidence for a role of TCF7L2 variants in bipolar disorder with elevated body mass index. Bipolar Disord. 2016;18(2):124–35.
American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-IV. 4th ed. Frances A, Widiger T, editors. Washington, DC: American Psychiatric Association; 1997. 886 p.
Stice E, Telch CF, Rizvi SL. Development and validation of the Eating Disorder Diagnostic Scale: a brief self-report measure of anorexia, bulimia, and binge-eating disorder. Psychol Assess. 2000;12(2):123–31.
McElroy SL, Winham SJ, Cuellar-Barboza AB, et al. Bipolar disorder with binge eating behavior: a genome-wide association study implicates PRR5-ARHGAP8. Transl Psychiatry. 2018;8(1):40.
Pritchard JK, Stephens M, Donnelly P. Inference of population structure using multilocus genotype data. Genetics. 2000;155(2):945–59.
Porras-Hurtado L, Ruiz Y, Santos C, et al. An overview of STRUCTURE: applications, parameter settings, and supporting software. Front Genet. 2013;4:98.
Howie B, Fuchsberger C, Stephens M, et al. Fast and accurate genotype imputation in genome-wide association studies through pre-phasing. Nat Genet. 2012;44(8):955–9.
Mailman MD, Feolo M, Jin Y, et al. The NCBI dbGaP database of genotypes and phenotypes. Nat Genet. 2007;39(10):1181–6.
Nurnberger JI Jr, Blehar MC, Kaufmann CA, et al. Diagnostic interview for genetic studies. Rationale, unique features, and training. NIMH Genetics Initiative. Arch Gen Psychiatry. 1994;51(11):849–59 discussion 63-4.
Winham SJ, Cuellar-Barboza AB, Oliveros A, et al. Genome-wide association study of bipolar disorder accounting for effect of body mass index identifies a new risk allele in TCF7L2. Mol Psychiatry. 2014;19(9):1010–6.
Purcell S, Neale B, Todd-Brown K, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75.
Clayton D. Testing for association on the X chromosome. Biostatistics. 2008;9(4):593–600.
Balaton BP, Cotton AM, Brown CJ. Derivation of consensus inactivation status for X-linked genes from genome-wide studies. Biol Sex Differ. 2015;6:35.
Willer CJ, Li Y, Abecasis GR. METAL: fast and efficient meta-analysis of genomewide association scans. Bioinformatics. 2010;26(17):2190–1.
Jancic D, Seifuddin F, Zandi PP, et al. Association study of X chromosome SNPs in attempted suicide. Psychiatry Res. 2012;200(2-3):1044–6.
Yue W, Cheng W, Liu Z, et al. Genome-wide DNA methylation analysis in obsessive-compulsive disorder patients. Sci Rep. 2016;6:31333.
GTEx Consortium. The Genotype-Tissue Expression (GTEx) pilot analysis: multitissue gene regulation in humans. Science. 2015;348(6235):648–60.
O'Leary NA, Wright MW, Brister JR, et al. Reference sequence (RefSeq) database at NCBI: current status, taxonomic expansion, and functional annotation. Nucleic Acids Res. 2016;44(D1):D733–45.
Wang P, Xu S-Q, Wang B-Q, et al. A robust and powerful test for case–control genetic association study on X chromosome. Stat Methods Med Res. 2019;28(10-11):3260–72.
Chen B, Craiu R, Sun L. Bayesian model averaging for the X-chromosome inactivation dilemma in genetic association study. Biostatistics. 2018. https://www.ncbi.nlm.nih.gov/pubmed/30247537.
Funding support for the Whole Genome Association Study of Bipolar Disorder was provided by the National Institute of Mental Health (NIMH), and the genotyping of samples was provided through the Genetic Association Information Network (GAIN). The datasets used for the analyses described in this manuscript were obtained from the database of Genotypes and Phenotypes (dbGaP) found at http://www.ncbi.nlm.nih.gov/gap through dbGaP accession number phs000017.v3.p1. Samples and associated phenotype data for the Collaborative Genomic Study of Bipolar Disorder were provided by the NIMH Genetics Initiative for Bipolar Disorder. Data and biomaterials were collected in four projects that participated in the NIMH Bipolar Disorder Genetics Initiative. From 1991 to 1998, the Principal Investigators and Co-Investigators were Indiana University, Indianapolis, IN, U01 MH46282, John Nurnberger, M.D., Ph.D., Marvin Miller, M.D., and Elizabeth Bowman, M.D.; Washington University, St. Louis, MO, U01 MH46280, Theodore Reich, M.D., Allison Goate, Ph.D., and John Rice, Ph.D.; Johns Hopkins University, Baltimore, MD U01 MH46274, J. Raymond DePaulo, Jr., M.D., Sylvia Simpson, M.D., MPH, and Colin Stine, Ph.D.; NIMH Intramural Research Program, Clinical Neurogenetics Branch, Bethesda, MD, Elliot Gershon, M.D., Diane Kazuba, B.A., and Elizabeth Maxwell, M.S.W. Data and biomaterials were collected as part of ten projects that participated in the NIMH Bipolar Disorder Genetics Initiative. From 1999-03, the Principal Investigators and Co-Investigators were: Indiana University, Indianapolis, IN, R01 MH59545, John Nurnberger, M.D., Ph.D., Marvin J. Miller, M.D., Elizabeth S. Bowman, M.D., N. Leela Rau, M.D., P. Ryan Moe, M.D., Nalini Samavedy, M.D., Rif El-Mallakh, M.D. (at University of Louisville), Husseini Manji, M.D. (at Wayne State University), Debra A. Glitz, M.D. (at Wayne State University), Eric T. Meyer, M.S., Carrie Smiley, R.N., Tatiana Foroud, Ph.D., Leah Flury, M.S., Danielle M. Dick, Ph.D., Howard Edenberg, Ph.D.; Washington University, St. Louis, MO, R01 MH059534, John Rice, Ph.D, Theodore Reich, M.D., Allison Goate, Ph.D., Laura Bierut, M.D.; Johns Hopkins University, Baltimore, MD, R01 MH59533, Melvin McInnis M.D., J. Raymond DePaulo, Jr., M.D., Dean F. MacKinnon, M.D., Francis M. Mondimore, M.D., James B. Potash, M.D., Peter P. Zandi, Ph.D, Dimitrios Avramopoulos, and Jennifer Payne; University of Pennsylvania, PA, R01 MH59553, Wade Berrettini M.D., Ph.D.; University of California at Irvine, CA, R01 MH60068, William Byerley M.D., and Mark Vawter M.D.; University of Iowa, IA, R01 MH059548, William Coryell M.D., and Raymond Crowe M.D.; University of Chicago, IL, R01 MH59535, Elliot Gershon, M.D., Judith Badner Ph.D., Francis McMahon M.D., Chunyu Liu Ph.D., Alan Sanders M.D., Maria Caserta, Steven Dinwiddie M.D., Tu Nguyen, Donna Harakal; University of California at San Diego, CA, R01 MH59567, John Kelsoe, M.D., Rebecca McKinney, B.A.; Rush University, IL, R01 MH059556, William Scheftner M.D., Howard M. Kravitz, D.O., M.P.H., Diana Marta, B.S., Annette Vaughn-Brown, MSN, RN, and Laurie Bederow, MA; NIMH Intramural Research Program, Bethesda, MD, 1Z01MH002810-01, Francis J. McMahon, M.D., Layla Kassem, PsyD, Sevilla Detera-Wadleigh, Ph.D, Lisa Austin, Ph.D, Dennis L. Murphy, M.D.
The Genotype-Tissue Expression (GTEx) Project was supported by the Common Fund of the Office of the Director of the National Institutes of Health, and by NCI, NHGRI, NHLBI, NIDA, NIMH, and NINDS. The data used for the analyses described in this manuscript were obtained from the GTEx Portal on 5/31/2019.
This work was funded by the Marriott Foundation and Mayo Clinic Center for Individualized Medicine. WJ is supported by grant R25 GM075148 from the National Institutes of Health.
Department of Health Sciences Research, Mayo Clinic, Rochester, MN, 55905, USA
William A. Jons, Colin L. Colby, Joanna M. Biernacka & Stacey J. Winham

Lindner Center of HOPE, University of Cincinnati College of Medicine, Mason, OH, 45040, USA
Susan L. McElroy

Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN, 55905, USA
Mark A. Frye & Joanna M. Biernacka
WAJ contributed to the design of the study, conducted the analyses, interpreted the results, and wrote the manuscript. CLC manages the data collection system database, provided assistance with data preparation, and conducted imputation and QC for datasets described. SJW contributed to the conception and design of the study, contributed to the conduct of the analysis, interpreted the results, contributed to the writing of the manuscript, and supervised this work. JMB contributed to the conception and design of the study, contributed to the conduct of the analysis, interpreted the results, contributed to the writing of the manuscript, supervised this work, and also served as co-PI for Mayo Clinic Individualized Medicine Biobank for Bipolar Disorder. MAF provided oversight for Mayo Clinic Individualized Medicine Biobank for Bipolar Disorder and assisted with patient recruitment and phenotyping. SLM is the principal investigator at the Lindner Center of HOPE/University of Cincinnati and participated in and supervised the patient recruitment and phenotyping. All authors reviewed, revised, and approved the final manuscript.
Correspondence to Stacey J. Winham.
The research in the Mayo cohort was approved under the title "Mayo Clinic Individualized Medicine Biobank for Bipolar Disorder" by the Mayo Clinic Institutional Review Board (IRB#: 08-008794).
SLMc has received research grants from Alkermes, AstraZeneca, Cephalon, Eli Lilly & Co., Forest, Marriott Foundation, Orexigen Therapeutics, Inc., Naurex, Pfizer, Shire, Takeda Pharmaceutical Company Ltd., and Transcept Pharmaceutical, Inc.; has been a consultant to or member of the scientific advisory boards of Alkermes, Bracket, Corcept, F. Hoffman La Roche, MedAvante, Naurex, Novo Nordisk, Shire, and Teva; and is also an inventor on US patent no. 6,323,236 B2, Use of Sulfamate Derivatives for Treating Impulse Control Disorders, and, along with the patent's assignee, University of Cincinnati, Cincinnati, OH, USA, has received payments from Johnson & Johnson, which has exclusive rights under the patent.
MAF has received grant support from Pfizer and Myriad and has served as an unpaid consultant for Allergan, Myriad, Sunovion, and Teva Pharmaceuticals.
WAJ, CLC, JMB, and SJW declare that they have no competing interests.
Additional file 1: Figure S1. Association of X chromosome genetic variants with RC. Top row denotes results from the XCI-informed approach. Bottom row denotes results from the XCI-robust approach. Green line denotes the study-wide significance threshold of 3.36 × 10⁻⁶. Domains shown in the colored bars beneath the Manhattan plots for the XCI-informed approach denote whether SNPs fall into regions subject to (red) or escaping from (blue) X chromosome inactivation. Grey denotes regions for which a domain (subject or escaping) could not be assigned based on Balaton et al. [31]. SNPs are colored by the chosen XCI status used in the meta-analysis.
Additional file 2: Figure S2. Association of X chromosome genetic variants with attempted suicide. Top row denotes results from the XCI-informed approach. Bottom row denotes results from the XCI-robust approach. Green line denotes the study-wide significance threshold of 3.36 × 10⁻⁶. Domains shown in the colored bars beneath the Manhattan plots for the XCI-informed approach denote whether SNPs fall into regions subject to (red) or escaping from (blue) X chromosome inactivation. Grey denotes regions for which a domain (subject or escaping) could not be assigned based on Balaton et al. [31]. SNPs are colored by the chosen XCI status used in the meta-analysis.
Additional file 3: Figure S3. Association of X chromosome genetic variants with BE. Top row denotes results from the XCI-informed approach. Bottom row denotes results from the XCI-robust approach. Green line denotes the study-wide significance threshold of 3.36 × 10⁻⁶. Domains shown in the colored bars beneath the Manhattan plots for the XCI-informed approach denote whether SNPs fall into regions subject to (red) or escaping from (blue) X chromosome inactivation. Grey denotes regions for which a domain (subject or escaping) could not be assigned based on Balaton et al. [31]. SNPs are colored by the chosen XCI status used in the meta-analysis.
Additional file 4: Figure S4. Association of X chromosome genetic variants with AUD. Top row denotes results from the XCI-informed approach. Bottom row denotes results from the XCI-robust approach. Green line denotes the study-wide significance threshold of 3.36 × 10⁻⁶. Domains shown in the colored bars beneath the Manhattan plots for the XCI-informed approach denote whether SNPs fall into regions subject to (red) or escaping from (blue) X chromosome inactivation. Grey denotes regions for which a domain (subject or escaping) could not be assigned based on Balaton et al. [31]. SNPs are colored by the chosen XCI status used in the meta-analysis.
Additional file 5: Table S1. Top SNPs under "XCI-informed" Approach.
Additional file 6: Table S2. Candidate SNPs for Association with Suicide Attempt.
Jons, W.A., Colby, C.L., McElroy, S.L. et al. Statistical methods for testing X chromosome variant associations: application to sex-specific characteristics of bipolar disorder. Biol Sex Differ 10, 57 (2019). https://doi.org/10.1186/s13293-019-0272-4
X chromosome
Genetic association
X chromosome inactivation
X chromosome statistical analysis
What is the evidence for 'billions of neutrinos pass through your body every second'?
This statement is repeated so often that it has become something of a cliché: 'billions of neutrinos pass through your body every second'. For example see 1, 2, 3, 4, 5, 6.
What is the evidence for it, especially considering that we have never detected even a hundred neutrinos in a second through one detector?
particle-physics standard-model neutrinos elementary-particles particle-detectors
Ritesh Singh
$\begingroup$ Related: physics.stackexchange.com/q/303858/44126 $\endgroup$ – rob♦ Feb 23 '20 at 17:44
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – Chris♦ Feb 27 '20 at 23:14
$\begingroup$ I'm disappointed that no-one refers to xkcd yet: Lethal Neutrinos $\endgroup$ – Ooker Feb 29 '20 at 7:28
As others have noted, the neutrinos come from the sun. Given that, there are two broad ways of estimating the flux of neutrinos: one is theoretical, and the other is experimental.
The theoretical way is based on the Standard Solar Model. This is a well understood model with solid experimental validation, and astronomers and astrophysicists therefore have great faith in it. According to this model, the solar neutrino flux is dominated by proton-proton fusion reactions, which generate an electron neutrino flux of approximately $6 \times 10^{10}\ \mathrm{cm^{-2}\ s^{-1}}$ at $1\ \mathrm{AU}$.
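As a rough cross-check on that number (my own back-of-the-envelope estimate, not a quoted figure): essentially all of the Sun's power comes from the pp chain, each completed chain releases about $26.7\ \mathrm{MeV}$ and two electron neutrinos, and the solar constant at 1 AU is about $1.4\ \mathrm{kW\,m^{-2}}$, so

$$\Phi_\nu \approx \frac{2\times\left(1.4\times 10^{3}\ \mathrm{W\,m^{-2}}\right)}{26.7\ \mathrm{MeV}} \approx \frac{2.8\times 10^{3}}{4.3\times 10^{-12}\ \mathrm{J}}\ \mathrm{m^{-2}\,s^{-1}} \approx 6\times 10^{14}\ \mathrm{m^{-2}\,s^{-1}} = 6\times 10^{10}\ \mathrm{cm^{-2}\,s^{-1}},$$

in agreement with the Standard Solar Model value (the small fraction of energy carried off by the neutrinos themselves doesn't change the order of magnitude).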
The experimental way is to build neutrino detectors and measure the flux. Because of the extremely small cross section, it is difficult to build a detector that can collect enough data to reduce the statistical and systematic uncertainty to qualify for a precision measurement. Nevertheless, a lot of work has been performed in this area and results have been obtained with uncertainties to within a percent or so.
The experimental results and theoretical predictions did not agree with each other; they were off by a factor of three, which was the so-called Solar Neutrino Problem. This was resolved by hypothesizing, and then experimentally verifying, that the electron neutrinos produced in the sun "oscillated" into other flavors of neutrinos (muon, tau) by the time they were detected on earth, so now the experimental neutrino flux measurements agree with the theoretical predictions.
Richter65
$\begingroup$ To play devil's advocate for a moment: how can you experimentally tell the difference between, say, 1 billion neutrinos / second of which 10/million are detected, and 10 billion neutrinos / second of which 1/million are detected? $\endgroup$ – TLW Feb 26 '20 at 3:04
$\begingroup$ @TLW: Just from the detected neutrino flux alone, you can't tell. But if theoretical calculation A, about the sun's output, says we should expect 1 billion neutrinos/sec, theoretical calculation B, about the detector, says we should expect to detect 10 per million of these, and we do indeed detect 10,000/s, then it's less likely that both the theoretical calculations are wrong in ways that cancel out perfectly — especially when there are also other theoretical calculations C, D, and E that also agree with this, and experiments X, Y, Z that test other predictions of the theory. $\endgroup$ – PLL Feb 26 '20 at 8:51
$\begingroup$ This is a great answer that abides by scientific principles rather than being sprinkled through with phrases like "we know that...". $\endgroup$ – Asteroids With Wings Feb 27 '20 at 1:07
$\begingroup$ Thank you @Asteroids! @TLW, to follow up on what PLL said (which was correct), there are various ways of experimentally measuring the fraction of neutrinos that get detected when they pass through a detector. The modern and most direct way is to create a neutrino beam of known intensity using a particle accelerator, and measuring the fraction of the beam that gets detected. A summary of these measurements can be found in the "bible of particle physics", the PDG: pdg.lbl.gov/2019/reviews/rpp2018-rev-nu-cross-sections.pdf $\endgroup$ – Richter65 Feb 27 '20 at 14:40
Those neutrinos come from the Sun. Fusion converts protons to neutrons, so that must produce neutrinos. One can calculate the number of nuclear reactions necessary for the power output, and get a number for the neutrino flux.
One can also estimate the flux from the cross section of the detector.
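Schematically, the detected event rate $R$ in a detector with $N_t$ target nuclei exposed to a flux $\Phi$ is

$$R = \Phi\,\sigma\,N_t \quad\Longrightarrow\quad \Phi = \frac{R}{\sigma N_t},$$

so a measured rate plus an independently known interaction cross section $\sigma$ yields the flux.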
The two rates differ by a factor of about three. That was resolved by the neutrino oscillations between the three flavors (electron, muon and tauon neutrinos).
Pieter
$\begingroup$ To expand a bit on @Pieter : If we know, for example, that 99.9999999% of any neutrinos that pass through a detector won't be detected, and our detector actually detects one per day, then we know that approximately $1/(1-0.999999999)$ neutrinos per day have passed through the detector. "Cross section" relates to the fraction of neutrinos passing throuth the detector will be detected. $\endgroup$ – S. McGrew Feb 23 '20 at 17:51
$\begingroup$ @S.McGrew Indeed. And different detectors (with different nuclei and different detection probabilities) measured the same flux. Which was about a factor three smaller than expected. So the experimental uncertainties are not large at all. $\endgroup$ – Pieter Feb 23 '20 at 18:00
$\begingroup$ Since they differ from each other, "The two rates differ by a factor of about three" is more clear. "Both differ" makes it sound like there is some third value that they both differ from. $\endgroup$ – Acccumulation Feb 24 '20 at 5:54
$\begingroup$ @S.McGrew - being the devil's advocate for a moment: how can we know that "99.9999999% of any neutrinos that pass through a detector won't be detected"? $\endgroup$ – TLW Feb 26 '20 at 3:01
$\begingroup$ @TLW I don't actually know, but my best guess would be that cross sections were experimentally determined using particle accelerator experiments. You can generate neutrinos in well controlled conditions so you know exactly how many to expect, and then see how many pass through your detector undetected. Edit: see Anna's answer for more about particle accelerators $\endgroup$ – craq Feb 26 '20 at 9:21
The existence of the neutrino was established using energy and momentum conservation in neutron decays. There have been experiments with neutrino and antineutrino beams at both CERN and Brookhaven, which established their interaction cross section with matter. Getting one neutrino to interact in the detector means that thousands have passed through without interacting, according to the theoretical calculations. Your "we have never detected even a hundred neutrinos in a second through one detector" is misleading, because each one we do detect implies, via the calculated cross section, that the beam flux predicted by the theory is correct.
There exists a solid theory that can estimate the number of neutrinos given certain assumptions of what the cosmic charged particle background is.
For example, we measure the muon flux at sea level, and muons decay into an electron, a muon neutrino, and an electron antineutrino, so we know from the kinematics what the muon-induced neutrino flux at sea level is (the muon flux itself averages about 1 muon per square centimeter per minute, far from billions).
There are detectors detecting solar neutrinos, and those also agree with the mainstream theory of weak interactions. Those do fulfill the billions recipe: the flux of solar neutrinos at the earth's surface is on the order of $10^{11}$ per square centimeter per second.
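Multiplying by a human cross-sectional area of order $10^{3}$–$10^{4}\ \mathrm{cm^2}$ (a rough assumption), that gives

$$\dot N \sim 10^{11}\ \mathrm{cm^{-2}\,s^{-1}} \times 10^{3\text{–}4}\ \mathrm{cm^{2}} \sim 10^{14}\text{–}10^{15}\ \mathrm{s^{-1}},$$

so "billions per second" is, if anything, a large understatement.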
Theory also says that there should be cosmic relic neutrinos, coming from their decoupling in the Big Bang model, similar to the cosmic microwave background; these would add orders of magnitude more very-low-energy background neutrinos, but for now this remains a theoretical prediction.
anna v
$\begingroup$ Technically .. our sun alone produces enough neutrinos that PER second per square CENTIMETER about 60 billion neutrinos reach earth (roughly the area covered by your thumbnail) .. [american billion] $\endgroup$ – eagle275 Feb 24 '20 at 8:31
$\begingroup$ and they keep going at night, passing through the planet with practically no losses. $\endgroup$ – dlatikay Feb 24 '20 at 19:01
$\begingroup$ This is a helpful explanation; +1. 'Your "that we have never detected even a hundred neutrinos in a second through one detector?" is misleading, because the one we do detect, mathematically means that the calculated beam flux is correct according to the theory.' I would disagree that the statement is misleading. It's a legitimate question, with a good answer. $\endgroup$ – LarsH Feb 25 '20 at 14:32
March 2018, 17(2): 671-707. doi: 10.3934/cpaa.2018036
Convergent approximation of non-continuous surfaces of prescribed Gaussian curvature
Brittany Froese Hamfeldt
Department of Mathematical Sciences, New Jersey Institute of Technology, University Heights, Newark, NJ 07102, USA
Received January 2017 Revised August 2017 Published March 2018
Fund Project: This work was partially supported by NSF DMS-1619807.
We consider the numerical approximation of surfaces of prescribed Gaussian curvature via the solution of a fully nonlinear partial differential equation of Monge-Ampère type. These surfaces need not be continuous up to the boundary of the domain and the Dirichlet boundary condition must be interpreted in a weak sense. As a consequence, sub-solutions do not always lie below super-solutions, standard comparison principles fail, and existing convergence theorems break down. By relying on a geometric interpretation of weak solutions, we prove a relaxed comparison principle that applies only in the interior of the domain. We provide a general framework for proving existence and stability results for consistent, monotone finite difference approximations and modify the Barles-Souganidis convergence framework to show convergence in the interior of the domain. We describe a convergent scheme for the prescribed Gaussian curvature equation and present several challenging examples to validate these results.
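For orientation, the prescribed Gaussian curvature equation for a graph $u:\Omega\subset\mathbb{R}^2\to\mathbb{R}$ is typically written as the Monge-Ampère type equation

$$\det D^2u(x) = K(x)\left(1+|\nabla u(x)|^2\right)^{2}\ \text{in } \Omega, \qquad u = g\ \text{on } \partial\Omega,$$

where $K$ is the prescribed curvature and the Dirichlet data $g$ is interpreted in the weak sense described in the abstract. This is the standard form of the equation; the paper's precise normalization and boundary formulation may differ.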
Keywords: Gaussian curvature, elliptic partial differential equations, Monge-Ampère equations, viscosity solutions, finite difference methods.
Mathematics Subject Classification: Primary: 35D40, 35J15, 35J25, 35J60, 65N06, 65N22; Secondary: 35J66, 35J67, 35J70, 35J96, 53A99.
Citation: Brittany Froese Hamfeldt. Convergent approximation of non-continuous surfaces of prescribed Gaussian curvature. Communications on Pure & Applied Analysis, 2018, 17 (2) : 671-707. doi: 10.3934/cpaa.2018036
Figure 1. (a) A viscosity solution with constant Gaussian curvature that does not achieve the Dirichlet boundary conditions and (b) a sub-solution that lies above this viscosity solution
Figure 2. A finite difference stencil chosen from a point cloud (a) in the interior and (b) near the boundary
Figure 3. Computational point cloud with $h=2^{-3}$
Figure 4. Computed approximations ($h=2^{-7}$) to solutions that (a) are Lipschitz continuous (6.4.1), (b) have an unbounded gradient (6.4.2), and (c) do not achieve the Dirichlet data (6.4.3). (d) Error in discontinuous solution
Table 1. Error in computed solutions
$C^{0, 1}$ $C^0$ Non-continuous
$h$ $\|u-u^h\|_\infty$ $\|u-u^h\|_\infty$ $\|u-u^h\|_\infty$ $\|u-u^h\|_\infty$
$2^{-3}$ $9.45\times10^{-2}$ $1.94\times10^{-1}$ $3.55\times10^{-1}$ $2.12\times10^{-1}$
Feynman Lectures Vol I 41-2: why can $\omega$ replace $\omega_{0}$ in deriving Rayleigh's law?
NB: I understand that the model presented here is the "failing" classical model. I'm just trying to understand the formal reasoning of the model.
In The Feynman Lectures on Physics Vol. I, Ch. 41-2, Thermal equilibrium of radiation, Feynman begins with the model of a rarefied gas confined to a perfectly mirrored box, in thermal equilibrium with the ambient electromagnetic radiation in the box. He assumes the gas atoms are classical oscillators of natural frequency $\omega_{0}$, and thereby establishes Equation 41-12:
$$I\left[\omega_{0}\right]=\frac{9\gamma^{2}kT}{4\pi^{2}r_{0}^{2}\omega_{0}^{2}}.$$
The derivation begins with the assumption that the atoms have one resonant frequency $\omega_{0}.$ But he tells us
Then we substitute the formula (41.6) [$\gamma=\frac{\omega_{0}}{Q}=\frac{2}{3}\frac{r_{0}\omega_{0}^{2}}{c},$] for gamma (do not worry about writing $\omega_{0};$ since it is true of any $\omega_{0}$, we may just call it $\omega$) and the formula for $I\left[\omega\right]$ then comes out
$$I\left[\omega\right]=\frac{\omega^{2}kT}{\pi^{2}c^{2}}.$$
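For concreteness, the substitution he refers to works out as follows (my own algebra, not spelled out in the lecture):
$$I\left[\omega\right]=\frac{9kT}{4\pi^{2}r_{0}^{2}\omega^{2}}\left(\frac{2}{3}\frac{r_{0}\omega^{2}}{c}\right)^{2}=\frac{9kT}{4\pi^{2}r_{0}^{2}\omega^{2}}\cdot\frac{4r_{0}^{2}\omega^{4}}{9c^{2}}=\frac{\omega^{2}kT}{\pi^{2}c^{2}}.$$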
He goes on to say:
First, let us notice a remarkable feature of that expression. The charge of the oscillator, the mass of the oscillator, all properties specific to the oscillator, cancel out, because once we have reached equilibrium with one oscillator, we must be at equilibrium with any other oscillator of a different mass, or we will be in trouble.
I'm not sure how to comprehend that. My best guess is that it means, given a gas of atoms resonant at $\omega_{0}$ in thermal equilibrium with the ambient radiation of a perfectly reflective cavity, if another kind of atom with natural frequency $\omega_{1}$ were mixed in at the same temperature and pressure, there would be no change in the ambient radiation profile. That is, the original gas "wouldn't know the difference".
We can therefore assume the radiation profile is independent of the kinds of oscillators present. That is, the intensity at frequency $\omega$ and temperature $T$ is the same, whether or not there are atomic oscillators resonant at $\omega$.
Is that a good way to think about this?
PS: In the past when reading these lectures, I attempted to "understand" the topics well enough to justify turning the page. This time I am attempting to summarize everything. It is far more difficult than I had expected.
quantum-mechanics thermodynamics statistical-mechanics thermal-radiation kinetic-theory
Steven Thomas Hatton
I believe that in this model the black body is assumed to contain oscillators of all possible natural frequencies $\omega$ (since the black body is by definition able to absorb all frequencies). Each of these oscillators contributes to the ambient electromagnetic radiation present in the box when the whole system has reached thermal equilibrium.
Now we consider only oscillators with natural frequency $\omega_0$. At equilibrium we need that these emit exactly the same amount of radiation as they absorb from the ambient radiation (if not, they would lose energy and the whole system would cool). From Feynman's analysis we see that in order for this to happen the intensity of the ambient radiation with frequency $\omega_0$ must go like $\omega_0^2$ (the intensities of other frequencies are not important since only those near $\omega_0$ are absorbed). But we can apply the same argument for all other oscillators of all other possible frequencies, deducing that for any frequency $\omega$ the intensity of ambient radiation of that frequency goes like $\omega^2$.
Guest
$\begingroup$ My reading of Feynman is that the classical black body radiation profile is the same whether there is one species of oscillator with natural frequency $\omega_{0}$; two types of oscillators with resonance at $\omega_{0}$ and $\omega_{1}$, respectively; $n$ types, each with its own resonant frequency, or a continuous distribution. Also, there is no specification of the relative number density of each type of oscillator. So, no matter what the mix of oscillators, the classically predicted distribution will be given by the same formula: $I\left[\omega\right]=\frac{\omega^{2}kT}{\pi^{2}c^{2}}.$ $\endgroup$ – Steven Thomas Hatton May 13 '18 at 19:38
$\begingroup$ No, we should assume oscillators of all frequencies are present. If we only had one species of oscillator with freq. ω0 we could only deduce from Feynman's argument that in the ambient radiation I[ω0] goes like ω0^2. The argument does not allow us to say anything about the intensity of general ω. Only if we assume oscillators of all frequencies are present can we replace ω0 by ω. You should be able to see that the relative number density of each type of oscillator does not matter in the argument (only one oscillator at any particular frequency would do!) $\endgroup$ – Guest May 14 '18 at 14:12
$\begingroup$ I am familiar with the model of black body radiation for a continuous distribution of oscillators. A metal with free "conduction" electrons and soot are two examples. Attend to the second quote included in the original post: "...once we have reached equilibrium with one oscillator, we must be at equilibrium with any other oscillator..." Applied to a single species of oscillator, with a single fundamental frequency, (as I understand him) Feynman asserts that the profile is indistinguishable from that of a continuous distribution of fundamental frequencies. $\endgroup$ – Steven Thomas Hatton May 15 '18 at 16:18
$\begingroup$ Feynman's quoted statement is correct. He means if we think one oscillator in the gas has reached equilibrium all others present must have too. So the formula for the ambient radiation better not depend on the mass of the oscillator since then two oscillators of the same resonant frequency w0 but different masses could never reach equilibrium because they would both need different intensities of the ambient radiation at w0 depending on their mass. $\endgroup$ – Guest May 16 '18 at 8:49
$\begingroup$ Your second statement is wrong. If we had a gas containing oscillators of a single natural frequency then clearly we would have only one frequency of radiation present in the ambient radiation - the resonant frequency of the oscillator. $\endgroup$ – Guest May 16 '18 at 8:52
Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods
David Montes de Oca Zapiain, James A. Stewart & Rémi Dingreville
npj Computational Materials, volume 7, Article number: 3 (2021)
Subjects: Structure of solids and liquids; Theory and computation
The phase-field method is a powerful and versatile computational approach for modeling the evolution of microstructures and associated properties for a wide variety of physical, chemical, and biological systems. However, existing high-fidelity phase-field models are inherently computationally expensive, requiring high-performance computing resources and sophisticated numerical integration schemes to achieve a useful degree of accuracy. In this paper, we present a computationally inexpensive, accurate, data-driven surrogate model that directly learns the microstructural evolution of targeted systems by combining phase-field and history-dependent machine-learning techniques. We integrate a statistically representative, low-dimensional description of the microstructure, obtained directly from phase-field simulations, with either a time-series multivariate adaptive regression splines autoregressive algorithm or a long short-term memory neural network. The neural-network-trained surrogate model shows the best performance and accurately predicts the nonlinear microstructure evolution of a two-phase mixture during spinodal decomposition in seconds, without the need for "on-the-fly" solutions of the phase-field equations of motion. We also show that the predictions from our machine-learned surrogate model can be fed directly as an input into a classical high-fidelity phase-field model in order to accelerate the high-fidelity phase-field simulations by leaping in time. Such a machine-learned phase-field framework opens a promising path forward to use accelerated phase-field simulations for discovering, understanding, and predicting processing–microstructure–performance relationships.
The phase-field method is a popular mesoscale computational method used to study the spatio-temporal evolution of a microstructure and its physical properties. It has been extensively used to describe a variety of important evolutionary mesoscale phenomena, including grain growth and coarsening1,2,3, solidification4,5,6, thin-film deposition7,8, dislocation dynamics9,10,11, vesicles formation in biological membranes12,13, and crack propagation14,15. Existing high-fidelity phase-field models are inherently computationally expensive because they solve a system of coupled partial differential equations for a set of continuous field variables that describe these processes. At present, the efforts to minimize computational costs have focused primarily on leveraging high-performance computing architectures16,17,18,19,20,21 and advanced numerical schemes22,23,24, or on integrating machine-learning algorithms with microstructure-based simulations25,26,27,28,29,30,31. For example, leading studies have constructed surrogate models capable of rapidly predicting microstructure evolution from phase-field simulations using a variety of methods, including Green's function solution25, Bayesian optimization26,28, or a combination of dimensionality reduction and autoregressive Gaussian processes29. Yet, even for these successful solutions, the key challenge has been to balance the accuracy with computational efficiency. For instance, the computationally efficient Green's function solution cannot guarantee accurate solutions for complex, multi-variable phase-field models. In contrast, Bayesian optimization techniques can solve complex, coupled phase-field equations, but at a higher computational cost (although the number of simulations to be performed is kept to a minimum, since each subsequent simulation's parameter set is informed by the Bayesian optimization protocol). Autoregressive models are only capable of predicting microstructural evolution for the values for which they were trained, limiting the ability of this class of models to predict future values beyond the training set. For all three classes of models, computational cost-effectiveness decreases as the complexity of the microstructure evolution process increases.
In this work, we create a cost-minimal surrogate model capable of solving microstructural evolution problems in fractions of a second by combining a statistically representative, low-dimensional description of the microstructure evolution obtained directly from phase-field simulations with a history-dependent machine-learning approach (see Fig. 1). We illustrate this protocol by simulating the spinodal decomposition of a two-phase mixture. The results produced by our surrogate model were achieved in fractions of a second (lowering the computational cost by four orders of magnitude) and showed only a 5% loss in accuracy compared to the high-fidelity phase-field model. To arrive at this improvement, our surrogate model reframes the phase-field simulations as a multivariate time-series problem, forecasting the microstructure evolution in a low-dimensional representation. As illustrated in Fig. 1, we accomplish our accelerated phase-field framework in three steps. We first perform high-fidelity phase-field simulations to generate a large and diverse set of microstructure evolutionary paths as a function of the phase fraction, cA, and the phase mobilities, MA and MB (Fig. 1a). We then capture the most salient features of the microstructures by calculating the microstructures' autocorrelations and we subsequently perform principal component analysis (PCA) on these functions in order to obtain a faithful low-dimensional representation of the microstructure evolution (Fig. 1b). Lastly, we utilize a history-dependent machine-learning approach (Fig. 1c) to learn the time-dependent evolutionary phenomena embedded in this low-dimensional representation to accurately and efficiently predict the microstructure evolution without solving computationally expensive phase-field-based evolution equations. We compare two different machine-learning techniques, namely time-series multivariate adaptive regression splines (TSMARS)32 and a long short-term memory (LSTM) neural network33, to gauge their efficacy in developing surrogate models for phase-field predictions. These methods are chosen due to their non-parametric nature (i.e. they do not have a fixed model form), and their demonstrated success in predicting complex, time-dependent, nonlinear behavior32,34,35,36. Based on the comparison of results, we chose the LSTM neural network as the primary machine-learning architecture to accelerate phase-field predictions (Fig. 1c), because the LSTM-trained surrogate model yielded better accuracy and long-term predictability, even though it is more demanding and finicky to train than the TSMARS approach.
Fig. 1: Machine-learned surrogate model for accelerating phase-field based microstructure evolution predictions.
a Data preparation to generate training and testing phase-field data sets. b Low-dimensional representation of the microstructure evolution. c Time-series analysis using a long short-term memory (LSTM) neural network to predict the time evolution of the microstructure principal component scores. d Prediction from the accelerated phase-field framework based on the first three steps.
Hence, the present study consists of three major parts: (i) constructing surrogate models trained via machine-learning methods based on a large phase-field simulation data set; (ii) executing these models to produce accurate and rapid predictions of the microstructure evolution in a low-dimensional representation; (iii) performing accelerated high-fidelity phase-field simulations using the predictions from this machine-learned surrogate model.
Low-dimensional representation of phase-field results
We base the formulation of our history-dependent surrogate model on a low-dimensional representation of the microstructure evolution. To this end, we first generated large training (5000 simulations) and moderate testing (500 simulations) phase-field data sets for the spinodal decomposition of an initially random microstructure by independently sampling the phase fraction, cA, and phase mobilities, MA and MB, and using our in-house multiphysics phase-field modeling code MEMPHIS (mesoscale multiphysics phase-field simulator)8,39. The results of these simulations gave a wide variety of microstructural evolutionary paths. Details of our phase-field model and numerical solution are provided in "Methods" and in Supplementary Note 1. Examples of microstructure evolutions as a function of time for different set of model parameters (cA, MA, MB) are reported in Supplementary Note 2.
We then calculated the autocorrelation \({{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)\) of the spatially dependent composition field c(x, ti) at equally spaced time intervals ti for each spinodal decomposition phase-field simulation in our training set. Additional information on the calculation of the autocorrelation is provided in "Methods". For a given microstructure, the autocorrelation function can be interpreted as the conditional probability that two points at positions x1 and x2 within the microstructure, or equivalently for a random vector r = x2 − x1, are found to be in phase A. Because the microstructures of interest comprise two phases, the microstructure's autocorrelation and its associated radial average, \(\overline{S}(r,{t}_{i})\), contain the same information about the microstructure as the high-fidelity phase-field simulations. For example, the volume fraction of phase A, cA, is the value of the autocorrelation at the center point, while the average feature size of the microstructure corresponds to the first minimum of \(\overline{S}(r,{t}_{i})\) (i.e. \({\mathrm d}\overline{S}(r,{t}_{i})/{\mathrm d}r=0\)). Collectively, this set of autocorrelations provides us with a statistically representative quantification of the microstructure evolution as a function of the model inputs (cA, MA, MB)40,41,42. Figure 2a illustrates the time evolution of the microstructure, its autocorrelation, and the radial average of the autocorrelation for phase A for one of our simulations at three distinct time frames. For all the simulations in our training and testing data set, we observe similar trends for the microstructure evolution, regardless of the phase fraction and phase mobilities selected. We first notice that, at the initial frame t0, the microstructure has no distinguishable feature since the compositional field is randomly distributed spatially. We then observe the rapid formation of subdomains between frame t0 and frame t10, followed by a smooth and slow coalescence and evolution of the microstructure from frame t10 until the end of the simulation at frame t100. Based on this observation, we trained our machine-learned surrogate model starting at frame t10, once the microstructure reached a slow and steady evolution regime.
Fig. 2: Low-dimensional representation of the microstructure evolution.
a Transformation of a two-phase microstructure realization (top row) to its autocorrelation representation (middle row: autocorrelation; bottom row: radial average) at three separate frames (t0, t10, and t100). b Microstructure evolution trajectories over 100 frames represented as a function of the first three principal components. c Cumulative variance explained as a function of the number of principal components included in the representation of the microstructure.
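As an illustration, the radial average \(\overline{S}(r,{t}_{i})\) shown in Fig. 2a can be computed from a centered autocorrelation in a few lines of NumPy; this is a minimal sketch, not the code used in this work, and it assumes a square grid. The average feature size is then the location of the first minimum of the returned profile.

```python
import numpy as np

def radial_average(S2):
    """Azimuthal average of a centered 2D autocorrelation S2 (square grid)."""
    n = S2.shape[0]
    y, x = np.indices(S2.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)   # integer radial bins
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=S2.ravel())
    return sums / np.maximum(counts, 1)                # mean value per radius
```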
We simplified the statistical, high-dimensional microstructural representation given by the microstructures' autocorrelations via PCA25,43,44. This operation enables us to construct a low-dimensional representation of the time evolution of the microstructure spatial statistics, while at the same time still faithfully capturing the most salient features of the microstructure and its evolution. Details on PCA are provided in "Methods". Figure 2b shows the 5500 microstructure evolutionary paths from our training and testing data sets for the first three principal components. For the 5000 microstructure evolutionary paths in our training data set, the principal components are fitted to the phase-field data. For the 500 microstructure evolutionary paths in our testing data set, the principal components are projected. In the reduced space, we can make the same observations regarding the evolution of the microstructure: a rapid microstructure evolution followed by a steady, slow evolution. In Fig. 2c, we show that we only need the first 10 principal components to capture over 98% of the variance in the data set. Thus, we use the time evolution of these 10 principal components to construct our low-dimensional representation of the microstructure evolution. Therefore the dimensionality of the microstructure evolution problem was reduced from a (512 × 512) × 100 to a 10 × 100 spatio-temporal space.
LSTM neural network parameters and architecture
The previous step captured the time history of the microstructure evolution in a statistical manner. We combine the PCA-based representation of the microstructure with a history-dependent machine-learning technique to construct our microstructure evolution surrogate model. Based on performance, we employed a LSTM neural network, which uses the model inputs (cA, MA, MB) and the previous known time history of the microstructure evolution (via a sequence of previous principal scores) to predict future time steps (results using TSMARS, which uses the "m" most recent known and predicted time frames of the microstructure history to predict future time steps, are discussed in Supplementary Note 3).
In order to develop a successful LSTM neural network, we first needed to determine its optimal architecture (i.e. the number of LSTM cells defining the neural network, see Supplementary Note 4 for additional details) as well as the optimal number of frames on which the LSTM needs to be trained. We determined the optimal number of LSTM cells by training six different LSTM architectures (architectures comprising 2, 4, 14, 30, 40, and 50 LSTM cells) for 1000 epochs. For all these architectures, we added a fully connected layer after the last LSTM cell in order to produce the desired output sequence of principal component scores. We trained each of these architectures on the sequence of principal component scores from frame t10 to frame t70 for each of the 5000 spinodal decomposition phase-field simulations in our training data set. As a result, each different LSTM architecture was trained on a total of 300,000 time observations (i.e. 5000 sequences comprised of 60 frames). To prevent overfitting, we kept the number of training weights among all the different architectures constant at approximately one half of the total time observations (i.e. ~150,000) by modifying the hidden layer size of each different architecture accordingly. The training of each LSTM architecture required 96 hours of training using a single node with 2.1 GHz Intel Broadwell®E5-2695 v4 processors with 36 cores per node and 128 GB RAM per node. Details of the LSTM architecture are provided in Supplementary Note 4.
In Fig. 3a, we report our training and validation loss results for the six different LSTM architectures tested for the first principal component. Our results show that the architectures with two and four cells significantly outperformed those with more cells. These results are not a matter of overfitting with more cells, since the sparser (in number of cells) networks train better as well. Rather, this behavior reflects the fact that, just as in traditional neural networks, the deeper the LSTM architecture, the more observations the network needs in order to learn; architectures with fewer cells therefore outperform deeper ones on the "limited" data set on which we are training. Additionally, for those same reasons, the two-cell LSTM architecture converged faster than the four-cell one, and it is therefore our architecture of choice. The best-performing architecture, and the one we use for the rest of this work, is thus the two-cell LSTM network with one fully connected layer.
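Such a two-cell network with a fully connected output layer can be assembled in a few lines of Keras. The following is a minimal sketch under our assumptions about shapes, not the exact network trained here: input sequences carry 13 features per frame (the 10 PC scores plus cA, MA, MB), the output is the time-shifted sequence of 10 PC scores, and the hidden size of 64 is a placeholder for the tuned value that keeps roughly 150,000 trainable weights.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(None, 13)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # second LSTM cell
    tf.keras.layers.Dense(10),   # fully connected layer -> 10 PC scores per frame
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_seq, Y_seq, epochs=1000) on the 5000 training sequences
```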
Fig. 3: LSTM architecture and calibration.
a Learning curves as a function of the number of epochs for both training and validation sets. b Accuracy of the LSTM network for the absolute relative error, \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\), as a function of the number of frames used for training. c Accuracy of the LSTM network for the normalized distance, \({D}^{(k)}\left({t}_{i}\right)\), as a function of the number of frames used for training. In b and c, the dashed green line indicates the 5% error value, while the black lines indicate the mean value of the absolute relative error and normalized distance respectively at various frames ti.
Regarding the optimal number of frames, we assessed the accuracy of the six different LSTM architectures using two error metrics for each of the realizations k in our training and testing data sets and for each frame ti. The first error metric is based on the absolute relative error \({\mathrm{ARE}}^{(k)}\left({t}_{i}\right)\), which quantifies the accuracy of the model in predicting the average microstructural feature size. The second error, \({D}^{(k)}\left({t}_{i}\right)\), uses the Euclidean distance between the predicted and true autocorrelations normalized by the Euclidean norm of the true autocorrelation. This error metric provides insights into the local accuracy of the predicted autocorrelation on a per-voxel basis. Upon convergence of these two metrics, the optimal number of frames on which the LSTM needs to be trained guarantees that the predicted autocorrelation is accurate at a local level but also in terms of the average feature size. Descriptions of the error metrics are provided in "Methods". We trained the different neural networks starting from frame t10 onwards. We then evaluated the following number of training frames: 1, 2, 5, 10, 20, 40, 60, and 80. Recall that the number of frames controls the number of time observations. Therefore, just as before, in order to prevent overfitting, we ensured that the number of weights trained was roughly half of the time observations.
In Fig. 3b, c, we provide the results for both \({\mathrm {ARE}}^{(k)}\left({t}_{100}\right)\) and \({D}^{(k)}\left({t}_{100}\right)\) with respect to the number of frames for which the LSTM was trained. The mean value of each distribution is indicated with a thick black line, and the dashed green line indicates the 5% accuracy difference target. Our convergence study shows that we achieved a good overall accuracy for the predicted autocorrelation when the LSTM neural network was trained for 80 frames. It is interesting to note that fewer frames were necessary to achieve convergence for the normalized distance (Fig. 3c) than for the average feature size (Fig. 3b).
Surrogate model prediction and validation
We then evaluated the quality and accuracy of the best-performing LSTM surrogate model (i.e. the one with the two-cell architecture and one fully connected layer, trained for 80 frames) for predicting the microstructure evolution for frames ranging from t91 to t100 and for each set of parameters in both our testing and training sets. We report these validation results in Fig. 4.
Fig. 4: Performance and predictability of LSTM-trained surrogate model.
a Predicted absolute relative error, \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\), from frames t91 to t100. b Predicted normalized distance, \({D}^{(k)}\left({t}_{i}\right)\), from frames t91 to t100. In a, b the dashed green line indicates the 5% error value, while the black lines indicate the mean value of the absolute relative error and normalized distance, respectively, at various frames ti. c Point-wise error comparison of the predicted vs. true autocorrelation for a microstructure randomly selected in our test set. d Cumulative probability distribution of the ARE at frame t100 for that microstructure. e Comparison of the predicted (dotted red line) vs. true (solid blue line) radial average autocorrelation.
For our error metrics \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\) and \({D}^{(k)}\left({t}_{i}\right)\), our results show an approximate average 5% loss in accuracy compared to the high-fidelity phase-field results, as seen in Fig. 4a, b. The mean value of the loss of accuracy for \({\mathrm {ARE}}^{(k)}\left({t}_{i}\right)\) is 5.3% for the training set and 5.4% for the testing set. The mean value of the loss of accuracy for \({D}^{(k)}\left({t}_{i}\right)\) is 6.8% for the training set and 6.9% for the testing set. Additionally, the loss of accuracy from our machine-learned surrogate model is constant as we further predict the microstructure evolution in time beyond the number of training frames. This is not surprising since the LSTM neural network utilizes the entire previous history of the microstructure evolution to forecast future frames.
In Fig. 4c–e, we further illustrate the good accuracy of our machine-learned surrogate model by analyzing in detail our predictions for a randomly selected microstructure (i.e. for a randomly selected set of model inputs cA, MA, and MB) in our testing data set at frame t100. In Fig. 4c, d, we show the point-wise error between the predicted and true autocorrelation for that microstructure, and the corresponding cumulative probability distribution. Overall, we notice a good agreement between the two microstructure autocorrelations, with the greatest error incurred for the long-range microstructural feature correlations. This error is easily understood, given the relatively small number of principal components retained in our low-dimensional microstructural representation. An even better agreement could have been achieved if additional principal components had been included. As seen in Fig. 4e, the predictions for the characteristic feature sizes in the microstructure given by our surrogate model are in good agreement with those obtained directly from the high-fidelity phase-field model. These results show that, despite some local errors, the microstructures simulated by the high-fidelity phase-field model and the ones predicted by our machine-learned surrogate model are statistically similar. Finally, we note that both our training and testing data sets cover a range of phase-field input parameters that correspond to a majority of problems of interest, avoiding issues with extrapolating outside of that range.
Computational efficiency
The results above not only illustrated the good accuracy relative to the high-fidelity phase-field model for the broad range of model parameters (cA, MA, MB), but they were also computationally efficient. The two main computational costs in our accelerated phase-field protocol were one-time costs incurred during (i) the execution of \({N}_{{\rm{sim}}}=5000\) high-fidelity phase-field simulations to generate a data set of different microstructure evolutions as a function of the model parameters and (ii) the training of the LSTM neural network. Our machine-learned surrogate model predicted the time-shifted principal component score sequence of 10 frames (i.e. a total of 5,000,000 time steps) in 0.01 s, with an additional 0.05 s to reconstruct the microstructure from the autocorrelation on a single node with 36 processors. In contrast, the high-fidelity phase-field simulations required approximately 12 minutes on 8 nodes with 16 processors per node using our high-performance computing resources for the same calculation of 10 frames. The computational gain factor was obtained by first dividing the total time of the LSTM-trained surrogate model by 3.55 (given the fact that the LSTM-trained model uses approximately a quarter of the computational resources). Subsequently, the total time of the high-fidelity phase-field model to compute 10 frames (i.e. 12 minutes) was divided by the time obtained in the previous step. As such, the LSTM model yields results 42,666 times faster than the full-scale phase-field method. Although the set of model inputs can introduce some variability in computing time, once trained, the computing time of our surrogate model was independent of the selection of input parameters to the surrogate model.
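As a quick check of this arithmetic (ours, from the timings quoted above): the surrogate needs 0.01 + 0.05 = 0.06 s on roughly a quarter of the resources, so the gain factor is

$$\frac{12\times 60\ \text{s}}{0.06\ \text{s}/3.55}\approx 4.3\times 10^{4},$$

in line with the reported factor of 42,666.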
Acceleration of phase-field predictions
We have demonstrated a robust, fast, and accurate way to predict microstructure evolution by considering a statistically representative, low-dimensional description of the microstructure evolution integrated with a history-dependent machine-learning approach, without the need for "on-the-fly" solutions of phase-field equations of motion. This computationally efficient and accurate framework opens a promising path forward to accelerate phase-field predictions. Indeed, as illustrated in Fig. 5, we showed that the predictions from our machine-learned surrogate model can be fed directly as an input to a classical high-fidelity phase-field model in order to accelerate the high-fidelity phase-field simulations by leaping in time. We used a phase-recovery algorithm30,37,38 to reconstruct the microstructure (Fig. 5a) from the microstructure autocorrelation predicted by our LSTM-trained surrogate model at frame t95 (details of the phase-recovery algorithm are provided in Supplementary Note 5; a simplified sketch of the idea is given after Fig. 5). We then used this reconstructed microstructure as the initial microstructure in a regular high-fidelity phase-field simulation and let the microstructure further evolve to frame t100 (Fig. 5b). Our results in Fig. 5c–e showed that the microstructure predicted solely by a high-fidelity phase-field simulation and that obtained from our accelerated phase-field framework are statistically similar. Even though our reconstructed microstructure has some noise due to deficiencies associated with the phase-recovery algorithm30, the phase-field method rapidly regularized and smoothed out the microstructure as it further evolved. Hence, besides drastically reducing the computational time required to predict the last five frames (i.e. 2,500,000 time steps), our accelerated phase-field framework enables us to "time jump" to any desired point in the simulation with minimal loss of accuracy. This maneuverability is advantageous since we can make use of this accelerated phase-field framework to rapidly explore a vast phase-field input space for problems where evolutionary mesoscale phenomena are important. The intent of the present framework is not to embed physics per se; rather, our machine-learned surrogate model learns the behavior of a time-dependent functional relationship (which is a function of many input variables) to represent the microstructure evolution problem. However, even though we have trained our machine-learned surrogate model over a broad range of input parameter values, and over a range of initial conditions, these may not necessarily be representative of the generality of phase-field methods, which can have many types of nonlinearities and non-convexities in the free energy. We further discuss this point in the section "Beyond spinodal decomposition".
Fig. 5: Accelerated phase-field predictions.
a Reconstructed microstructure from the LSTM-trained surrogate model using a phase-recovery algorithm. b Phase-field predictions using LSTM-trained surrogate model as an input. c Point-wise error between predicted and true microstructure evolution. d Cumulative probability distribution of the absolute relative error on characteristic microstructural feature size. e Comparison of radial average of the microstructure autocorrelation between predicted (red) and true (black) microstructure evolution.
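The flavor of the phase-recovery step can be conveyed with a crude alternating-projection scheme in the spirit of Gerchberg–Saxton; the sketch below is our simplified stand-in, not the algorithm of refs. 30,37,38. It imposes the Fourier magnitudes implied by the target autocorrelation and a binary, fraction-preserving constraint in real space.

```python
import numpy as np

def phase_recovery(S2, n_iter=200, seed=0):
    """Crude alternating-projection sketch: build a two-phase field whose
    autocorrelation approximates the centered target S2."""
    cA = S2[tuple(s // 2 for s in S2.shape)]            # zero-lag value = phase fraction
    mag = np.sqrt(np.abs(np.fft.fftn(np.fft.ifftshift(S2))) * S2.size)
    rng = np.random.default_rng(seed)
    I = rng.random(S2.shape)                            # random initial guess
    for _ in range(n_iter):
        F = np.fft.fftn(I)
        F = mag * np.exp(1j * np.angle(F))              # impose target magnitudes
        I = np.fft.ifftn(F).real
        I = (I > np.quantile(I, 1.0 - cA)).astype(float)  # keep fraction cA of phase A
    return I
```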
Comparison with other machine-learning approaches
The comparison of the TSMARS- and LSTM-trained surrogate models highlights both the advantages and drawbacks of using the LSTM neural network as the primary machine-learning architecture to accelerate phase-field predictions (see Supplementary Note 3 for TSMARS results). The TSMARS-trained model, which is an autoregressive time-series forecasting technique, proved to be less accurate for extrapolating the evolution of the microstructure than the LSTM-trained model, and demonstrated a dramatic loss of accuracy as the number of predicted time frames increases, with predictions acceptable only for a couple of time frames beyond the number of training frames. The TSMARS model proved unsuitable for establishing our accelerated phase-field framework because it uses predictions from previous time frames to predict subsequent time frames, thus compounding minor errors as the number of time frames increases. The LSTM architecture does not have this problem, since it only uses the microstructure history from previous time steps and not predictions to forecast a time-shifted sequence of future microstructure evolution. However, the LSTM model is computationally more expensive to train than the TSMARS model. Our LSTM architecture required 96 hours of training using a single node with 2.1 GHz Intel Broadwell® E5-2695 v4 processors with 36 cores per node and 128 GB RAM per node, whereas the TSMARS model only required 214 seconds on a single node on the same high-performance computer. Therefore, given its accuracy for predicting the next frame and its inexpensive nature, the TSMARS-trained model may prove useful for data augmentation in cases where the desired prediction of the microstructure evolution is not far ahead in time.
Beyond spinodal decomposition
There are several extensions to the present framework that can be implemented in order to improve the accuracy and acceleration performance. These improvements are related to (i) the dimensionality reduction of the microstructure evolution problem, (ii) the history-dependent machine-learning approach that can be used as an "engine" to accelerate predictions, and (iii) the extension to multi-phase, multi-field microstructure evolution problems. The first topic relates to improving the accuracy of the low-dimensional representation of the microstructure evolution in order to better capture the nonlinearities and non-convexities of the free energy representative of the system. The second and third topics relate to replacing the LSTM "engine" with another approach that can either improve accuracy, reduce the required amount of training data, or enable extrapolation over a greater number of frames. As we move forward, we anticipate that these extensions will enable better predictions and capture more complex microstructure evolution phenomena beyond the case study presented here.
Regarding the dimensionality reduction, several ameliorations can be made to the second step of the protocol presented in Fig. 1b. First, we can further improve the efficiency of our machine-learned surrogate model by incorporating higher-order spatial correlations (e.g., three-point spatial correlations and two-point cluster-correlation functions)45,46 in our low-dimensional representation of the microstructure evolution in order to better capture high- and low-order spatial complexity in these simulations. Second, algorithms such as PCA, or similarly independent component analysis and non-negative matrix factorization, can be viewed as matrix factorization methods. These algorithms implicitly assume that the data of interest lies on an embedded linear manifold within the higher-dimensional space describing the microstructure evolution. In the case of the spinodal decomposition exemplar problem studied here, this assumption is for the most part valid, given the linear regime seen in all the low-dimensional microstructure evolution trajectories presented in Fig. 2b. However, for microstructure evolution problems where these trajectories are no longer linear and/or convex, a more flexible and accurate low-dimensional representation of the (nonlinear) microstructure evolution can be obtained by using unsupervised algorithms learning the nonlinear embedding. Numerous algorithms have been developed for nonlinear dimensionality reduction to address this issue, including kernel PCA47, Laplacian eigenmaps48, ISOMAP49, locally linear embedding50, autoencoders51, or Gaussian process latent variable models52 for instance (for a more comprehensive survey of nonlinear dimensionality-reduction algorithms, see Lee and Verleysen53). In this case, a researcher would simply substitute PCA with one of these (nonlinear) manifold learning algorithms in the second step of our protocol illustrated in Fig. 1b.
The comparison between the TSMARS- and LSTM-trained surrogate models in the previous subsection demonstrated the ability of the LSTM neural network to successfully learn the time history of the microstructure evolution. At the root of this performance is the ability of the LSTM network to carry out sequence learning and store traces of past events from the microstructure evolutionary path. LSTMs are a subclass of the recurrent neural network (RNN) architecture in which the memory of past events is maintained through recurrent connections within the network. Alternative RNN options to the LSTM neural network, such as the gated recurrent unit54 or the independently recurrent neural network (IndRNN)55, may prove more efficient at training our surrogate model. Other methods for handling temporal information are also available, including memory networks56 or temporal convolutions57. Instead of RNN architectures, a promising avenue may be to use self-modifying/plastic neural networks58, which harness evolutionary algorithms to actively modulate the time-dependent learning process. Recurrent plastic networks have demonstrated a higher potential to be successfully trained to memorize and reconstruct sets of new, high-dimensional, time-dependent data compared to traditional (non-plastic) recurrent networks58,59. Such networks may be more efficient "engine" solutions to accelerate phase-field predictions for complex microstructure evolutionary paths, especially when dealing with very large computational domains and multi-field phase-field models, or for nonlinear, non-convex microstructural evolutionary paths. Ultimately, the best solution will depend on both the accuracy of the low-dimensional representation and the complexity of the phase-field problem at hand.
The machine-learning framework presented here is also not limited to the spinodal decomposition of a two-phase mixture, and it can also be applied more generally to other multi-phase and multi-field models, although this extension is nontrivial. In the case of a multi-phase model, there are numerous ways by which the free energy functional can be extended to multiple phases/components, and it is a well-studied topic in the phase-field community60,61. As it relates to this work, it is certainly possible to build surrogate models for multi-component systems based on some reasonable model output metrics (e.g., microstructure phase distribution in the current work), although the choice of this metric may not be trivial or straightforward. For example, in a purely interfacial-energy-driven grain-growth model or a grain-growth-via-Ostwald-ripening model, one may build a surrogate model by tracking each individual order parameter for every grain and the composition in the system, which may become prohibitive for many grains. However, one could reduce the number of grains to a single metric using the condition that ∑(ϕi) = 1 at every grid point and be left with a single order parameter (along with the composition parameter) defining grain size, distribution, and time evolution as a function of the input variables (e.g., mobilities). Thus the construction of surrogate models based on these metrics with two-point statistics and PCA becomes straightforward. Another possibility would be to calculate and concatenate all n-point spatial statistics deemed necessary to quantify each multi-phase microstructure, and then perform PCA on the concatenated autocorrelation vector. Note that in the present case study, we only needed one autocorrelation to fully characterize the two-phase mixture; more autocorrelations would be needed as the number of phases increases.
In the case of a multi-field phase-field model, in which there are multiple coupled field variables (or order parameters) describing different evolutionary phenomena8, one would essentially need to track the progression of each order parameter separately, along with the associated cross-correlation terms. However, the actual details in each step of the protocol are a little more convoluted than those presented here, as they will depend on (i) the accuracy of the low-dimensional representation and (ii) the complexity of the phase-field problem considered. We envision that for the low-dimensional representation step illustrated in Fig. 1b, the dimensionality-reduction technique to be used would depend on the type of field variable considered. Similarly, depending on the complexity (e.g., linear vs. nonlinear) of the low-dimensional trajectories of the different fields considered, we may be forced to use different history-dependent machine-learning approaches for each field separately in the step presented in Fig. 1c. An interesting alternative31 might be to use neural network techniques such as convolutional neural networks to learn and predict the homogenized, macroscopic free energy and phase fields arising in a multi-component system.
To summarize, we developed and used a machine-learning framework to efficiently and rapidly predict complex microstructural evolution problems. By employing LSTM neural networks to learn long-term patterns and solve history-dependent problems, we reformulate microstructural evolution problems as multivariate time-series problems. In this case, the neural network learns how to predict the microstructure evolution via the time evolution of the low-dimensional representation of the microstructure. Our results show that our machine-learned surrogate model can predict the spinodal evolution of a two-phase mixture in a fraction of a second with only a 5% loss in accuracy compared to high-fidelity phase-field simulations. We showed that surrogate-model trajectories can accelerate phase-field simulations when fed as an input to a classical high-fidelity phase-field model. Our framework opens a promising path forward to use accelerated phase-field simulations for discovering, understanding, and predicting processing–microstructure–performance relationships in problems where evolutionary mesoscale phenomena are critical, such as in materials design problems.
Phase-field model
The microstructure evolution for spinodal decomposition of a two-phase mixture62 specifically uses a single compositional order parameter \(c\left({\bf{x}},t\right)\) to describe the atomic fraction of solute. The evolution of c is given by the Cahn–Hilliard equation62 and is derived from an Onsager force–flux relationship63 such that
$$\frac{\partial c}{\partial t}=\nabla \cdot \left({M}_{c}\left(c\right)\nabla \left[{\omega }_{c}\left({c}^{3}-c\right)-{\kappa }_{c}{\nabla }^{2}c\right]\right),$$
where ωc is the energy barrier height between the equilibrium phases and κc is the gradient energy coefficient. The concentration-dependent Cahn–Hilliard mobility is taken to be Mc = s(c)MA + (1 − s(c))MB, where MA and MB are the mobilities of each phase, and \(s(c)=\frac{1}{4}(2-c){(1+c)}^{2}\) is a smooth interpolation function between the mobilities. The free energy of the system in Eq. (1) is expressed as a symmetric double-well potential with minima at c = ±1. For simplicity, both the mobility and the interfacial energy are isotropic. This model was implemented, verified, and validated for use in Sandia's in-house multiphysics phase-field modeling capability MEMPHIS8,39.
The values of the energy barrier height between the equilibrium phases and the gradient energy coefficient were assumed to be constant with ωc = κc = 1. In order to generate a diverse and large set of phase-field simulations exhibiting a rich variety of microstructural features, we varied the phase concentrations and phase mobilities parameters. For the phase concentration parameter, we decided to focus on the cases where the concentration of each phase satisfies ci ≥ 0.15, i = A or B. Note that we only need to specify one phase concentration, since cB = 1 − cA. For the phase mobility parameters, we chose to independently vary the mobility values over four orders of magnitude such that Mi ∈ [0.01, 100], i = A or B. We used a Latin Hypercube Sampling (LHS) statistical method to generate 5000 sets of parameters (cA, MA, MB) for training, and an additional 500 sets of parameters for validation.
All simulations were performed using a 2D square grid with a uniform mesh of 512 × 512 grid points, dimensionless spatial and temporal discretization parameters, a spatial discretization of Δx = Δy = 1, and a temporal discretization of Δt = 1 × 10−4. The composition field within the simulation domain was initially randomly populated by sampling a truncated Gaussian distribution between −1 and 1 with a standard deviation of 0.35 and means chosen to generate the desired nominal phase fraction distributions. Each simulation was run for 50,000,000 time steps with periodic boundary conditions applied to all sides of the domain. The microstructure was saved every 500,000 time steps in order to capture the evolution of the microstructure over 100 frames. Each simulation required approximately 120 minutes on 128 processors on our high-performance computer cluster. Illustrations of the variety of microstructure evolutions obtained when sampling various combinations of cA, MA, and MB are provided in Supplementary Note 2.
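To make this numerical setup concrete, a single explicit update of Eq. (1) on such a periodic grid can be sketched in NumPy as follows. This is a minimal illustration consistent with the equations above, not the MEMPHIS implementation; in particular, the explicit Euler step and the face-averaged mobility are our own discretization choices.

```python
import numpy as np

def laplacian(f, dx=1.0):
    """Five-point periodic Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def ch_step(c, MA, MB, dt=1e-4, dx=1.0, omega_c=1.0, kappa_c=1.0):
    """One explicit Euler step of the Cahn-Hilliard equation, Eq. (1)."""
    s = 0.25 * (2.0 - c) * (1.0 + c) ** 2                    # interpolant s(c)
    M = s * MA + (1.0 - s) * MB                              # mobility M_c(c)
    mu = omega_c * (c**3 - c) - kappa_c * laplacian(c, dx)   # chemical potential
    div = np.zeros_like(c)
    for ax in (0, 1):                                        # div(M grad(mu))
        Mp = 0.5 * (M + np.roll(M, -1, ax))                  # mobility at i+1/2 faces
        Mm = 0.5 * (M + np.roll(M, 1, ax))                   # mobility at i-1/2 faces
        div += (Mp * (np.roll(mu, -1, ax) - mu)
                - Mm * (mu - np.roll(mu, 1, ax))) / dx**2
    return c + dt * div

# Initial condition mimicking the setup above: truncated Gaussian noise.
rng = np.random.default_rng(0)
c = np.clip(rng.normal(0.0, 0.35, size=(512, 512)), -1.0, 1.0)
c = ch_step(c, MA=1.0, MB=0.01)    # one of the 50,000,000 steps per simulation
```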
Statistical representation of microstructures
We use the autocorrelation of the spatially dependent concentration field, \(c\left({\bf{x}},{t}_{i}\right)\), to statistically characterize the evolving microstructure. For a given microstructure, we use a compositional indicator function, \({I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)\), to identify the dominant phase A at a location x within the microstructure and tessellate the spatial domain at each time step such that,
$${I}^{{\rm{A}}}\left({\bf{x}},{t}_{i}\right)=\left\{\begin{array}{ll}1,&{\rm{if}}\,c({\bf{x}},{t}_{i})\,>\,0\\ 0,&{\rm{otherwise}}\end{array}\right..$$
Note that, in our case, the range of the field variable c is −1 ≤ c ≤ 1, thus motivating our use of 0 as the cutoff to "binarize" the microstructure data. The autocorrelation \({{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)\) is defined as the expectation of the product \({I}^{{\rm{A}}}\left({{\bf{x}}}_{1},{t}_{i}\right){I}^{{\rm{A}}}\left({{\bf{x}}}_{2},{t}_{i}\right)\), i.e.
$${{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({\bf{r}},{t}_{i}\right)={{\boldsymbol{S}}}_{2}^{\left({\rm{A}},{\rm{A}}\right)}\left({{\bf{x}}}_{1},{{\bf{x}}}_{2},{t}_{i}\right)=\langle {I}^{{\rm{A}}}\left({{\bf{x}}}_{1}\right){I}^{{\rm{A}}}\left({{\bf{x}}}_{2}\right)\rangle ,\,{\rm{with}}\,{\bf{r}}={{\bf{x}}}_{2}-{{\bf{x}}}_{1}.$$
In this form, the microstructure's autocorrelation resembles a convolution operator and can be efficiently computed using fast Fourier transform38 as applied to the finite-difference discretized scheme.
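Concretely, the FFT-based evaluation of the indicator function and autocorrelation defined above can be written as follows (a minimal sketch for a periodic field, consistent with the definitions in this section):

```python
import numpy as np

def autocorrelation(c):
    """Two-point autocorrelation of phase A for a periodic field c."""
    I = (c > 0.0).astype(float)                  # indicator of phase A
    F = np.fft.fftn(I)
    S2 = np.fft.ifftn(F * np.conj(F)).real / I.size
    return np.fft.fftshift(S2)                   # zero lag moved to the center

# The zero-lag (center) value recovers the volume fraction of phase A:
# S2 = autocorrelation(c); S2[256, 256] equals I.mean() on a 512 x 512 grid.
```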
The autocorrelations describing the microstructure evolution cannot be readily used in our accelerated framework since they have the same dimension as the high-fidelity phase-field simulations. Instead, we describe the microstructure evolutionary paths via a reduced-dimensional representation of the microstructure spatial autocorrelation by using PCA. PCA is a dimensionality-reduction method that rotationally transforms the data into a new, truncated set of orthonormal axes that captures the variance in the data set with the fewest number of dimensions64. The basis vectors of this space, φj are called principal components (PC), and the weights, αj, are called PC scores. The principal components are ordered by variance. The PCA representation \({{\boldsymbol{S}}}_{{\rm{pca}}}^{(k)}\) of the autocorrelation of phase A for a given microstructure is given by,
$${{\boldsymbol{S}}}_{{\rm{pca}}}^{(k)}\left({t}_{i}\right)=\mathop{\sum }\limits_{j = 1}^{Q}{\alpha }_{j}^{(k)}\left({t}_{i}\right){{\boldsymbol{\varphi }}}_{j}+\overline{{\boldsymbol{S}}},$$
where Q is the number of PC directions retained, and the term \(\overline{{\boldsymbol{S}}}\) represents the sample mean of the autocorrelations, \({{\boldsymbol{S}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}\), for \(k=1\ldots {N}_{{\rm{sim}}}\), with \({N}_{{\rm{sim}}}\) being the number of simulations in our training data set. In the construction of our model, PCA is only fitted to the training data. The testing data are projected into the fitted PCA space.
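In scikit-learn, this fit-on-train, project-test protocol might look as follows; each 2D autocorrelation is flattened into a row vector, the value of Q and all array shapes are illustrative, and random placeholder data on a smaller grid stand in for the actual autocorrelations to keep the example light.

```python
import numpy as np
from sklearn.decomposition import PCA

Q = 10                                      # PC directions retained
S2_train = np.random.rand(200, 64 * 64)     # flattened autocorrelations
S2_test = np.random.rand(50, 64 * 64)       # held-out realizations

pca = PCA(n_components=Q)
alpha_train = pca.fit_transform(S2_train)   # fit PCA on training data only
alpha_test = pca.transform(S2_test)         # project test data afterwards

# Reconstruction: S_pca = sum_j alpha_j * phi_j + mean(S2)
S2_approx = alpha_test @ pca.components_ + pca.mean_
print(alpha_train.shape, alpha_test.shape, S2_approx.shape)
```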
History-dependent machine-learning approaches
Our machine-learning approach establishes a functional relationship \({\mathcal{F}}\) between the low-dimensional representation descriptors of the microstructures (i.e. the principal component scores) at the current time, prior lagged values (ti−1…ti−n) of these microstructural descriptors, and other simulation parameters affecting the microstructure evolution process, such that each principal component score, \({\alpha }_{j}^{(k)}\), can be approximated as
$${\alpha }_{j}^{(k)}\left({t}_{i}\right)={\mathcal{F}}\left({\alpha }_{1}^{(k)}\left({t}_{i-1}\right),\ldots ,{\alpha }_{1}^{(k)}\left({t}_{i-n}\right),\ldots ,{\alpha }_{Q}^{(k)}\left({t}_{i-1}\right),\ldots ,{\alpha }_{Q}^{(k)}\left({t}_{i-n}\right),{c}_{{\rm{A}}}^{(k)},{M}_{{\rm{A}}}^{(k)},{M}_{{\rm{B}}}^{(k)}\right).$$
This functional relationship can rapidly (in a fraction of a second, as opposed to hours if we use our high-fidelity phase-field model in MEMPHIS) predict a broad class of microstructures as a function of simulation parameters with good accuracy. There are many different ways by which we can establish the desired functional relationship \({\mathcal{F}}\). In the present study, we compared two different history-dependent machine-learning techniques, namely TSMARS and the LSTM neural network. We chose the LSTM based on its superior performance.
LSTM networks are RNN architectures, wherein nodes are looped, allowing information to persist between consecutive time steps by tracking an internal (memory) state. Since the internal state is a function of all the past inputs, the prediction from the LSTM-trained surrogate model depends on the entire history of the microstructure. In contrast, instead of making predictions from a state that depends on the entire history, TSMARS is an autoregressive model which predicts the microstructure evolution using only the m most recent inputs of the microstructure history. Details of both algorithms are provided in Supplementary Notes 3 and 4.
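To make the LSTM variant concrete, here is a minimal PyTorch sketch of such a surrogate: the input at each lagged frame concatenates the Q PC scores with the three simulation parameters (cA, MA, MB), and the network predicts the scores at the next frame. The layer sizes, training loop and placeholder data are our own illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

Q, P = 10, 3   # retained PC scores; simulation parameters (c_A, M_A, M_B)

class PCScoreLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=Q + P, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, Q)

    def forward(self, scores, params):
        # scores: (batch, n_lags, Q); params: (batch, P), constant in time
        x = torch.cat([scores,
                       params[:, None, :].expand(-1, scores.size(1), -1)],
                      dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predicted scores at t_i

model = PCScoreLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()      # MSE over PC scores, as in the error metrics below

scores = torch.randn(32, 5, Q)                # 5 lagged frames (placeholder)
params = torch.randn(32, P)
target = torch.randn(32, Q)
for _ in range(3):                            # illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(scores, params), target)
    loss.backward()
    opt.step()
```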
Error metrics
The loss used to train our neural network is the mean squared error (MSE) in terms of the principal component scores \({\mathrm {MSE}}_{{\alpha }_{j}}\) which is defined as
$${\mathrm {MSE}}_{{\alpha }_{j}}=\frac{1}{KN}\mathop{\sum }\limits_{k = 1}^{K}\mathop{\sum }\limits_{i = 1}^{N}{\left({\hat{\alpha }}_{j}^{(k)}\left({t}_{i}\right)-{\tilde{\alpha }}_{j}^{(k)}\left({t}_{i}\right)\right)}^{2},$$
where N denotes the number of time frames for which the error is calculated, K denotes the total number of microstructure evolution realizations for which the error is being calculated (i.e. the number of microstructures in the training data set), and \({\alpha }_{j}^{(k)}\) is the jth principal component score of microstructure realization k at time ti. The hat, \(\hat{\alpha }\), and tilde, \(\tilde{\alpha }\), notations indicate the true and predicted values of the principal component score, respectively. The MSE scalar error metric for each principal component does not convey information about the accuracy of our surrogate model as a function of the frame being predicted. For this purpose, we calculated the ARE between the true (\(\hat{\ell }\)) and predicted (\(\tilde{\ell }\)) average feature size at each time frame ti and for each microstructure evolution realization k in our data set, such that
$${\mathrm {ARE}}^{(k)}({t}_{i})=\frac{| {\hat{\ell }}^{(k)}\left({t}_{i}\right)-{\tilde{\ell }}^{(k)}\left({t}_{i}\right)| }{{\hat{\ell }}^{(k)}\left({t}_{i}\right)}.$$
The average feature size corresponds to the first minimum of the radial average of the autocorrelation. For each microstructure realization k and for each time frame ti, we also calculated the Euclidean distance D(k) between the true and predicted autocorrelations, normalized by the Euclidean norm of the true autocorrelation, such that
$${D}^{(k)}({t}_{i})=\frac{{\sum }_{{\bf{r}}}{\left[{\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})-{\tilde{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\right]}^{2}}{{\sum }_{{\bf{r}}}{\left[{\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\right]}^{2}},$$
where \({\hat{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\) and \({\tilde{{\boldsymbol{S}}}}_{2}^{{\left({\rm{A}},{\rm{A}}\right)}^{(k)}}({\bf{r}},{t}_{i})\) denote the true (\(\hat{\,}\)) and predicted (\(\tilde{\,}\)) autocorrelations, respectively, at time frame ti. Note that by summing over all r vectors for which the autocorrelations are defined, this metric corresponds to the normalized Euclidean distance between the predicted and the true autocorrelations.
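Both frame-wise metrics are straightforward to implement; a short sketch (function names ours) following the two equations above:

```python
import numpy as np

def absolute_relative_error(l_true, l_pred):
    """ARE between true and predicted average feature size at one frame."""
    return abs(l_true - l_pred) / l_true

def normalized_distance(S2_true, S2_pred):
    """Normalized (squared) Euclidean distance D between the true and
    predicted autocorrelations, summed over all shift vectors r."""
    return np.sum((S2_true - S2_pred) ** 2) / np.sum(S2_true ** 2)
```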
The data that support the findings of this study are available from the corresponding author upon reasonable request.
The codes used to calculate the results of this study are available from the corresponding author upon reasonable request.
Krill, C. E. III. & Chen, L.-Q. Computer simulation of 3-D grain growth using a phase-field model. Acta Mater. 50, 3059–3075 (2002).
Chang, K., Chen, L.-Q., Krill, C. E. III. & Moelans, N. Effect of strong nonuniformity in grain boundary energy on 3-D grain growth behavior: a phase-field simulation study. Comput. Mater. Sci. 127, 67–77 (2017).
Miyoshi, E. et al. Large-scale phase-field simulation of three-dimensional isotropic grain growth in polycrystalline thin films. Model. Simul. Mater. Sci. Eng. 27, 054003 (2019).
Kim, S. G., Kim, W. T., Suzuki, T. & Ode, M. Phase-field modeling of eutectic solidification. J. Cryst. Growth 261, 135–158 (2004).
Hötzer, J. et al. Large scale phase-field simulations of directional ternary eutectic solidification. Acta Mater. 93, 194–204 (2015).
Zhao, Y., Zhang, B., Hou, H., Chen, W. & Wang, M. Phase-field simulation for the evolution of solid/liquid interface front in directional solidification process. J. Mater. Sci. Technol. 35, 1044–1052 (2019).
Stewart, J. A. & Spearot, D. E. Phase-field simulations of microstructure evolution during physical vapor deposition of single-phase thin films. Comput. Mater. Sci. 131, 170–177 (2017).
Stewart, J. & Dingreville, R. Microstructure morphology and concentration modulation of nanocomposite thin-films during simulated physical vapor deposition. Acta Mater. 188, 181–191 (2020).
Hu, S. Y. & Chen, L.-Q. Solute segregation and coherent nucleation and growth near a dislocation—a phase-field model integrating defect and phase microstructures. Acta Mater. 49, 463–472 (2001).
Chan, P. Y., Tsekenis, G., Dantzig, J., Dahmen, K. A. & Goldenfeld, N. Plasticity and dislocation dynamics in a phase field crystal model. Phys. Rev. Lett. 105, 015502 (2010).
Beyerlein, I. J. & Hunter, A. Understanding dislocation mechanics at the mesoscale using phase field dislocation dynamics. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 374, 20150166 (2016).
Campelo, F. & Hernández-Machado, A. Shape instabilities in vesicles: a phase-field model. Eur. Phys. J. Spec. Top. 143, 101–108 (2007).
Elliott, C. M. & Stinner, B. A surface phase field model for two-phase biological membranes. SIAM J. Appl. Math. 70, 2904–2928 (2010).
Aranson, I. S., Kalatsky, V. A. & Vinokur, V. M. Continuum field description of crack propagation. Phys. Rev. Lett. 85, 118–121 (2000).
Karma, A., Kessler, D. A. & Levine, H. Phase-field model of mode III dynamic fracture. Phys. Rev. Lett. 87, 045501 (2001).
Shimokawabe, T. et al. Peta-scale phase-field simulation for dendritic solidification on the TSUBAME 2.0 supercomputer. In Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis 1-11 (ACM, New York, NY, USA, 2011).
Hunter, A., Saied, F., Le, C. & Koslowski, M. Large-scale 3D phase field dislocation dynamics simulations on high-performance architectures. Int. J. High. Perform. Comput. Appl. 25, 223–235 (2011).
Vondrous, A., Selzer, M., Hötzer, J. & Nestler, B. Parallel computing for phase-field models. Int. J. High. Perform. Comput. Appl. 28, 61–72 (2014).
Yan, H., Wang, K. G. & Jones, J. E. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures. Model. Simul. Mater. Sci. Eng. 24, 055016 (2016).
Miyoshi, E. et al. Ultra-large-scale phase-field simulation study of ideal grain growth. npj Comput. Mater. 3, 25 (2017).
Shi, X., Huang, H., Cao, G. & Ma, X. Accelerating large-scale phase-field simulations with GPU. AIP Adv. 7, 105216 (2017).
Seol, D. et al. Computer simulation of spinodal decomposition in constrained films. Acta Mater. 51, 5173–5185 (2003).
Muranushi, T. Paraiso: an automated tuning framework for explicit solvers of partial differential equations. Comput. Sci. Discov. 5, 015003 (2012).
Du, Q. & Feng, X. The phase field method for geometric moving interfaces and their numerical approximations. In Bonito, A. & Nochetto, R. H. (eds), Handbook of Numerical Analysis, vol. 21, pp. 425–508 (Elsevier, 2020).
Brough, D. B., Kannan, A., Haaland, B., Bucknall, D. G. & Kalidindi, S. R. Extraction of process-structure evolution linkages from x-ray scattering measurements using dimensionality reduction and time series analysis. Integr. Mater. Manuf. Innov. 6, 147–159 (2017).
Pfeifer, S., Wodo, O. & Ganapathysubramanian, B. An optimization approach to identify processing pathways for achieving tailored thin film morphologies. Comput. Mater. Sci. 143, 486–496 (2018).
Latypov, M. I. et al. BisQue for 3D materials science in the cloud: microstructure–property linkages. Integr. Mater. Manuf. Innov. 8, 52–65 (2019).
Teichert, G. H. & Garikipati, K. Machine learning materials physics: surrogate optimization and multi-fidelity algorithms predict precipitate morphology in an alternative to phase field dynamics. Comput. Methods Appl. Mech. Eng. 344, 666–693 (2019).
Yabansu, Y. C., Iskakov, A., Kapustina, A., Rajagopalan, S. & Kalidindi, S. R. Application of gaussian process regression models for capturing the evolution of microstructure statistics in aging of nickel-based superalloys. Acta Mater. 178, 45–58 (2019).
Herman, E., Stewart, J. A. & Dingreville, R. A data-driven surrogate model to rapidly predict microstructure morphology during physical vapor deposition. Appl. Math. Model. 88, 589–603 (2020).
Zhan, X. & Garikipati, K. Machine learning materials physics: multi-resolution neural networks learn the free energy and nonlinear elastic response of evolving microstructures. Comput. Methods Appl. Mech. Eng. 372, 113362 (2020).
Lewis, P. A. & Ray, B. K. Modeling long-range dependence, nonlinearity, and periodic phenomena in sea surface temperatures using TSMARS. J. Am. Stat. Assoc. 92, 881–893 (1997).
Hochreiter, S. & Schmidhuber, J. Long short-term memory. Neural Comput. 9, 1735–1780 (1997).
Zaytar, M. A. & El Amrani, C. Sequence to sequence weather forecasting with long short-term memory recurrent neural networks. Int. J. Comput. Appl. 143, 7–11 (2016).
Zhao, Z., Chen, W., Wu, X., Chen, P. C. & Liu, J. LSTM network: a deep learning approach for short-term traffic forecast. IET Intell. Transp. Syst. 11, 68–75 (2017).
Vlachas, P. R., Byeon, W., Wan, Z. Y., Sapsis, T. P. & Koumoutsakos, P. Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks. Proc. R. Soc. A Math. Phys. Eng. Sci. 474, 20170844 (2018).
Yang, G., Dong, B., Gu, B., Zhuang, J. & Ersoy, O. Gerchberg–Saxton and Yang–Gu algorithms for phase retrieval in a nonunitary transform system: a comparison. Appl. Opt. 33, 209–218 (1994).
Fullwood, D. T., Niezgoda, S. R. & Kalidindi, S. R. Microstructure reconstructions from 2-point statistics using phase-recovery algorithms. Acta Mater. 56, 942–948 (2008).
Dingreville, R., Stewart, J. A. & Chen, E. Y. Benchmark Problems for the Mesoscale Multiphysics Phase Field Simulator (Memphis). Tech. Rep., Albuquerque, NM (United States) (2020).
Torquato, S. Random Heterogeneous Materials: Microstructure and Macroscopic Properties (Springer-Verlag, New York, 2002).
Fullwood, D. T., Niezgoda, S. R., Adams, B. L. & Kalidindi, S. R. Microstructure sensitive design for performance optimization. Prog. Mater. Sci. 55, 477–562 (2010).
Kalidindi, S. R. Hierarchical Materials Informatics: Novel Analytics for Materials Data (Elsevier, 2015).
Niezgoda, S. R., Kanjarla, A. K. & Kalidindi, S. Novel microstructure quantification framework for databasing, visualization, and analysis of microstructure data. Integr. Mater. 2, 54–80 (2013).
Gupta, A., Cecen, A., Goyal, S., Singh, A. K. & Kalidindi, S. R. Structure–property linkages using a data science approach: application to a non-metallic inclusion/steel composite system. Acta Mater. 91, 239–254 (2015).
Jiao, Y., Stillinger, F. & Torquato, S. Modeling heterogeneous materials via two-point correlation functions: basic principles. Phys. Rev. E 76, 031110 (2007).
Jiao, Y., Stillinger, F. & Torquato, S. Modeling heterogeneous materials via two-point correlation functions. II. Algorithmic details and applications. Phys. Rev. E 77, 031135 (2008).
Schölkopf, B., Smola, A. & Müller, K.-R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1998).
Belkin, M. & Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, 585–591 (Vancouver, BC, Canada, 2002).
Tenenbaum, J. B., De Silva, V. & Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000).
Roweis, S. T. & Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000).
Hinton, G. E. & Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science 313, 504–507 (2006).
Lawrence, N. Probabilistic non-linear principal component analysis with gaussian process latent variable models. J. Mach. Learn. Res. 6, 1783–1816 (2005).
Lee, J. A. & Verleysen, M. Nonlinear Dimensionality Reduction (Springer Science & Business Media, 2007).
Cho, K. et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. Preprint at https://arxiv.org/abs/1406.1078 (2014).
Li, S., Li, W., Cook, C., Zhu, C. & Gao, Y. Independently recurrent neural network (IndRNN): building a longer and deeper RNN. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5457–5466 (Salt Lake City, UT, USA, 2018).
Sukhbaatar, S., Weston, J., Fergus, R. et al. End-to-end memory networks. In Advances in Neural Information Processing Systems 2440–2448 (Montreal, QC, Canada, 2015).
Varol, G., Laptev, I. & Schmid, C. Long-term temporal convolutions for action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 40, 1510–1517 (2017).
Stanley, K. O., Clune, J., Lehman, J. & Miikkulainen, R. Designing neural networks through neuroevolution. Nat. Mach. Intell. 1, 24–35 (2019).
Soltoggio, A., Stanley, K. O. & Risi, S. Born to learn: the inspiration, progress, and future of evolved plastic artificial neural networks. Neural Netw. 108, 48–67 (2018).
Nestler, B. & Wheeler, A. A. A multi-phase-field model of eutectic and peritectic alloys: numerical simulation of growth structures. Phys. D 138, 114–133 (2000).
Zhang, L. & Steinbach, I. Phase-field model with finite interface dissipation: extension to multi-component multi-phase alloys. Acta Mater. 60, 2702–2710 (2012).
Chen, L.-Q. Phase-field models for microstructure evolution. Annu. Rev. Mater. Res. 32, 113–140 (2002).
Balluffi, R. W., Allen, S. M. & Carter, W. C. Kinetics of Materials (Wiley, 2005).
Suh, C., Rajagopalan, A., Li, X. & Rajan, K. The application of principal component analysis to materials science data. Data Sci. J. 51, 19–26 (2002).
This work was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the U.S. Department of Energy. This work was also supported by a Laboratory Directed Research and Development (LDRD) program at Sandia National Laboratories. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under Contract No. DE-NA0003525. The views expressed in this article do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
Center for Integrated Nanotechnologies, Sandia National Laboratories, Albuquerque, NM, 87185, USA
David Montes de Oca Zapiain & Rémi Dingreville
Energetic Materials Dynamic and Reactive Science, Sandia National Laboratories, Albuquerque, NM, 87185, USA
James A. Stewart
David Montes de Oca Zapiain
Rémi Dingreville
R.D., J.A.S., D.M.d.O.Z. conceived the idea; J.A.S. performed the phase-field simulations; D.M.d.O.Z. trained the LSTM model; R.D. supervised the work. All authors contributed to the discussion and writing of the paper.
Correspondence to Rémi Dingreville.
The authors declare no competing interests.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Montes de Oca Zapiain, D., Stewart, J.A. & Dingreville, R. Accelerating phase-field-based microstructure evolution predictions via surrogate models trained by machine learning methods. npj Comput Mater 7, 3 (2021). https://doi.org/10.1038/s41524-020-00471-8
Seminar Calendar for Graph Theory and Combinatorics Seminar events for the year of Friday, September 14, 2018.
3:00 pm in 241 Altgeld Hall,Tuesday, January 23, 2018
Graph Theory and Combinatorics Seminar
Topological version of Pach's overlap theorem
Boris Bukh (Carnegie Mellon Math)
Abstract: Given a large point set in the plane, where is its 'center'? Unlike the 1-dimensional case, there is not a single answer. We will discuss some of these answers, with the focus on the result of Pach. He showed that one can always find three large subsets A,B,C and a 'central point' p such that every A-B-C triangle contains p. We will then explain the topological generalization of this and related results, where for example the triangle edges are no longer assumed to be straight. Based on a joint work with Alfredo Hubard.
An improved upper bound for the (5,5)-coloring number of K_n
Emily Heath (Illinois Math)
Abstract: A $(p,q)$-coloring of a graph $G$ is an edge-coloring of $G$ in which each $p$-clique contains edges of at least $q$ distinct colors. We denote the minimum number of colors needed for a $(p,q)$-coloring of the complete graph $K_n$ by $f(n,p,q)$. In this talk, we will describe an explicit $(5,5)$-coloring of $K_n$ which proves that $f(n,5,5)\leq n^{1/3+o(1)}$ as $n\rightarrow\infty$, improving the best known probabilistic upper bound of $O(n^{1/2})$ given by Erdős and Gyárfás. This is joint work with Alex Cameron.
3:00 pm in 241 Altgeld Hall,Tuesday, February 6, 2018
The Slow-coloring Game: Online Sum-Paintability
Douglas B. West (Illinois Math and Zhejiang Normal University)
Abstract: The slow-coloring game is played by Lister and Painter on a graph $G$. Initially, all vertices of $G$ are uncolored. In each round, Lister marks a non-empty set $M$ of uncolored vertices, and Painter colors a subset of $M$ that is independent in $G$. The game ends when all vertices are colored. The score of the game is the sum of the sizes of all sets marked by Lister. The goal of Painter is to minimize the score, while Lister tries to maximize it; the score under optimal play is the cost. A greedy strategy for Painter keeps the cost of $G$ to at most $\chi(G)n$ when $G$ has $n$ vertices, which is asymptotically sharp for Turán graphs. On various classes Painter can do better. For $n$-vertex trees the maximum cost is $\lfloor 3n/2\rfloor$. There is a linear-time algorithm and inductive formula to compute the cost on trees, and we characterize the extremal $n$-vertex trees. Also, Painter can keep the cost to at most $(1+3k/4)n$ when $G$ is $k$-degenerate, $7n/3$ when $G$ is outerplanar, and $3.9857n$ when $G$ is planar. These results involve various subsets of Grzegorz Gutowski, Tomasz Krawczyk, Thomas Mahoney, Gregory J. Puleo, Hehui Wu, Michal Zajac, and Xuding Zhu.
3:00 pm in 241 Altgeld Hall,Tuesday, February 13, 2018
Proportional Choosability: A New List Analogue of Equitable Coloring
Jeffrey Mudrock (Department of Applied Mathematics, Illinois Institute of Technology)
Abstract: The study of equitable coloring began with a conjecture of Erdős in 1964, and it was formally introduced by Meyer in 1973. An equitable $k$-coloring of a graph $G$ is a proper $k$-coloring of $G$ such that the sizes of the color classes differ by at most one. In 2003 Kostochka, Pelsmajer, and West introduced a list analogue of equitable coloring, called equitable choosability. Specifically, given lists of available colors of size $k$ at each vertex of a graph $G$, a proper list coloring is equitable if each color appears on at most $\lceil |V(G)|/k \rceil$ vertices. Graph $G$ is equitably $k$-choosable if such a coloring exists whenever all the lists have size $k$. In this talk we introduce a new list analogue of equitable coloring which we call proportional choosability. For this new notion, the number of times we use a color must be proportional to the number of lists in which the color appears. Proportional $k$-choosability implies both equitable $k$-choosability and equitable $k$-colorability, and the graph property of being proportionally $k$-choosable is monotone. We will discuss proportional choosability of graphs with small order, completely characterize proportionally 2-choosable graphs, and illustrate some of the techniques we have used to prove results. This is joint work with Hemanshu Kaul, Michael Pelsmajer, and Benjamin Reiniger.
Fractional DP-Colorings
Anton Bernshteyn (Illinois Math)
Abstract: DP-coloring is a generalization of list coloring introduced by Dvořák and Postle in 2015. This talk will be about a fractional version of DP-coloring. There is a natural way to define fractional list coloring; however, Alon, Tuza, and Voigt proved that the fractional list chromatic number of any graph coincides with its ordinary fractional chromatic number. This result does not extend to fractional DP-coloring: The difference between the fractional DP-chromatic number and the ordinary fractional chromatic number of a graph can be arbitrarily large. A somewhat surprising fact about DP-coloring is that the DP-chromatic number of a triangle-free regular graph is essentially determined by its degree. It turns out that for fractional DP-coloring, this phenomenon extends to a much wider class of graphs (including all bipartite graphs, for example). This is joint work with Alexandr Kostochka (UIUC) and Xuding Zhu (Zhejiang Normal University).
3:00 pm in 110 Speech and Hearing Building,Tuesday, February 27, 2018
Extending edge-colorings of complete hypergraphs into regular colorings
Amin Bahmanian (Illinois State Math)
Abstract: Let $({X \atop h})$ be the collection of all $h$-subsets of an $n$-set $X\supseteq Y$. Given a coloring (partition) of a set $S\subseteq ({X \atop h})$, we are interested in finding conditions under which this coloring is extendible to a coloring of $({X \atop h})$ so that the number of times each element of $X$ appears in each color class (all sets of the same color) is the same number $r$. The case $S=\emptyset, r=1$ was studied by Sylvester in the 19th century, and remained open until the 1970s. The case $h=2,r=1$ is extensively studied in the literature and is closely related to completing partial symmetric Latin squares. An $r$-factorization is a coloring of $({[n] \atop h})$ so that the number of times each element of $[n]$ appears in each color class is $r$. Let $\chi (m,h,r)$ be the smallest $n$ such that any "partial" $r$-factorization of $({[m] \atop h})$ satisfying $r \mid ({{n-1} \atop {h-1}})$, $h \mid rn$ can be extended to an $r$-factorization of $({[n] \atop h})$. We show that $2m\leq \chi (m,4,r)\leq 4.847323m$, and $2m\leq \chi (m,5,r)\leq 6.285214m$.
3:00 pm in Talbot 104,Tuesday, March 6, 2018
Some important combinatorial sequences
Zoltan Furedi (Renyi Institute of Mathematics, Budapest, Hungary and UIUC)
Abstract: The sequence $a(1), a(2), a(3), \dots$ of reals is called subadditive if $a(n+m) \leq a(n)+ a(m)$ for all integers $n,m$. Fekete's lemma states that the sequence $\{a(n)/n\}$ has a limit (it could be negative infinity). Let $f(n)$ be a non-negative, non-decreasing sequence. De Bruijn and Erdős (1952) called the sequence $(a(n))$ nearly $f$-subadditive if $a(n+m) \leq a(n)+ a(m) + f(n+m)$ holds for all $n\leq m \leq 2n$. They showed that if the error term $f$ is small, i.e. $\sum_{ n=1}^{\infty} f(n)/n^2$ is finite, then the limit $a(n)/n$ still exists. Their result is listed in the Bollobás–Riordan book (2006) as one of the most useful tools in Percolation Theory. Among other things we show that the de Bruijn–Erdős condition for the error term in their improvement of Fekete's Lemma is not only sufficient but also necessary in the following strong sense. If $\sum_{ n=1}^{\infty} f(n)/n^2 =\infty$, then there exists a nearly $f$-subadditive sequence $(b(n))$, such that the sequence of slopes $(b(n)/n)$ takes every rational number. This is a joint work with I. Ruzsa.
3:00 pm in 241 Altgeld Hall,Tuesday, March 13, 2018
On large bipartite subgraphs in dense H-free graphs
Bernard Lidicky (Iowa state University)
Abstract: A long-standing conjecture of Erdős states that any n-vertex triangle-free graph can be made bipartite by deleting at most n^2/25 edges. In this talk, we study how many edges need to be removed from an H-free graph for a general graph H. By generalizing a result of Sudakov for 4-colorable graphs H, we show that if H is 6-colorable then G can be made bipartite by deleting at most 4n^2/25 edges. Moreover, this amount is needed only in the case G is a complete 5-partite graph with balanced parts. As one of the steps in the proof, we use a strengthening of a result of Füredi on a stable version of Turán's theorem. This is a joint work with P. Hu, T. Martins-Lopez, S. Norin and J. Volec.
A containers-type theorem for algebraic hypergraphs
Abstract: An active avenue of research in modern combinatorics is extending classical extremal results to the so-called sparse random setting. The basic hope is that certain properties that a given "dense" structure is known to enjoy should be inherited by a randomly chosen "sparse" substructure. One of the powerful general approaches for proving such results is the hypergraph containers method, developed independently by Balogh, Morris, and Samotij and Saxton and Thomason. Another major line of study is establishing combinatorial results for algebraic or, more generally, definable structures. In this talk, we will combine the two directions and consider the following problem: Given a "dense" algebraically defined hypergraph, can we show that the subhypergraph induced by a generic low-dimensional algebraic set of vertices is also fairly "dense"? This is joint work with Michelle Delcourt (University of Birmingham) and Anush Tserunyan (UIUC).
3:00 pm in 241 Altgeld Hall,Tuesday, April 3, 2018
Packing chromatic number of subdivisions of cubic graphs
Xujun Liu (Illinois Math)
Abstract: A packing $k$-coloring of a graph $G$ is a partition of $V(G)$ into sets $V_1,\ldots,V_k$ such that for each $1\leq i\leq k$ the distance between any two distinct $x,y\in V_i$ is at least $i+1$. The packing chromatic number, $\chi_p(G)$, of a graph $G$ is the minimum $k$ such that $G$ has a packing $k$-coloring. For a graph $G$, let $D(G)$ denote the graph obtained from $G$ by subdividing every edge. The questions on the value of the maximum of $\chi_p(G)$ and of $\chi_p(D(G))$ over the class of subcubic graphs $G$ appear in several papers. Gastineau and Togni asked whether $\chi_p(D(G))\leq 5$ for any subcubic $G$, and later Brešar, Klavžar, Rall and Wash conjectured this, but no upper bound was proved. Recently the authors proved that $\chi_p(G)$ is not bounded in the class of subcubic graphs $G$. In contrast, in this paper we show that $\chi_p(D(G))$ is bounded in this class, and does not exceed $8$. Joint work with József Balogh and Alexandr Kostochka.
3:00 pm in 241 Altgeld Hall,Tuesday, April 10, 2018
Directed hypergraphs
Gyorgy Turan (UIC Math)
Abstract: Directed graphs can be generalized to directed hypergraphs in different ways. The version where hyperedges can have several vertices in their tail, but only a single head, comes up in many contexts, such as reasoning with implications and closure operators. Paths are defined using a forward chaining process. We discuss some extremal, algorithmic and probabilistic aspects of directed hypergraphs and mention several open problems.
The Erdős–Gallai theorem for cycles in hypergraphs
Ruth Luo (UIUC Math)
Abstract: The Erdős–Gallai theorem states that if a graph $G$ on $n$ vertices has no cycle of length $k$ or longer, then $e(G) \leq (k-1)(n-1)/2$. We present a hypergraph analogue of this theorem. A Berge cycle of length $\ell$ in an $r$-uniform hypergraph is a set of $\ell$ hyperedges $\{e_1, ..., e_\ell\}$ and $\ell$ vertices $\{v_1, ..., v_\ell\}$ such that hyperedge $e_i$ contains the vertices $v_i$ and $v_{i+1}$. We show that for $r \geq k+1$, if $H$ is an $r$-uniform hypergraph on $n$ vertices with no Berge cycle of length $k$ or longer, then $|H| \leq (k-1)(n-1)/r$. This is joint work with Alexandr Kostochka.
Colorings of signed graphs - a short survey
Andre Raspaud (LaBRI, Bordeaux University)
Abstract: The signed graphs and the balanced signed graphs were introduced by Harary in 1953. But all the notions can be found in the book of König (Theorie der endlichen und unendlichen Graphen, 1935). An important, fundamental and prolific work on signed graphs was done by Zaslavsky in 1982. In this talk we are interested in colorings of signed graphs. We will give a short survey of the different existing definitions and the recent results on the corresponding chromatic numbers. We will also present new results obtained by using the DP-coloring.
3:00 pm in 241 Altgeld Hall,Tuesday, May 1, 2018
Planar graphs without adjacent cycles of length at most 8 are 3-choosable
Xiangwen Li (Central China Normal University, Math)
Abstract: DP-coloring as a generalization of list coloring was introduced by Dvořák and Postle in 2017, who proved that every planar graph without cycles of length 4 to 8 is 3-choosable, which was conjectured by Borodin et al. in 2007. In this paper, we prove that planar graphs without adjacent cycles of length at most 8 are 3-choosable, which extends this result of Dvořák and Postle.
2:00 pm in 243 Altgeld Hall,Tuesday, August 28, 2018
The method of hypergraph containers
Jozsef Balogh (Illinois Math)
Abstract: We will give a gentle introduction to a recently-developed technique for bounding the number (and controlling the typical structure) of finite objects with forbidden substructures. This technique exploits a subtle clustering phenomenon exhibited by the independent sets of uniform hypergraphs whose edges are sufficiently evenly distributed; more precisely, it provides a relatively small family of 'containers' for the independent sets, each of which contains few edges. In the first half of the talk we will attempt to convey a general high-level overview of the method; in the second, we will describe a few illustrative applications in areas such as extremal graph theory, Ramsey theory, additive combinatorics, and discrete geometry. Note that it is a "repetition of the ICM 2018 talk", hence it will have overlap with several previous (seminar) talks, and no new result will be presented.
2:00 pm in 243 Altgeld Hall,Tuesday, September 4, 2018
Cut-edges and regular factors in regular graphs of odd degree
Dara Zirlin (Illinois Math)
Abstract: Previously, Hanson, Loten, and Toft proved that every $(2r+1)$-regular graph with at most $2r$ cut-edges has a 2-factor. We generalize their result by proving for $k \leq (2r+1)/3$ that every $(2r+1)$-regular graph with at most $2r-3(k-1)$ cut-edges has a $2k$-factor. We show that the restriction on $k$ and the restriction on the number of cut-edges are sharp and characterize the graphs that have exactly $2r-3(k-1)+1$ cut-edges but no $2k$-factor. This is joint work with Alexandr Kostochka, André Raspaud, Bjarne Toft, and Douglas West.
2:00 pm in 243 Altgeld Hall,Tuesday, September 11, 2018
Polynomial-Time Approximation Scheme for the Genus of Dense Graphs
Yifan Jing (Illinois Math)
Abstract: The graph genus problem is a fundamental problem in topological graph theory and theoretical computer science. In this talk, we provide an Efficient Polynomial-Time Approximation Scheme (EPTAS) for approximating the genus (and non-orientable genus) of dense graphs. The running time of the algorithm is quadratic. Moreover, we extend the algorithm to output an embedding (rotation system), whose genus is arbitrarily close to the minimum genus, and the expected running time is also quadratic. This is joint work with Bojan Mohar.
Typical structure of Gallai colorings
Lina Li (Illinois Math)
Abstract: An edge coloring of a graph G is a Gallai coloring if it contains no rainbow triangle. As with many other extremal problems, it is interesting to study how many Gallai colorings there are and what the typical structure of a Gallai coloring is. We show that almost all the Gallai r-colorings of complete graphs are 2-colorings. We also study Gallai 3-colorings of non-complete graphs. This is joint work with Jozsef Balogh.
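For small complete graphs, the objects being counted can be enumerated by brute force; the sketch below counts rainbow-triangle-free edge colorings of K_n (exponential in the number of edges, so only tiny n are feasible).

```python
from itertools import combinations, product

def count_gallai_colorings(n, r=3):
    """Count edge r-colorings of K_n containing no rainbow triangle."""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    count = 0
    for coloring in product(range(r), repeat=len(edges)):
        col = dict(zip(edges, coloring))
        if all(len({col[(a, b)], col[(a, c)], col[(b, c)]}) < 3
               for a, b, c in triangles):
            count += 1
    return count

print(count_gallai_colorings(4))   # checks all 3^6 = 729 colorings of K_4
```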
Generalized Turán problems for graphs and hypergraphs
Ruth Luo (Illinois Math)
Abstract: We will talk about a generalization of the Turán problem for hypergraphs: given a graph $F$, what is the maximum number of hyperedges an $r$-uniform $n$-vertex Berge $F$-free hypergraph can have? In particular, we will discuss tools used to reduce the hypergraph problem to problems for graphs. Finally, I will present some recent results for graphs without long Berge cycles. This is joint work with (different subsets of) Zoltan Furedi and Alexandr Kostochka.
2:00 pm in 243 Altgeld Hall,Tuesday, October 2, 2018
Long paths and large matchings in ordered and convex geometric hypergraphs
Alexandr Kostochka (Illinois Math)
Abstract: An ordered $r$-graph is an $r$-uniform hypergraph whose vertex set is linearly ordered, and a convex geometric $r$-graph (cg $r$-graph, for short) is an $r$-uniform hypergraph whose vertex set is cyclically ordered. Extremal problems for ordered and cg graphs have rich history.
We consider extremal problems for two types of paths and matchings in ordered $r$-graphs and cg $r$-graphs: zigzag and crossing paths and matchings. We prove bounds on Turán numbers for these configurations; some of them are exact. Our theorem on zigzag paths in cg $r$-graphs is a common generalization of early results of Hopf and Pannwitz, Sutherland, Kupitz and Perles for cg graphs. It also yields the current best bound for the extremal problem for tight paths in uniform hypergraphs. There are interesting similarities and differences between the ordered setting and the convex geometric setting.
This is joint work with Zoltán Füredi, Tao Jiang, Dhruv Mubayi and Jacques Verstraëte.
Independent sets in algebraic hypergraphs
Anush Tserunyan (Illinois Math)
Abstract: A modern trend in extremal combinatorics is extending classical results from the dense setting (e.g. Szemerédi's theorem) to the sparse random setting. More precisely, one shows that a property of a given "dense" structure is inherited by a randomly chosen "sparse" substructure. A recent breakthrough tool for proving such statements is the Balogh–Morris–Samotij and Saxton–Thomason hypergraph containers method, which bounds the number of independent sets in homogeneously dense finite hypergraphs, thus implying that a random sparse subset is not independent. Another trend in combinatorics is proving combinatorial properties for algebraic, or more generally, model theoretically definable structures. Jointly with A. Bernshteyn and M. Delcourt, we combine these trends, establishing a containers-type theorem for hypergraphs definable over an algebraically closed field: if such a hypergraph is "dense", then Zariski-generic low-dimensional sets of vertices induce a relatively "dense" subhypergraph (in particular, they are not independent).
2:00 pm in 243 Altgeld Hall,Tuesday, October 16, 2018
Sampling bipartite degree sequence realizations - the Markov chain approach
Péter L. Erdős (A. Rényi Institute of Mathematics)
Abstract: How to analyze real life networks? There are myriads of them and usually experiments cannot be performed directly. Instead, scientists define models, fix parameters and imagine the dynamics of evolution.
Then, they build synthetic networks on this basis (one, several, all) and they want to sample them. However, there are far too many such networks. Therefore, typically, some probabilistic method is used for sampling.
We will survey one such approach, the Markov Chain Monte Carlo method, to sample realizations of given degree sequences. Some new results will be discussed.
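To make the sampling approach concrete, here is a minimal sketch of the standard edge-swap ('switch') Markov chain on bipartite graphs: repeatedly pick two edges uniformly at random and swap their endpoints whenever the swap keeps the graph simple; every accepted move preserves both degree sequences. This illustrates the generic chain only, not the specific algorithms or mixing-time results of the talk.

```python
import random

def switch_step(edges, edge_set):
    """One switch-chain step: try to replace edges (u1,v1),(u2,v2) with
    (u1,v2),(u2,v1); degrees on both sides of the bipartition are preserved."""
    i, j = random.sample(range(len(edges)), 2)
    (u1, v1), (u2, v2) = edges[i], edges[j]
    if u1 == u2 or v1 == v2:
        return                             # swap would be a no-op
    if (u1, v2) in edge_set or (u2, v1) in edge_set:
        return                             # swap would create a multi-edge
    edge_set.difference_update({(u1, v1), (u2, v2)})
    edge_set.update({(u1, v2), (u2, v1)})
    edges[i], edges[j] = (u1, v2), (u2, v1)

edges = [(0, 0), (0, 1), (1, 0), (1, 2)]   # any realization of the sequence
edge_set = set(edges)
for _ in range(10000):                     # run the chain to (roughly) mix
    switch_step(edges, edge_set)
print(edges)
```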
Hamiltonian cycles in tough P₂∪P₃-free graphs
Songling Shan (Illinois State Math)
Abstract: Let $t>0$ be a real number and $G$ be a graph. We say $G$ is $t$-tough if for every cutset $S$ of $G$, the ratio of $|S|$ to the number of components of $G-S$ is at least $t$. Determining toughness is an NP-hard problem for arbitrary graphs. The Toughness Conjecture of Chvátal, stating that there exists a constant $t_0$ such that every $t_0$-tough graph with at least three vertices is hamiltonian, is still open in general.
A graph is called $P_2\cup P_3$-free if it does not contain any induced subgraph isomorphic to $P_2\cup P_3$, the union of two vertex-disjoint paths of order 2 and 3, respectively. We show that every 15-tough $P_2\cup P_3$-free graph with at least three vertices is hamiltonian.
The sum-product problem
George Shakan (Illinois Math)
Abstract: The Erdős–Szemerédi sum-product problem asserts that for any $A$ in the integers either $|A+A|$ or $|AA|$ is at least $|A|^2$ up to an arbitrarily small power of $|A|$. In this talk, we'll discuss recent progress and further questions.
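A few lines of Python illustrate the phenomenon behind the conjecture: an arithmetic progression has a small sumset but a large product set, a geometric progression the reverse, and in both cases the larger of the two quantities is roughly quadratic in |A|.

```python
def sumset(A):
    return {a + b for a in A for b in A}

def productset(A):
    return {a * b for a in A for b in A}

n = 50
AP = set(range(1, n + 1))            # arithmetic progression: |A+A| = 2n - 1
GP = {2 ** k for k in range(n)}      # geometric progression: |AA| = 2n - 1
for name, A in (("AP", AP), ("GP", GP)):
    print(name, len(A), len(sumset(A)), len(productset(A)))
```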
2:00 pm in 243 Altgeld Hall,Tuesday, November 6, 2018
Paths and Arctic Curves: the Tangent Method at Work
Philippe R. Di Francesco (Illinois Math)
Abstract: Tiling problems of finite domains of the plane with a fixed set of tiles can often be rephrased in terms of non-intersecting lattice paths. For large scaled domains, random tilings can exhibit a sharp separation between "frozen" regions tiled regularly and "liquid" regions tiled wildly. This is the arctic phenomenon. The separating curve is called "arctic curve".
We present a new technique, called the tangent method, to derive the arctic curve using only boundary properties of the set of paths describing the tilings. We apply this technique to the celebrated domino tiling problem of the Aztec diamond, and to the rhombus tiling of certain domains with arbitrary boundary shape. We perform exact enumeration using the Gessel-Viennot theorem for non-intersecting lattice paths, and asymptotic analysis. This leads to compact expressions for arctic curves and their q-deformations in the presence of area-dependent weights.
(Based on joint works with M.F. Lapa and E. Guitter.)
2:00 pm in 243 Altgeld Hall,Tuesday, November 13, 2018
Special four cycles in triple systems
Zoltan Furedi (Alfréd Rényi Institute of Mathematics, Budapest, Hungary)
Abstract: A special four-cycle $F$ in a triple system consists of four triples inducing a $C_4$. This means that $F$ has four special vertices $v_1,v_2,v_3,v_4$ and four triples in the form $v_iv_{i+1}w_i$ where the $w_j$'s are not necessarily distinct but disjoint from $\{v_1,v_2,v_3,v_4\}$ (indices are understood $\pmod 4$). There are seven non-isomorphic special four-cycles, their family is denoted by $\cal{F}$. Our main result implies that the Turán number ${\rm ex}(n,{\mathcal{F}})=\Theta(n^{3/2})$. In fact, we prove more, ${\rm ex}(n,\{F_1,F_2,F_3\})=\Theta(n^{3/2})$, where the $F_i$'s are specific members of $\mathcal{F}$.
We also study further generalizations, many cases remain unsolved.
How many edges guarantee a monochromatic ordered path?
Mikhail Lavrov (Illinois Math)
Abstract: An ordered graph is a simple graph with an ordering on its vertices, in which an ordered path $P_n$ is a path on $n$ edges whose vertices are in increasing order. In this talk, we will investigate the ordered size Ramsey number $\tilde r(P_r, P_s)$. This is the minimum $m$ for which some $m$-edge graph $H$ exists, such that every red-blue coloring of some the edges of $H$ contains either a red $P_r$ or a blue $P_s$.
I will show upper and lower bounds on $\tilde r(P_r, P_s)$ which are tight up to a polylogarithmic factor, and discuss connections to other Ramsey numbers for paths.
This is joint work with József Balogh, Felix Clemen, and Emily Heath.
2:00 pm in 243 Altgeld Hall,Tuesday, December 4, 2018
On Vertex-Disjoint Chorded Cycles
Derrek Yager (Illinois Math)
Abstract: In 1963, Corrádi and Hajnal proved that for all \( k \geq 1 \), any graph with \( |G| \geq 3k\) and \( \delta(G) \geq 2k \) has \( k \) vertex-disjoint cycles. In 2010, Chiba, Fujita, Gao, and Li proved that for all \( k \geq 1 \), any graph with \( |G| \geq 4k \) and minimum Ore-degree at least \( 6k - 1 \) contains \( k \) vertex-disjoint chorded cycles. In 2016, Molla, Santana, and Yeager refined this to characterize all graphs with \( |G| \geq 4k\) and minimum Ore-degree at least \( 6k - 2 \) that do not have \( k \) vertex-disjoint chorded cycles. We further refine this to characterize such graphs with Ore-degree at least \( 6k - 3\) that do not have \( k \) vertex-disjoint chorded cycles.
Majorana bound states from exceptional points in non-topological superconductors
Pablo San-Jose, Jorge Cayao, Elsa Prada & Ramón Aguado
Recent experimental efforts towards the detection of Majorana bound states have focused on creating the conditions for topological superconductivity. Here we demonstrate an alternative route, which achieves fully localised zero-energy Majorana bound states when a topologically trivial superconductor is strongly coupled to a helical normal region. Such a junction can be experimentally realised by e.g. proximitizing a finite section of a nanowire with spin-orbit coupling and combining electrostatic depletion and a Zeeman field to drive the non-proximitized (normal) portion into a helical phase. Majorana zero modes emerge in such an open system without fine-tuning as a result of charge-conjugation symmetry and can be ultimately linked to the existence of 'exceptional points' (EPs) in parameter space, where two quasibound Andreev levels bifurcate into two quasibound Majorana zero modes. After the EP, one of the latter becomes non-decaying as the junction approaches perfect Andreev reflection, thus resulting in a Majorana dark state (MDS) localised at the NS junction. We show that MDSs exhibit the full range of properties associated to conventional closed-system Majorana bound states (zero-energy, self-conjugation, 4π-Josephson effect and non-Abelian braiding statistics), while not requiring topological superconductivity.
The emergence of topologically protected Majorana zero modes in topological superconductors has recently entered the spotlight of condensed matter research [1–5]. One of the main reasons is the remarkable prediction that such Majorana bound states (MBSs), also known as Majorana zero modes, should obey non-Abelian braiding statistics [6,7], much like the 5/2 state in the fractional quantum Hall effect, without requiring many-body correlations. It has been argued that the successful generation, detection and manipulation of MBSs would open the possibility of practical topologically protected quantum computation [8,9]. Despite impressive experimental progress [10–19], such ambitious goals have still not been conclusively achieved.
A number of practical proposals have been put forward aiming to generate the conditions for the spontaneous emergence of robust MBSs in real devices. Some of the most studied ones are based on proximitizing topological insulators [20] or semiconductor nanowires [21,22]. The core challenge in all these proposals has been to artificially synthesise a topologically non-trivial superconductor with a well-defined and robust topological gap [23]. The bulk-boundary correspondence principle dictates that the superconductor surface is then host to topologically protected MBSs. Creating a topological gap is arguably the main practical difficulty of such proposals, particularly since topological superconductors are rather sensitive to disorder.
In this work we demonstrate an alternative scheme for the creation of MBSs that does not require topological superconductivity at all. The possibility of engineering Majoranas in topologically trivial setups has been studied in other contexts before. It has been shown, for example, that topological excitations, and MBSs in particular, may arise in trivial superconductors under adequate external driving [24,25], similarly to the mechanism behind Floquet topological insulators [26]. Also, cold-atom systems with specifically engineered dissipation [27,28] may relax into a topologically non-trivial steady state that is host to dark states at zero energy with Majorana properties. Our approach is implemented in a solid-state setup and is based on proximitized semiconductor nanowires. In its topologically trivial regime, such a wire will not generate MBSs when terminated with vacuum (i.e. at a closed boundary). Its spectrum is instead a set of Bogoliubov quasiparticles that can be seen as pairs of Majoranas hybridized to finite energy. By creating a sufficiently transparent normal-superconductor (NS) junction at one end of the wire, we create a different kind of open boundary, to which the bulk-boundary correspondence principle does not apply. Such a high-transparency junction can be fabricated by proximitizing only one half of a pristine semiconducting nanowire (Fig. 1). We demonstrate that, as one tunes the normal side into a helical (half-metallic) regime via a parallel Zeeman field, one Majorana pair becomes decoupled into two zero-energy resonances, one of which is subsequently removed into the reservoir, leaving behind a stable Majorana 'dark state' (MDS) at the NS junction without requiring non-trivial superconductivity. (A dark state here is defined as a bound state that, despite having an energy embedded in a continuum of delocalized excitations, is orthogonal to them and therefore non-decaying.)
Figure 1: A sketch of a semiconductor nanowire, partially proximitized with a conventional superconductor on the right side.
The normal side may be depleted (small Fermi energy μN) and may become helical under a Zeeman field, B > Bh ≡ μN, while the superconducting side remains topologically trivial at small fields. For sufficiently transparent junctions in the Andreev limit (Δ ≪ μS), this results in a Majorana dark state bound to the junction, in red.
The emergence of these MDSs cannot be described using the conventional band topology language, but rather needs to be understood in the context of open quantum systems. Unlike in closed systems, eigenstates in open systems decay with time, as the state leaks into the reservoir. Hence, their energies εp = Ep − iΓp are no longer real, but have a negative imaginary part that represents this decay rate Γp. Such a complex spectrum is sometimes modelled by a non-Hermitian Hamiltonian. A more precise and general description is obtained by considering the analytic continuation of the scattering matrix S(ω), where the energy ω of incoming states from the reservoir is allowed to extend into the lower complex half-plane. The analogue of the real eigenvalues of the closed system is then the set of poles of S(ω) for the open system. In NS junctions, due to the charge-conjugation symmetry of the Nambu representation [29,30], all poles with non-zero real part Ep come in pairs ±Ep − iΓp. In this sense, poles with zero real part Ep = 0 are special, as they need not come in pairs. Their total number Z (which by convention excludes 'buried poles', with zero real and imaginary part) has a very important meaning in open NS junctions and defines the analogue of the band topology of a closed quantum system. Indeed, it has been shown that the topology of the scattering matrix in quasi-1D NS junctions is classified by the invariant ν = Z mod 2, i.e. the parity of the number of poles with zero real energy, with ν = 1 signalling an open system with non-trivial topology from the point of view of scattering [29,30].
In terms of its S-matrix poles, the topological transitions of an NS junction follow a characteristic pattern. Consider a trivial S-matrix (ν = 0). As a given parameter of the system is varied, a pair of poles εp = ±Ep − iΓp first approach each other and become degenerate at the imaginary axis, Ep = 0. This degeneracy is the open-system counterpart of a band inversion in a closed system and is known as an exceptional point (EP) [31–35]. It differs from a closed-system degeneracy in that the corresponding eigenstates do not remain orthogonal, but rather coalesce into one. Exceptional points have been extensively studied in photonics, where they have been shown to give rise to novel phenomena unique to open systems [36–43]. Their implications in electronic systems, however, have been seldom discussed [44,45].
After the exceptional point, the two degenerate poles branch along the imaginary axis and their decay rates bifurcate into different values Γ0 < Γ1 (see Fig. 2d for an example). The exceptional point thus involves a change of Z by 2, but ν = 0 remains unchanged. If Γ0 evolves towards zero (or close enough to zero for all practical purposes), it is said that the corresponding pole is buried, and it is excluded from the Z count, effectively signalling a change of topology to ν = 1. Crucially, the existence of a buried pole implies the existence of a zero-energy non-decaying (dark) state somewhere in the system with Majorana properties. In this sense, S-matrix topology is a true generalization of the band-structure topology of closed systems and has the same implications in terms of topologically protected excitations, albeit in the context of open systems. It is also closely linked to the existence of an exceptional point in the system that occurs before the pole burying, in the trivial ν = 0 phase.
Figure 2: Exceptional points in the open two-mode Kitaev model.
(a) Sketch of the system. (b) Bandstructure in the closed limit Γres = 0 for Δ = 0.5 t and L = 140, exhibiting trivial (ν = 0) and non-trivial (ν = 1) phases as a function of chemical potential μ. (c,d) Evolution of the lowest lying complex eigenvalues in the trivial phase at μ = 0 [thick black in (b)], as the coupling to the reservoirs Γres is increased. An exceptional point is crossed at a critical Γres ≈ 0.55 t, which results in the emergence of Z = 2 eigengalues per edge with Re εp = 0. At the exceptional point both the eigenvalues and the eigenstates coalesce [dot-dashed green curve in (c)]. (e) Full phase diagram for Z as a function of μ and Γres. Colors indicate the decay rate Γ0 of the most stable of the Z eigenvalues, red in panel (d).
Here we show that, while at weak couplings between the normal environment and the superconductor a non-trivial S-matrix implies that the superconductor is also non-trivial in isolation, this is not the case at strong couplings. In specific but experimentally relevant conditions, a sufficiently transparent junction between a normal metal and a trivial superconductor has a non-trivial S-matrix with ν = 1 and is thus host to a Majorana dark state. The required conditions are: (1) the system should have a finite spin-orbit coupling at the contact, (2) the normal part of the junction should be sufficiently depleted (small Fermi energy μN) and polarised by a Zeeman field B into a helical half-metallic phase B > μN, (3) the normal transmission of the junction should be close to one and (4) the trivial superconductor should be in the Andreev limit Δ ≪ μS, where Δ is the superconducting gap and μS is its Fermi energy. The rationale behind these conditions is to achieve good Andreev reflection of helical carriers from the normal side, which generates a MDS strongly localized at the junction. The intuitive mechanism behind the process is as follows. When isolated, the trivial superconducting wire is host to a Majorana pair at each end, which is strongly hybridized into a fermionic Bogoliubov state, with real energies ±Ep. As the contact is opened onto a helical wire (which has a single decay channel), one (and only one) of the two Majoranas escapes into the reservoir (blue state in Fig. 1), leaving behind the orthogonal Majorana as a dark state (red), pinned at zero energy since it no longer overlaps with the escaped Majorana. This process takes place formally by crossing an exceptional point bifurcation of the ±Ep poles into poles −iΓ0,1 on the imaginary axis. As the conditions above are fulfilled, the zero-energy dark state becomes truly non-decaying, Γ0 → 0. Deviations from these conditions result in a residual decay rate Γ0. The dark state is then a sharp Majorana resonance centered at zero, but with Majorana properties surviving at times shorter than τ0 = 1/Γ0.
We analyse the generation of MDSs in two different models. First we consider the problem of the open multimode Kitaev wire and see how the above scenario plays out in this simple model. Then we consider a more realistic model of an NS contact in a proximitized semiconducting wire. We describe the experimental signatures associated with the MDS. We also analyse the properties of the dark state to demonstrate that it indeed shares all the characteristics of a Majorana bound state from a closed topological system, including self-conjugation, locality and neutrality, as well as the appearance of uniform charge oscillations, a 4π Josephson effect and non-Abelian braiding when combined with another MDS. We show that the residual decay rate of the MDS in non-ideal conditions depends exponentially on junction lengthscales, just like the Majorana splitting in closed topological superconductors.
Exceptional points and Majorana dark states in the open Kitaev model
We first study the emergence of stable zero-energy dark states in an open toy model, in preparation for the more realistic calculation presented later for proximitized semiconducting wires. We consider a natural multimode D-class extension46,47 of the original Kitaev model6: a finite-length (i.e. closed) quasi-1D chain of spinless fermions with px + ipy superconducting pairing, which has interesting topological properties. It may be written as a nearest-neighbour tight-binding Hamiltonian on an L × N square lattice,
The hopping amplitude is t, i and j are integer site indices and μ is the chemical potential. The px + ipy pairing is implemented through bond-dependent pairing amplitudes, real (Δ) on bonds along the wire and imaginary (iΔ) on transverse bonds. For N > 1, t > Δ and finite length L ≫ 1 (wire terminated by vacuum), this model exhibits an even-odd effect in its spectrum46 as μ is varied, see Fig. 2b. Odd (non-trivial) phases develop a single zero-energy MBS at each end of the wire (in red, Majorana number ν = 1), while even (trivial) phases have none (ν = 0). These MBSs arise as a result of the bulk-boundary correspondence principle when the bulk bandstructure of the wire becomes topologically non-trivial, with ν as the topological invariant47,48,49.
We now analyse the behaviour of the spectrum with a different type of boundary. Instead of vacuum we couple the wire to a normal single-mode reservoir at each end. This is a very different boundary (it is gapless), to which the bulk-boundary correspondence principle is not applicable. We will show how a strong enough coupling to this specific single-mode reservoir may allow certain topologically trivial phases (ν = 0, no MBSs with vacuum termination) to develop stable bound states at the contacts, or dark states pinned to zero energy and closely related to the MBSs of non-trivial phases.
For concreteness we consider the N = 2 Kitaev wire, which is the simplest that manifests this phenomenon. The wire is coupled to single-mode (N = 1) normal reservoirs at each end, as shown in Fig. 2a. Their effect is modelled through a non-Hermitian self-energy Σres on the end sites, which results in a non-Hermitian effective Hamiltonian for the open multimode Kitaev wire
The local decay rate Γres depends on the (constant) local density of states of the reservoir at the contact ρres and the hopping amplitude t′ into the reservoir, Γres = π|t′|2ρres. The shift ReΣres is neglected for the moment.
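To make this construction concrete, the following minimal Python sketch (our illustration, not the authors' code) builds the BdG matrix of a single-channel Kitaev chain and adds a wide-band self-energy −iΓres on the particle and hole components of the left end site; all parameter values are arbitrary. In this single-channel toy the chain at μ = 0 is topological, so the two poles nearest the origin bifurcate onto the imaginary axis already at a small coupling set by the end-Majorana overlap; the N = 2 trivial phase discussed in the text behaves analogously, with the two overlapping Majoranas residing at the same edge.

import numpy as np

def open_kitaev_poles(L=40, t=1.0, delta=0.2, mu=0.0, gamma=0.5):
    # closed single-channel Kitaev chain in the basis (c_1..c_L, c_1^+..c_L^+)
    h = -mu * np.eye(L) - t * (np.eye(L, k=1) + np.eye(L, k=-1))
    d = delta * (np.eye(L, k=1) - np.eye(L, k=-1))   # antisymmetric p-wave pairing
    H = np.block([[h, d], [-d, -h]])
    # wide-band normal reservoir on the left end only: Sigma_res = -i*gamma
    sigma = np.zeros((2 * L, 2 * L), dtype=complex)
    sigma[0, 0] = sigma[L, L] = -1j * gamma
    return np.linalg.eigvals(H + sigma)

for g in (0.0, 0.2, 0.8):
    poles = open_kitaev_poles(gamma=g)
    poles = poles[np.argsort(np.abs(poles))][:2]   # two poles nearest the origin
    print(f"gamma = {g:.1f}: {np.round(poles, 6)}")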
The eigenvalues of the effective Hamiltonian are complex, εp = Ep − iΓp, and represent the real energy Ep and decay rate Γp into the reservoirs of quasibound states in the wire. They are also the poles of the scattering matrix S(ω) from the reservoir for complex ω. Causality implies Im εp < 0, while the charge-conjugation symmetry of the Bogoliubov-de Gennes description guarantees that if εp is an eigenvalue, then so is −εp*. This symmetry classifies eigenvalues into conjugate pairs, unless they lie exactly on the imaginary axis, in which case they may have no partner. As summarised in the introduction, the number of such lone, purely imaginary (or zero) eigenvalues per edge is denoted by Z and carries a profound physical significance related to the topology of the scattering matrix at that contact. The number Z allowed Pikulin and Nazarov29,30 to classify the scattering matrices S(ω) of generic NS junctions into trivial (even Z) and non-trivial (odd Z) topological classes and to connect it to bulk topological invariants by ν = Z mod 2. Z is moreover a robust quantity. An unpaired quasibound state with εp = −iΓp cannot acquire a finite real energy Ep through any small perturbation; such a perturbation can only change its decay rate Γp. This robust pinning to zero real energy, protected by charge-conjugation symmetry, is reminiscent of the topologically protected zero-energy pinning of MBSs in closed topological superconductors.
The evolution of the complex εp as a function of Γres for μ = 0 is shown in Fig. 2(c,d) and the phase diagram of the model in the full (μ, Γres) plane is shown in Fig. 2e. The trivial ν = Z = 0 phase at Γres = 0 around μ = 0 is characterized by two subgap non-zero eigenvalues resulting from the hybridization of two MBSs at each edge [black curves in panel (b)]. As Γres is increased, this trivial phase experiences a transition into Z = 2 through the fusion of these two levels. This is an instance of an 'exceptional point', a generic feature of open systems, or in general of non-Hermitian matrices with certain symmetries34,45, at which two complex eigenvalues become degenerate and bifurcate in the complex plane. One such bifurcation is shown in panels (c,d). At the exceptional point (here Γres ≈ 0.55t for μ = 0), the corresponding eigenstates coalesce [their overlap reaches one, dot-dashed green curve in panel (c)], in contrast to the Hermitian case where degenerate eigenstates remain orthogonal.
After crossing the exceptional point, the two hybridized Majoranas become exact zero modes, albeit with different lifetimes, without any fine tuning. One of the two eigenvalues at each contact gradually approaches the origin [red in panel (d)], i.e. its imaginary part also approaches zero. We denote this imaginary part, or decay rate, by Γ0. Note that Γ0 is not of order Γres. In fact Γ0 is suppressed as Γres increases, see panels (c,e). As a result, the corresponding state evolves into a long-lived zero-energy resonance localised at an increasingly transparent contact. The suppression of its decay Γ0 is a result of the specific single-mode nature of the reservoirs considered here and is not related to any change in the bulk topology, which remains trivial ν = 0.
Physically, the above bifurcation can be understood as follows. In the N = 2 trivial phase around μ = 0, the Kitaev wire actually hosts two Majorana bound states at each end, initially at zero energy, that owing to their strong overlap hybridize into finite energy (thick black line in Fig. 2b). These two states are schematically represented by the red and blue circles in Fig. 2a. As the wire is opened to the single-mode reservoir, only one combination of these two states (blue circle) can escape across the transparent junction into the reservoir, while its overlap with the orthogonal combination (red circle), which remains localized, is suppressed. As a result, the decay rates of the two states bifurcate at the exceptional point while their real part becomes exactly zero. Thus, the fact that the reservoir has a single mode into which only one of the two Majoranas may decay is essential in order to yield one decoupled dark state at a transparent junction. Here, transparent should be understood as perfect Andreev reflection probability, which implies perfect delocalisation of one Majorana and a decoupled zero-energy dark state. To have perfect Andreev reflection, however, it is not enough to have a transparent contact in the normal phase (Δ = 0). It is also necessary that the pairing is a small perturbation to the normal phase (Δ ≪ t, Andreev limit). Otherwise a non-zero normal reflection will arise at the NS junction, leading to a finite residual decay Γ0. As we approach the Andreev limit, the suppression of the residual decay rate Γ0 at large Γres is very fast, decreasing exponentially with the ratio t/Δ (not shown). Hence, the Majorana resonance quickly becomes a proper MBS for t, Γres ≫ Δ.
It has been shown that realistic models for an isolated, Zeeman-polarised, proximitized 1D Rashba wire belong to the same topological class as the Kitaev model1. Such models also develop a ν = 1 topological phase with MBSs for strong enough Zeeman fields, while they remain trivial (ν = 0) at low fields. In the following we study the emergence, at exceptional points, of zero-energy dark states with Majorana properties in open topologically trivial Rashba wires.
Majorana dark states in a proximitized Rashba wire
In recent years, experimental progress has been reported towards the detection of MBSs in Rashba nanowires10,11,12,13,14,15,16. These efforts were in large part stimulated by the prediction by Lutchyn et al.21 and Oreg et al.22 that this type of system would undergo a topological transition into an effective p-wave superconducting phase when a Zeeman field B parallel to the wire exceeds a critical value Bc. We now consider models relevant to single-mode Rashba wires and demonstrate the formation of MDSs for B < Bc.
Following refs 21,22, we model a thin proximitized Rashba nanowire under a Zeeman field by a spinful 1D tight-binding chain,
The parameters of the model are the chemical potential μS measured from depletion, the hopping t = ℏ²/(2m*a0²), where m* is the effective mass and a0 is the lattice spacing, the induced pairing Δ, the SO hopping tSO = αSO/(2a0), where αSO = ℏ²/(m*λSO) is the SO coupling and λSO is the SO length, and the Zeeman field B = gμB𝐁/2, where g is the g-factor and 𝐁 is the magnetic field along the wire. In what follows we present simulations with parameters corresponding to an InSb proximitized wire10 (Δ = 0.25 meV, αSO = 20 meV nm, m* = 0.015me, g = 40). As for the Kitaev wire, we will consider both an isolated proximitized wire of finite length and an open NS contact between proximitized and non-proximitized sections of a nanowire, Fig. 1. The latter is assumed infinite (see the supplemental information for finite-length effects) and is modelled by the same Hamiltonian, albeit with Δ = 0 and a μN in place of μS. The normal-state average transparency per mode TN of the contact is physically controlled by electrostatic gating in an actual device and is modelled here either by a hopping t′ ≤ t across the contact, or by a spatial interpolation between μN and μS across a certain contact length LC determined by the distance of the wire to the depletion gate (Δ is always abrupt, see Supplementary Material). Note that if the density of defects in the wire is small, TN ~ 1 − exp(−LC/λ), with λ a lengthscale of the order of the average Fermi wavelength.
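To make the discretization explicit, here is a minimal Python sketch (ours, not the authors' code) of the corresponding closed-wire BdG problem; the basis ordering, the sign convention of the Rashba hopping and the values of L and a0 are our assumptions. The printed quantity is the eigenvalue of smallest magnitude, which should dip near the critical field Bc discussed below.

import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def lowest_energy(B, L=150, a0=10.0, muS=0.125, Delta=0.25,
                  alphaSO=20.0, mstar=0.015):
    t = 76.2 / (2 * mstar * a0**2)        # hbar^2/(2 m* a0^2) in meV (a0 in nm)
    tso = alphaSO / (2 * a0)              # SO hopping in meV
    onsite = (2 * t - muS) * s0 + B * sx  # kinetic offset plus Zeeman term
    hop = -t * s0 + 1j * tso * sy         # Rashba SO enters the hopping
    h = (np.kron(np.eye(L), onsite)
         + np.kron(np.eye(L, k=1), hop)
         + np.kron(np.eye(L, k=-1), hop.conj().T))
    pair = np.kron(np.eye(L), Delta * 1j * sy)   # singlet s-wave pairing
    H = np.block([[h, pair], [pair.conj().T, -h.conj()]])
    return np.min(np.abs(np.linalg.eigvalsh(H)))

Bc = np.hypot(0.25, 0.125)   # sqrt(Delta^2 + muS^2)
for B in (0.0, 0.5 * Bc, Bc, 1.5 * Bc):
    print(f"B = {B:.3f} meV -> lowest |E| = {lowest_energy(B):.4f} meV")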
Topologically, the isolated nanowire belongs, for finite B, to the same one-dimensional D-class as the multimode Kitaev wire of the preceding section. For |B| smaller than a critical field Bc = √(Δ² + μS²), the nanowire is trivial (ν = Z = 0). As |B| exceeds Bc, the topological invariant becomes non-trivial (ν = 1) through a band inversion, see Fig. 3a, much like the transition at |μ| ≈ t in Fig. 2b, with the peculiarity that the two hybridized Majoranas in the trivial |B| < Bc phase are not deep inside the gap, but at the band edge. As μS grows, Bc quickly becomes unrealistically large and the nanowire remains trivial for all reasonable fields, Fig. 3b. (A large μS, incidentally, is the natural experimental regime, since the superconductor will typically transfer charge to the proximitized section of the wire, which is difficult to deplete due to screening.)
Exceptional points in a proximitized Rashba nanowire.
(a,b) The spectrum for an InSb proximitized wire (Δ = 0.25 meV, αSO = 20 meV nm, m* = 0.015 me, g = 40) at vanishing NS transparency both for μS/Δ ≈ 0.5 (a) and μS/Δ ≈ 10 (b), for the same range of Zeeman field B. (c,d) The corresponding phase diagram of the NS junction in the B − TN plane. Colors denote the residual decay rate Γ0 after the exceptional point (thick black line). Panel (d) in the Andreev limit shows the formation of Majorana dark states (MDSs) in the trivial phase B < Bc at high transparency TN [red regions]. Inset: the residual decay rate of the MDSs vanishes as μS is pushed into the Andreev limit μS ≫ Δ. For the realistic parameters of the simulations, this residual decay rate corresponds to a lifetime of the MDSs of ~0.2 microseconds.
As the nanowire contains a non-proximitized normal section, Fig. 1, a phenomenology similar to that of the open N = 2 Kitaev wire arises. We define the effective Hamiltonian as in Eq. (2), albeit with the exact self-energy from the normal portion of the wire evaluated at zero frequency, ΣN(ω = 0) (see Supplementary Material). As the coupling to the reservoirs increases (the junction transparency TN grows), two eigenvalues drop out from the band edge into the lower complex plane and merge at the imaginary axis at an exceptional point, much like in Fig. 2d. In contrast to the Kitaev model, however, this exceptional point is only reached if the Zeeman field exceeds a certain value Bh, see dashed lines in Fig. 3(c,d). This Bh is the field required for the normal nanowire to become helical. For |B| > Bh, the normal nanowire hosts a single propagating mode, with the other spin sector completely depleted by the Zeeman field, and behaves as a single-mode reservoir like the one discussed for the Kitaev wire. Note that this helical regime should be achievable using electrostatic gating, since it only requires a sufficient depletion of the non-proximitized section of the semiconductor nanowire, unscreened by the superconductor.
After crossing the exceptional point (region above the thick black curves in Fig. 3(c,d)), the scattering matrix S(ω) at the trivial NS contact acquires Z = 2 purely imaginary poles, one of which moves towards the origin. The asymptotic decay rate Γ0 in the limit TN → 1 is not vanishing in general, so that the corresponding states should be denoted Majorana resonances50. However, in the experimentally relevant Andreev limit μS ≫ Δ, Fig. 3d, the asymptotic Γ0 vanishes exponentially with μS/Δ (see inset). When the wire is tuned into the regime μS ≫ Δ and the contact is made sufficiently transparent, the Majorana resonances are stabilised into proper non-decaying MDSs. For example, for the realistic parameters of the simulations in Fig. 3d, the lifetime for the minimum widths (see inset) corresponds to ~0.2 microseconds. While this is already a rather long time, it is not an upper bound, since even longer lifetimes can in principle be obtained by increasing μS.
The interpretation of this mechanism is the same as in the Kitaev model. While two MBSs at opposite ends of an isolated topological nanowire can be considered exact zero modes up to exponentially small corrections (in the wire length) coming from their mutual overlap, an MDS (red in Fig. 1) also becomes an exact Majorana zero mode without any fine tuning, through the suppression of its overlap with its sibling Majorana (blue in Fig. 1), which escapes into the helical reservoir. It is important to note that, in contrast to isolated topological wires, any residual overlap that remains after the exceptional point does not translate into a finite energy splitting, but rather into a residual decay rate Γ0.
Physical properties of Majorana dark states
Having established the emergence of zero-energy dark states at a transparent helical metal-trivial superconductor junction in the Andreev limit, we now turn to the analysis of the physical properties of said states and compare them to conventional MBSs. We will study their signatures in transport, their wavefunction locality, particle-hole conjugation and charge neutrality, uniform charge oscillations and, finally, the 4π fractional Josephson effect and their non-Abelian braiding properties in SNS geometries.
Signatures in dI/dV
We start by analysing the differential conductance dI/dV through a normal tunnelling probe weakly coupled to the neighbourhood of the junction, brown in Fig. 1. In the tunnelling limit, this is proportional to the local density of states at the junction at energy ε = eV, where V is the bias voltage. (Note that this is different from the differential conductance across the NS contact, which is not necessarily in the tunnelling regime, see below.) We computed the tunnelling dI/dV versus B and V using standard quantum transport techniques51 (see Supplementary Material). It is shown in Fig. 4(a–f) for several transparencies TN, both far from the Andreev limit (top row) and deep into the Andreev limit (bottom row). Atop each panel, the evolution of the lowest complex eigenvalues with B is shown, with an exceptional-point bifurcation at B = Bep. Panel (f), with TN → 1, corresponds to a cut along the top of Fig. 3d, for which MDSs are fully developed. Their presence gives rise to a sharp zero-bias anomaly (ZBA) in transport at fields B > Bh, with a sharpness that increases exponentially with μS/Δ. This type of ZBA was the first signature of MBSs explored experimentally10, though in the present context it arises far from the topological regime, B ≪ Bc. The ZBA is not preceded by signatures of a gap closing. Note also that away from the ideal conditions TN → 1, μS ≫ Δ, wide Majorana resonances are also visible in the topologically trivial regime, panels (b,c,e), albeit of finite lifetime. Injecting current through the normal reservoir [panels (g,h)] yields a very different dI/dV profile for large TN (unlike for TN ≪ 1), which is no longer a measurement of the local density of states. The ZBA appears in this case as a sharp dip on top of a constant 2e2/h background (perfect Andreev reflection), though again only for |B| > Bh. This is consistent with general scattering theory30,52,53,54,55, which predicts dI/dV = 0 from a single-mode reservoir at V → 0 if the topology is trivial. The dI/dV = 2e2/h plateau is preceded by another unsplit triangular 4e2/h plateau for |B| < Bh, forming a characteristic feature that should be experimentally recognizable. Note that for B > Bc one also obtains a sharp dip at zero, panel (g). This is due to the finite length of the superconducting wire29,30, LS = 1.5 μm here.
Signatures in dI/dV.
(a–f) Tunnelling differential conductance dI/dV through a third probe contacted at the junction (brown in Fig. 1), for different junction transparencies TN and for the same range of Zeeman fields B and bias voltages V. The first row corresponds to μS = 0.5Δ, the second row to μS = 10Δ, as in Fig. 3. The length of the proximitized wire is 1.5 μm. The evolution of the lowest poles across a B = Bep exceptional point is shown atop each panel (higher poles not plotted for clarity). Note the sharp zero-bias anomaly (ZBA) in the transparent, Andreev limit of panel (f), signalling the presence of MDSs in the topologically trivial regime Bh < B < Bc. (g,h) Differential conductance across the junction, for the same parameters as panels (c,f). The ZBA becomes a sharp dip on a constant 2G0 = 2e2/h background.
Spatial localization and Majorana character
The spatial locality and Majorana self-conjugation γ = γ† are assessed next, by analysing the wavefunction of the MDSs. Figure 5a shows, in red, the quasiparticle density |ψ(x)|2 = |u|2 + |v|2 (solid lines) and charge density ρ(x) = |u|2 − |v|2 (dashed lines) of the MDS marked by the white arrow in Fig. 4f (u and v are the particle and hole components of its wavefunction, respectively). As discussed above, the MDS represents a non-decaying state at zero energy. The figure shows that it is furthermore well localised at the junction, decaying exponentially as ~e−x/ξ into the superconductor, with a Majorana localization length ξ = vF/Δ(B)56 (see envelopes and inset in Fig. 5a). For comparison we also show in black the spatial probability |ψ(x)|2 of a conventional B > Bc MBS for a topological bulk at zero transparency (isolated topological wire). For both states, the charge density ρ(x) is zero, as implied by the Majorana relation γ = γ†.
Spatial localization, Majorana character and 4π fractional Josephson effect.
(a) Spatial quasiparticle density |u|2 + |v|2 for Majorana bound states in an NS junction, located at x = 0. The solid red curve corresponds to an MDS between trivial bulks (TN ≈ 1, Bh < B < Bc, see white arrow in Fig. 4f), while the solid black curve corresponds to a conventional MBS in a closed topological wire (TN = 0, B > Bc). Both decay exponentially ~e−x/ξ with a Majorana localization length ξ = vF/Δ(B)56 (see inset). Dotted lines are the corresponding charge densities |u|2 − |v|2, zero everywhere, revealing the Majorana character of both states. (b,c) Same as (a), albeit in an NSN geometry with finite LS, hosting two overlapping Majoranas. The charge densities |u|2 − |v|2 exhibit spatially uniform oscillations due to the overlap. (d,e) 4π Josephson effect from MDSs in a Josephson junction of transparency TJ, completely open to helical contacts (see inset, B = Bh < Bc, TN ≈ 1). The MDSs at each contact hybridize across the junction. Their real energies (solid curves) depend on the superconducting phase difference ϕ and are zero at ϕ = π. The double exceptional-point structure around ϕ = π found here is already a signature, in the open regime, of an underlying protected crossing in the closed limit (which here would be reached in the TN → 1, TJ → 0 limit). While for large TJ (d) the states have a fast decay rate Γp (dashed curves), the decay is quickly suppressed at small TJ (e). At the same time, the two exceptional points approach each other and merge at ϕ = π, which then appears as the protected level crossing Ep = 0 of a closed system. This results in a 4π Josephson effect like that of topological Josephson junctions, despite the trivial topology.
We now examine the charge density patterns that arise from the weak overlap of two MDSs. It was shown57,58 that the charge density ρ(x) = |u|2 − |v|2, which is zero everywhere for an isolated MBS (see Fig. 5a), develops a spatial oscillatory pattern that is uniform throughout space when two MBSs approach each other in a 1D superconductor, irrespective of their particular positions. This is a very specific and non-trivial signature of MBSs that probes the state wavefunction itself and was proposed as a way to detect MBSs through charge sensing. A transparent NSN junction (with N portions coupled to reservoirs) provides a convenient geometry to study this effect in the topologically trivial regime. Two localized MDSs appear for |B| > Bh at the two ends of the S section. They weakly overlap and should thus be expected to exhibit uniform spatial charge oscillation throughout the superconductor in analogy to conventional MBSs. Figure 5(b,c) compare the charge density ρ(x) for B > Bc MBSs in the tunnelling limit and Bh < B < Bc in the transparent limit. Once more, we see the strong similarity between the two cases, which points to an essential equivalence between the two types of states.
Fractional Josephson effect
We next consider the fractional Josephson effect. A Josephson junction hosting a pair of conventional MBSs was shown, in the absence of quasiparticle poisoning, to develop an anomalous supercurrent I(ϕ) that is 4π-periodic in the superconductor phase difference ϕ6,21,22,59,60,61. The anomalous supercurrent component is carried by the MBSs, which exhibit a parity-protected zero-energy crossing at ϕ = π. We consider the analogous Josephson experiment in which a pair of MDSs reside at either side of a topologically trivial Josephson junction. This may be achieved in an open version of a standard Josephson junction formed in a superconducting ring, see inset of Fig. 5e. The two sides of the weak link, of transmission TJ, are coupled to normal reservoirs by two NS contacts that are kept open (TN ≈ 1), so that each of them hosts an MDS for B > Bh. A flux Φ through the superconducting loop controls the phase difference ϕ across the junction, which sustains a supercurrent I(ϕ). The main contribution to the supercurrent is carried by the ϕ-dependent hybridization of the two MDSs. As long as TJ is small enough, the MDSs will not decay, since their sibling Majoranas are essentially unperturbed by TJ and remain strongly delocalised into the helical reservoirs. As TJ is increased, however, the MDSs become finite-lifetime resonances, as in the case of TN < 1.
Figure 5d shows the computed S-matrix poles of the open Josephson junction resulting from two hybridized decaying Majorana resonances (TJ is large), located at εp and −εp*. These poles represent Andreev quasibound states at the junction. Their real energy ±Ep is shown by the solid lines and their decay rate by the dashed lines. Each of these poles contributes a Lorentzian to the density of states (DOS) of the junction, ρ(ω). The derivative of the DOS with respect to ϕ, integrated over occupied states (ω < 0), yields the supercurrent carried by these decaying states62, see Supplementary Material. Around ϕ = π, we see that ±Ep(ϕ) crosses two new exceptional points, which in particular force ±Ep = 0 at ϕ = π, just like in isolated topological Josephson junctions. In practice, however, the finite decay rate Γ = Im εp of the hybridized Majorana resonances for large TJ (or smaller TN < 1) precludes a fractional Josephson effect from developing, with 4π-periodic harmonics only expected in fast transients. This also happens in finite-length topological nanowires (see e.g. the Andreev spectrum of Fig. 6a) and in Josephson junctions with quasiparticle poisoning63. We emphasize, moreover, that the double exceptional-point structure around ϕ = π found here is similarly obtained in the phase dependence of a topologically non-trivial Josephson junction open to an environment64 and has more generally been shown to emerge from Dirac cones in photonic crystals with non-Hermiticity arising from radiation65. As such, it is already a signature in the open regime of an underlying protected crossing in the closed limit (here TN → 1, TJ → 0).
Braiding of MBSs and MDSs in 1D.
Topological (a) and non-topological (b) four-Majorana Josephson junctions (zero-energy γ1,2 Majoranas in red, ancilla Majoranas in blue) and the corresponding low-energy Andreev spectrum as a function of the superconducting phase difference ϕ. The γ1,2 Majoranas undergo braiding as ϕ adiabatically increases by 2π. (c,d) Fidelity F = |sin2ϕB| of the braiding for the two systems in terms of the Berry phase ±ϕB of the even/odd ground states upon a ϕ = 0 → 2π adiabatic sweep. (e,f) The corresponding evolution in real space of γ1,2 as a function of ϕ, with the relative ±1 phase with respect to the initial basis encoded in blue/red (see main text).
Figure 5e shows a case closer to this limit, with TN ≈ 1 and TJ = 0.06. The suppression of TJ enhances Andreev reflection from the normal leads and suppresses normal-reflection processes. The residual decay rate of the hybridized MDSs is similarly suppressed (dashed line). At the same time, the two exceptional points approach each other and merge at ϕ = π, which then appears as the protected level crossing Ep = 0 of a closed system. Crucially, no number-conserving perturbation can lift this crossing; it can only split it again into two exceptional points, always with Ep = 0 at ϕ = π. The two MDSs thus behave as MBSs in a closed topological Josephson junction, although their ϕ = π crossing is protected by the two underlying exceptional points65 and charge-conjugation symmetry45, instead of fermion parity (which cannot be defined in an open system). We analysed the continuous evolution of the wavefunction through the level crossing at ϕ = π for TJ → 0 and found that a system driven slowly through this point (though fast compared to the residual decay rate Γ0) will evolve from its ground state to an excited state with 100% probability (green arrow in panel e). The two hybridized MDSs are indeed interchanged upon crossing ϕ = π. As a result, the Josephson current for small TJ becomes 4π-periodic within the residual lifetime 1/Γ0 of the junction MDSs. Moreover, the critical current scales as √TJ. This makes the MDS supercurrent indistinguishable from that of conventional MBSs in topological Josephson junctions.
Non-Abelian braiding
We finally analyse the most stringent test of the Majorana character of the MDSs, namely their non-Abelian braiding statistics. This property dictates that upon an adiabatic exchange of two spatially separated (zero-energy) MBSs, the corresponding operators undergo a transformation of the type γ1 → −γ2 and γ2 → γ1. This transformation corresponds to a Berry phase ±π/4 for the even/odd degenerate ground states after the adiabatic exchange7. If we prepare the system in its even ground state |e⟩ (where 'even' refers to the parity of the number of fermions) and perform a braiding operation, it will acquire a −π/4 global phase, |e⟩ → e−iπ/4|e⟩. Starting from the odd ground state |o⟩, which has an additional zero-energy fermion built from γ1 and γ2, the Berry phase is the opposite, |o⟩ → e+iπ/4|o⟩. This implies that under the braiding |e⟩ → e−iϕB|e⟩ and |o⟩ → e+iϕB|o⟩, with ϕB = π/4 irrespective of the details of the exchange path. These two phases together imply that γ1 → −γ2 and γ2 → γ1. More intuitively, if instead of creating a full zero-energy fermion we just create a single Majorana bound state, γ1 or γ2, localized at its corresponding boundary, its wavefunction will adiabatically evolve under the exchange process into the other Majorana, γ1 → −γ2, and similarly γ2 → γ1.
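For reference, the operator algebra behind these statements is standard7 and can be summarised compactly (our restatement; the overall sign of the exponent, and hence which parity sector acquires which phase, depends on orientation conventions):

$$U = e^{\frac{\pi}{4}\gamma_1\gamma_2} = \frac{1}{\sqrt{2}}(1 + \gamma_1\gamma_2), \qquad U\gamma_1 U^\dagger = -\gamma_2, \qquad U\gamma_2 U^\dagger = \gamma_1,$$

and, writing $c = (\gamma_1 + i\gamma_2)/2$ so that $\gamma_1\gamma_2 = i(1 - 2c^\dagger c)$,

$$U = e^{i\pi/4}\, e^{-i\frac{\pi}{2} c^\dagger c},$$

i.e. the two parity sectors $c^\dagger c = 0, 1$ acquire opposite phases ±π/4.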
In the following, we demonstrate that MDSs indeed exhibit these exact braiding statistics, just as conventional MBSs do. Our aim here is to conceptually demonstrate braiding of exceptional-point Majoranas, not to discuss the optimal braiding strategy for practical quantum computing. We have thus taken the simplest possible implementation of braiding, which we explain as follows. While the spatial exchange of two Majoranas in two dimensions is conceptually clear, a subtler, yet equivalent, method is to braid them in parameter space by means of at least one additional MBS pair (ancilla)66,67,68. Remarkably, this idea works even in one dimension68, and braiding of conventional Majoranas can be achieved by sweeping the superconducting phase difference ϕ from zero to 2π in a single Josephson junction between two finite-length topological (B > Bc) superconducting wires described by Eq. (3), see Fig. 6a. MBSs at the outer (non-contacted) ends of the wire will form a zero-energy fermion for long enough wires (in red), while the inner MBSs (the ancilla pair, in blue) will form a fermion with ϕ-dependent energy. The adiabatic evolution of the even and odd ground states as ϕ is increased by 2π indeed results in a Berry phase ±ϕB → ±π/4 for long LS, leading to exact braiding of the outer Majoranas in this limit. The computed fidelity of the braiding process, defined as F = |sin2ϕB|, is shown in Fig. 6c for varying B > Bc and LS. A typical spatial evolution of the two Majoranas under the braiding process (i.e. the adiabatically evolved wavefunctions starting from either γ1 or γ2) is shown in Fig. 6e, with blue/red denoting the relative sign with respect to the initial basis, which reveals the braiding statistics.
The equivalent braiding strategy for MDSs is implemented in a topologically trivial superconductor-helical normal-superconductor Josephson junction69. The ancilla Majoranas in this situation correspond to the two Majoranas delocalised over the helical region (blue in Fig. 6b), while the braiding Majoranas are the MDSs localised at each contact (red). Regardless of their different positions in space, the low-energy Andreev spectrum in this system is essentially the same as for a four-MBS topological Josephson junction69 (compare Fig. 6a,b). Computing the adiabatic evolution upon increasing ϕ by 2π, we once again find that ϕB approaches π/4, i.e. exact braiding statistics as in the topological case, for good enough contact transmission TN. The corresponding braiding fidelity F is shown in Fig. 6d for increasing TN and B. We see that F → 1 as TN → 1. Note the equivalent roles of TN here and of LS in the topological case: these are the relevant parameters that control the deviations of the braiding Majoranas from true zero-energy bound states in each case, and hence the deviations from exact braiding statistics. The spatial evolution of the braided Majoranas, shown in Fig. 6f, follows a similar pattern as in the topological case, albeit for wavefunctions concentrated at the NS contacts. Hence we conclude that, regarding their braiding statistics, MDSs behave once more in the same way as conventional MBSs.
We have presented a novel approach to engineering Majorana bound states in non-topological superconducting wires. Instead of inducing a topological transition in a proximitized Rashba wire, we consider a sufficiently transparent normal-superconductor junction created on a Rashba wire, with a topologically trivial superconducting side and a helical normal side. The strong coupling to the helical environment forces a single long-lived resonance to emerge from an exceptional point at precisely zero energy above a threshold transparency. As the transparency is increased further, this resonance evolves into a stable dark state localised at the junction. We have demonstrated this phenomenon both in the multimode Kitaev model and in a realistic model for a proximitized semiconducting nanowire.
The zero-energy state emerges as the junction traverses an exceptional point at the threshold transparency and becomes robustly pinned to zero energy, without fine tuning, by virtue of charge-conjugation symmetry. Moreover, its residual decay rate at perfect transparency is exponentially suppressed in the experimentally relevant Andreev limit. Finally, we have shown that the relevant transport and spectral properties associated with these zero-energy states, here dubbed Majorana dark states, are indistinguishable from those of conventional Majorana bound states, in particular their braiding statistics.
Thus, our proposal offers a promising new strategy towards generating and detecting Majorana bound states in the lab, with potential advantages over more conventional approaches in situations where manipulating the metallic environment and contact properties proves simpler than engineering a topological superconducting transition. Most importantly, the condition for reaching a helical phase on the normal side, while the proximitized region of the nanowire remains in the large-μS Andreev limit, is expected to be accessible experimentally. All the necessary ingredients for our proposal are already available in the lab: dramatic advances in the fabrication of thin semiconducting nanowires, proximitized with conventional s-wave superconductors, were recently reported70,71. Highly transparent, single-channel NS contacts and a high-quality proximity effect in quantitative agreement with theory were demonstrated. Reaching the helical regime in such high-quality and fully tunable devices should be within reach, so we expect that our proposal for MDSs will soon be tested.
The general connection demonstrated here between Majorana states and the bifurcation of zero-energy complex eigenvalues at exceptional points in open systems offers a new perspective on the mechanisms that may give rise to Majorana states in condensed matter systems. In this sense it extends conventional strategies based on topological superconductors. It moreover expands on the extensive studies of exceptional-point physics in optics, where it has been shown that state coalescence has far-reaching physical consequences, such as non-Abelian geometric phases72,73,74. To date, most studies of this kind have been concerned with open photonic systems under parity-time (PT) symmetry36,37,38 and with its spontaneous breakdown through exceptional-point bifurcations. This leads to intriguing physical phenomena, such as unidirectional transmission or reflection38, loss-induced transparency39, lasers with reversed pump dependence and other exotic properties40,41,42. Such striking optical phenomena are seemingly unrelated to the physics described in this work, but interesting connections are being made. These include open photonic systems with charge-conjugation symmetry45 (as opposed to PT symmetry), which show spectral transitions analogous to the zero-mode bifurcation discussed here. Also, radiation-induced non-Hermiticity has been demonstrated as a way to convert Dirac cones into exceptional points65, a phenomenon completely analogous to the conversion of zero-energy crossings of Bogoliubov-de Gennes excitations in Josephson junctions into double exceptional-point structures like those in Fig. 5d. Further research should extend the understanding of these interesting connections. More importantly, our study and others45,65 raise relevant questions, such as whether there is a deeper connection between topological transitions in closed systems and spectral bifurcations in non-Hermitian systems. Answering such questions would help advance our understanding of the meaning of non-trivial topology in open systems.
How to cite this article: San-Jose, P. et al. Majorana bound states from exceptional points in non-topological superconductors. Sci. Rep. 6, 21427; doi: 10.1038/srep21427 (2016).
Alicea, J. New directions in the pursuit of majorana fermions in solid state systems. Rep. Prog. Phys. 75, 076501 (2012).
Leijnse, M. & Flensberg, K. Introduction to topological superconductivity and majorana fermions. Semiconductor Science and Technology 27, 124003 (2012).
Beenakker, C. Search for majorana fermions in superconductors. Annu. Rev. Cond. Mat. Phys. 4, 113–136 (2013).
Stanescu, T. D. & Tewari, S. Majorana fermions in semiconductor nanowires: fundamentals, modeling and experiment. J. Phys Condens. Matter 25, 233201 (2013).
Elliott, S. R. & Franz, M. Colloquium: Majorana fermions in nuclear, particle and solid-state physics. Rev. Mod. Phys. 87, 137–163 (2015).
Kitaev, A. Y. Unpaired majorana fermions in quantum wires. Phys. Usp. 44, 131 (2001).
Ivanov, D. A. Non-abelian statistics of half-quantum vortices in p-wave superconductors. Phys. Rev. Lett. 86, 268–271 (2001).
Nayak, C., Simon, S., Stern, A., Freedman, M. & Das Sarma, S. Non-abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083–1159 (2008).
Sarma, S. D., Freedman, M. & Nayak, C. Majorana zero modes and topological quantum computation. arXiv:1501.02813 (2015).
Mourik, V. et al. Signatures of majorana fermions in hybrid superconductor-semiconductor nanowire devices. Science 336, 1003–1007 (2012).
Deng, M. T. et al. Anomalous zero-bias conductance peak in a nb-insb nanowire-nb hybrid device. Nano Letters 12, 6414–6419 (2012).
Das, A. et al. Zero-bias peaks and splitting in an al-inas nanowire topological superconductor as a signature of majorana fermions. Nat Phys 8, 887–895 (2012).
Rokhinson, L. P., Liu, X. & Furdyna, J. K. The fractional a.c. josephson effect in a semiconductor-superconductor nanowire as a signature of majorana particles. Nat Phys 8, 795–799 (2012).
Finck, A. D. K., Van Harlingen, D. J., Mohseni, P. K., Jung, K. & Li, X. Anomalous modulation of a zero-bias peak in a hybrid nanowire-superconductor device. Phys. Rev. Lett. 110, 126406 (2013).
Churchill, H. O. H. et al. Superconductor-nanowire devices from tunneling to the multichannel regime: Zero-bias oscillations and magnetoconductance crossover. Phys. Rev. B 87, 241401 (2013).
Lee, E. J. H. et al. Spin-resolved andreev levels and parity crossings in hybrid superconductor-semiconductor nanostructures. Nat Nano 9, 79–84 (2014).
Hart, S. et al. Induced superconductivity in the quantum spin hall edge. Nat Phys 10, 638–643 (2014).
Pribiag, V. S. et al. Edge-mode superconductivity in a two-dimensional topological insulator. Nat Nano 10, 593–597 (2015).
Nadj-Perge, S. et al. Observation of majorana fermions in ferromagnetic atomic chains on a superconductor. Science 346, 602–607 (2014).
Fu, L. & Kane, C. L. Superconducting proximity effect and majorana fermions at the surface of a topological insulator. Phys. Rev. Lett. 100, 096407 (2008).
Lutchyn, R. M., Sau, J. D. & Das Sarma, S. Majorana fermions and a topological phase transition in semiconductor-superconductor heterostructures. Phys. Rev. Lett. 105, 077001 (2010).
Oreg, Y., Refael, G. & von Oppen, F. Helical liquids and majorana bound states in quantum wires. Phys. Rev. Lett. 105, 177002 (2010).
Takei, S., Fregoso, B. M., Hui, H.-Y., Lobos, A. M. & Das Sarma, S. Soft superconducting gap in semiconductor majorana nanowires. Phys. Rev. Lett. 110, 186803 (2013).
Jiang, L. et al. Majorana fermions in equilibrium and in driven cold-atom quantum wires. Phys. Rev. Lett. 106, 220402 (2011).
Kundu, A. & Seradjeh, B. Transport signatures of floquet majorana fermions in driven topological superconductors. Phys. Rev. Lett. 111, 136402 (2013).
Lindner, N. H., Refael, G. & Galitski, V. Floquet topological insulator in semiconductor quantum wells. Nat Phys 7, 490–495 (2011).
Diehl, S., Rico, E., Baranov, M. A. & Zoller, P. Topology by dissipation in atomic quantum wires. Nat Phys 7, 971–977 (2011).
Bardyn, C.-E. et al. Majorana modes in driven-dissipative atomic superfluids with a zero chern number. Phys. Rev. Lett. 109, 130402 (2012).
Pikulin, D. & Nazarov, Y. Topological properties of superconducting junctions. JETP Letters 94, 693–697 (2012).
Pikulin, D. I. & Nazarov, Y. V. Two types of topological transitions in finite majorana wires. Phys. Rev. B 87, 235421 (2013).
Kato, T. Perturbation theory for linear operators, vol. 132 (Springer, 1995).
Moiseyev, N. Non-Hermitian Quantum Mechanics, vol. 1 (Cambridge University Press, 2011).
Berry, M. Physics of nonhermitian degeneracies. Czechoslovak Journal of Physics 54, 1039–1047 (2004).
Heiss, W. D. The physics of exceptional points. J. Phys. A 45, 444016 (2012).
Heiss, W. D. Repulsion of resonance states and exceptional points. Phys. Rev. E 61, 929–932 (2000).
Bender, C. M. & Boettcher, S. Real spectra in non-hermitian hamiltonians having pt symmetry. Phys. Rev. Lett. 80, 5243–5246 (1998).
Klaiman, S., Günther, U. & Moiseyev, N. Visualization of branch points in PT-symmetric waveguides. Phys. Rev. Lett. 101, 080402 (2008).
Regensburger, A. et al. Parity-time synthetic photonic lattices. Nature 488, 167–171 (2012).
Guo, A. et al. Observation of pt-symmetry breaking in complex optical potentials. Phys. Rev. Lett. 103, 093902 (2009).
Liertzer, M. et al. Pump-induced exceptional points in lasers. Phys. Rev. Lett. 108, 173901 (2012).
Brandstetter, M. et al. Reversing the pump dependence of a laser at an exceptional point. Nat Commun 5, 10.1038/ncomms5034 (2014).
Peng, B. et al. Loss-induced suppression and revival of lasing. Science 346, 328–332 (2014).
Poli, C., Bellec, M., Kuhl, U., Mortessagne, F. & Schomerus, H. Selective enhancement of topologically induced interface states in a dielectric resonator chain. Nat Commun 6, 10.1038/ncomms7710 (2015).
Mandal, I. Exceptional points for chiral majorana fermions in arbitrary dimensions. EPL (Europhysics Letters) 110, 67005 (2015).
Malzard, S., Poli, C. & Schomerus, H. Topologically protected defect states in open photonic systems with non-hermitian charge-conjugation and parity-time symmetry. Phys. Rev. Lett. 115, 200402 (2015).
Potter, A. C. & Lee, P. A. Multichannel generalization of kitaev's majorana end states and a practical route to realize them in thin films. Phys. Rev. Lett. 105, 227003 (2010).
Wakatsuki, R., Ezawa, M. & Nagaosa, N. Majorana fermions and multiple topological phase transition in kitaev ladder topological superconductors. Phys. Rev. B 89, 174514 (2014).
Altland, A. & Zirnbauer, M. R. Nonstandard symmetry classes in mesoscopic normal-superconducting hybrid structures. Phys. Rev. B 55, 1142–1161 (1997).
Qi, X.-L. & Zhang, S.-C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011).
Sau, J. D., Lin, C. H., Hui, H.-Y. & Das Sarma, S. Avoidance of majorana resonances in periodic topological superconductor-nanowire structures. Phys. Rev. Lett. 108, 067001 (2012).
Datta, S. Electronic transport in mesoscopic systems (Cambridge university press, 1997).
Béri, B. Dephasing-enabled triplet andreev conductance. Phys. Rev. B 79, 245315 (2009).
Fidkowski, L., Alicea, J., Lindner, N. H., Lutchyn, R. M. & Fisher, M. P. A. Universal transport signatures of majorana fermions in superconductor-luttinger liquid junctions. Phys. Rev. B 85, 245121 (2012).
Pikulin, D. I., Dahlhaus, J. P., Wimmer, M., Schomerus, H. & Beenakker, C. W. J. A zero-voltage conductance peak from weak antilocalization in a majorana nanowire. New Journal of Physics 14, 125011 (2012).
Ioselevich, P. A. & Feigel'man, M. V. Tunneling conductance due to a discrete spectrum of andreev states. New Journal of Physics 15, 055011 (2013).
Klinovaja, J. & Loss, D. Composite majorana fermion wave functions in nanowires. Phys. Rev. B 86, 085408 (2012).
Lin, C. H., Sau, J. D. & Das Sarma, S. Zero-bias conductance peak in majorana wires made of semiconductor/superconductor hybrid structures. Phys. Rev. B 86, 224511 (2012).
Ben-Shach, G. et al. Detecting majorana modes in one-dimensional wires by charge sensing. Phys. Rev. B 91, 045403 (2015).
Fu, L. & Kane, C. L. Josephson current and noise at a superconductor/quantum-spin-hall-insulator/superconductor junction. Phys. Rev. B 79, 161408 (2009).
Law, K. T. & Lee, P. A. Robustness of majorana fermion induced fractional josephson effect in multichannel superconducting wires. Phys. Rev. B 84, 081304 (2011).
Jiang, L. et al. Unconventional josephson signatures of majorana bound states. Phys. Rev. Lett. 107, 236401 (2011).
Beenakker, C. Three "universal" mesoscopic josephson effects. In Transport phenomena in mesoscopic systems: proceedings of the 14th Taniguchi symposium, Shima, Japan, November 10–14, 1991 (Springer-Verlag, 1992).
Rainis, D. & Loss, D. Majorana qubit decoherence by quasiparticle poisoning. Phys. Rev. B 85, 174533 (2012).
Tarasinski, B., Chevallier, D., Hutasoit, J. A., Baxevanis, B. & Beenakker, C. W. J. Quench dynamics of fermion-parity switches in a josephson junction. Phys. Rev. B 92, 144306 (2015).
Zhen, B. et al. Spawning rings of exceptional points out of dirac cones. Nature 525, 354–358 (2015).
Sau, J. D., Clarke, D. J. & Tewari, S. Controlling non-abelian statistics of majorana fermions in semiconductor nanowires. Phys. Rev. B 84, 094505 (2011).
van Heck, B., Akhmerov, A. R., Hassler, F., Burrello, M. & Beenakker, C. W. J. Coulomb-assisted braiding of majorana fermions in a josephson junction array. New Journal of Physics 14, 035019 (2012).
Chiu, C.-K., Vazifeh, M. M. & Franz, M. Majorana fermion exchange in strictly one-dimensional structures. EPL (Europhysics Letters) 110, 10001 (2015).
Cayao, J., Prada, E., San-Jose, P. & Aguado, R. Sns junctions in nanowires with spin-orbit coupling: Role of confinement and helicity on the subgap spectrum. Phys. Rev. B 91, 024514 (2015).
Chang, W. et al. Hard gap in epitaxial semiconductor-superconductor nanowires. Nat Nano 10, 232–236 (2015).
Krogstrup, P. et al. Epitaxy of semiconductor-superconductor nanowires. Nat Mater 14, 400–406 (2015).
Heiss, W. Phases of wave functions and level repulsion. Eur. Phys. J. D 7, 1–4 (1999).
Dembowski, C. et al. Experimental observation of the topological structure of exceptional points. Phys. Rev. Lett. 86, 787–790 (2001).
Kim, S. W. Braid operation of exceptional points. Fortschritte der Physik 61, 155–161 (2013).
We are grateful to C.W.J. Beenakker, D. Pikulin, J. Sau and A. Soluyanov for illuminating discussions. We acknowledge the support of the European Research Council and the Spanish Ministry of Economy and Innovation through the JAE-Predoc Program (J. C.), Grants No. FIS2011-23713 (P. S.-J), FIS2012-33521 (R. A.), FIS2013-47328 (E.P.) and the Ramon y Cajal Program (E.P. and P.S.-J).
Instituto de Ciencia de Materiales de Madrid (ICMM), Consejo Superior de Investigaciones Científicas (CSIC), Cantoblanco, 28049, Madrid, Spain
Pablo San-Jose, Jorge Cayao & Ramón Aguado
Departamento de Física de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera (INC) and Condensed Matter Physics Center (IFIMAC), Universidad Autónoma de Madrid, Cantoblanco, 28049, Madrid, Spain
Elsa Prada
R.A. and P.S.J. contributed to the conceptual developments. J.C. and E.P. performed preliminary numerics and P.S.J. performed final numerics and prepared figures. All the authors discussed the results and provided scientific insight. R.A. and P.S.J. wrote the manuscript with input from the rest of authors. R.A. supervised the research.
CLRS Solutions 31.2 Greatest common divisor
Prove that equations $\text{(31.11)}$ and $\text{(31.12)}$ imply equation $\text{(31.13)}$.
(Omit!)
Compute the values $(d, x, y)$ that the call $\text{EXTENDED-EUCLID}(899, 493)$ returns.
$(29, -6, 11)$.
Prove that for all integers $a$, $k$, and $n$,
$\gcd(a, n) = \gcd(a + kn, n)$.
$\gcd(a, n) \mid \gcd(a + kn, n)$.
Let $d = \gcd(a, n)$; then $d \mid a$ and $d \mid n$. Since
$$(a + kn) \mod d = a \mod d + k \cdot (n \mod d) = 0$$
and $d \mid n$, we have
$$d \mid \gcd(a + kn, n),$$
that is,
$$\gcd(a, n) \mid \gcd(a + kn, n).$$
$\gcd(a + kn, n) \mid \gcd(a, n)$.
Now suppose $d = \gcd(a + kn, n)$; then $d \mid n$ and $d \mid (a + kn)$. Since
$$(a + kn) \mod d = a \mod d + k \cdot (n \mod d) = a \mod d = 0,$$
we have $d \mid a$. Since $d \mid a$ and $d \mid n$, we have
$$d \mid \gcd(a, n),$$
that is,
$$\gcd(a + kn, n) \mid \gcd(a, n).$$
Since
$$\gcd(a, n) \mid \gcd(a + kn, n)$$
and
$$\gcd(a + kn, n) \mid \gcd(a, n),$$
and both gcds are nonnegative, we conclude
$$\gcd(a, n) = \gcd(a + kn, n).$$
Rewrite $\text{EUCLID}$ in an iterative form that uses only a constant amount of memory (that is, stores only a constant number of integer values).
EUCLID(a, b)
    while b != 0
        t = b
        b = a % b
        a = t
    return a
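As a sanity check, here is a direct Python transcription of the iterative version (our sketch; Python's % matches the pseudocode's remainder on nonnegative inputs):

def euclid(a, b):
    # repeatedly replace (a, b) by (b, a mod b) until b == 0
    while b != 0:
        a, b = b, a % b
    return a

print(euclid(899, 493))   # prints 29, matching Exercise 31.2-2

The tuple assignment stores only two integers at any time, so the memory used is constant.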
If $a > b \ge 0$, show that the call EUCLID$(a, b)$ makes at most $1 + \log_\phi b$ recursive calls. Improve this bound to $1 + \log_\phi(b / \gcd(a, b))$.
Suppose that the call EUCLID$(a, b)$ makes $k \ge 1$ recursive calls. By Lemma 31.10,
$$b \ge F_{k + 1} \approx \phi^{k + 1} / \sqrt 5,$$
so
$$k + 1 < \log_\phi \sqrt 5 + \log_\phi b \approx 1.67 + \log_\phi b,$$
which gives
$$k < 0.67 + \log_\phi b < 1 + \log_\phi b.$$
For the improved bound: since $d \cdot a \mod (d \cdot b) = d \cdot (a \mod b)$, the call $\text{EUCLID}(d \cdot a, d \cdot b)$ makes exactly the same number of recursive calls as $\text{EUCLID}(a, b)$. Therefore, letting $d = \gcd(a, b)$ and applying the bound above to $\text{EUCLID}(a / d, b / d)$, whose second argument is $b' = b / \gcd(a, b)$, we obtain $k < 1 + \log_\phi(b') = 1 + \log_\phi(b / \gcd(a, b))$.
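The bound is easy to probe numerically; the sketch below (ours) counts division steps, each of which corresponds to one recursive call of the recursive version:

import math

PHI = (1 + math.sqrt(5)) / 2

def euclid_calls(a, b):
    # count loop iterations = number of recursive calls made by EUCLID(a, b)
    calls = 0
    while b != 0:
        a, b = b, a % b
        calls += 1
    return a, calls

d, k = euclid_calls(899, 493)
print(k, "<", 1 + math.log(493 / d, PHI))   # 5 < 1 + log_phi(493/29), about 6.9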
What does $\text{EXTENDED-EUCLID}(F_{k + 1}, F_k)$ return? Prove your answer correct.
If $k$ is odd, it returns $(1, F_{k - 2}, -F_{k - 1})$; if $k$ is even, it returns $(1, -F_{k - 2}, F_{k - 1})$.

To verify the odd case, use $F_{k - 2} = F_k - F_{k - 1}$ and $F_{k + 1} = F_k + F_{k - 1}$ to find $F_{k - 1} F_k - F_{k - 2} F_{k + 1} = F_{k - 1} F_{k + 1} - F_k^2 = (-1)^k$ (Cassini's identity), so $F_{k - 2} F_{k + 1} - F_{k - 1} F_k = (-1)^{k + 1} = 1$ when $k$ is odd; the even case follows by negating both coefficients. Alternatively, both cases follow by induction on $k$: $\lfloor F_{k + 1} / F_k \rfloor = 1$ for $k \ge 3$, so the call reduces to $\text{EXTENDED-EUCLID}(F_k, F_{k - 1})$, and the return rule $(d, y, x - y)$ converts the coefficients for $k - 1$ into those for $k$.
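This closed form is easy to check numerically (our sketch; extended_euclid mirrors the pseudocode of the next exercise's solution, with // as floor division):

def extended_euclid(a, b):
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_euclid(b, a % b)
    return (d, y, x - (a // b) * y)

F = [0, 1]
for _ in range(14):
    F.append(F[-1] + F[-2])   # F[0..15], with F[1] = F[2] = 1

for k in range(2, 13):
    expected = (1, F[k - 2], -F[k - 1]) if k % 2 else (1, -F[k - 2], F[k - 1])
    assert extended_euclid(F[k + 1], F[k]) == expected
print("closed form verified for k = 2..12")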
Define the $\gcd$ function for more than two arguments by the recursive equation $\gcd(a_0, a_1, \cdots, a_n) = \gcd(a_0, \gcd(a_1, a_2, \cdots, a_n))$. Show that the $\gcd$ function returns the same answer independent of the order in which its arguments are specified. Also show how to find integers $x_0, x_1, \cdots, x_n$ such that $\gcd(a_0, a_1, \ldots, a_n) = a_0 x_0 + a_1 x_1 + \cdots + a_n x_n$. Show that the number of divisions performed by your algorithm is $O(n + \lg (max \{a_0, a_1, \cdots, a_n \}))$.
The $\gcd$ function returns the same answer regardless of argument order because $\gcd(a_0, a_1, \ldots, a_n)$ is the largest integer dividing every one of $a_0, a_1, \ldots, a_n$, a characterization that does not depend on the order of the arguments. For the coefficients, apply $\text{EXTENDED-EUCLID}$ repeatedly: first write
$$\gcd(a_0, \gcd(a_1, a_2, \cdots, a_n)) = a_0 \cdot x + \gcd(a_1, a_2, \cdots, a_n) \cdot y,$$
then expand
$$\gcd(a_1, \gcd(a_2, a_3, \cdots, a_n)) = a_1 \cdot x' + \gcd(a_2, a_3, \cdots, a_n) \cdot y';$$
substituting the second equation into the first shows that the coefficient of $a_1$ is $y \cdot x'$, and continuing inward yields every $x_i$ as one $x$-coefficient times a prefix product of $y$-coefficients. For the division count: by Exercise 31.2-5, the call on $(a_i, g_{i + 1})$, where $g_i = \gcd(a_i, \ldots, a_n)$, makes at most $1 + \log_\phi(g_{i + 1} / g_i)$ recursive calls; summing over $i$ telescopes to $O(n + \lg(\max\{a_0, a_1, \cdots, a_n\}))$ divisions.
EXTENDED-EUCLID(a, b)
    if b == 0
        return (a, 1, 0)
    (d, x, y) = EXTENDED-EUCLID(b, a % b)
    return (d, y, x - (a / b) * y)
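In Python (our transcription; // is floor division, playing the role of the pseudocode's a / b on nonnegative integers):

def extended_euclid(a, b):
    # returns (d, x, y) with d = gcd(a, b) = a*x + b*y
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_euclid(b, a % b)
    return (d, y, x - (a // b) * y)

print(extended_euclid(899, 493))   # (29, -6, 11), as computed in Exercise 31.2-2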
EXTENDED-EUCLID-MULTIPLE(a)
    if a.length == 1
        return (a[0], [1])
    g = a[a.length - 1]
    xs = [1] * a.length
    ys = [0] * a.length
    for i = a.length - 2 downto 0
        (g, xs[i], ys[i + 1]) = EXTENDED-EUCLID(a[i], g)
    m = 1
    for i = 1 to a.length - 1
        m *= ys[i]
        xs[i] *= m
    return (g, xs)
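A Python version (our sketch) that also verifies the certificate sum(a[i] * xs[i]) == g on a small example:

def extended_euclid(a, b):
    if b == 0:
        return (a, 1, 0)
    d, x, y = extended_euclid(b, a % b)
    return (d, y, x - (a // b) * y)

def extended_euclid_multiple(a):
    n = len(a)
    if n == 1:
        return a[0], [1]
    g = a[-1]
    xs, ys = [1] * n, [0] * n
    for i in range(n - 2, -1, -1):   # fold gcds from right to left
        g, xs[i], ys[i + 1] = extended_euclid(a[i], g)
    m = 1
    for i in range(1, n):            # xs[i] picks up the product y_1 * ... * y_i
        m *= ys[i]
        xs[i] *= m
    return g, xs

a = [30, 42, 70, 105]
g, xs = extended_euclid_multiple(a)
print(g, xs, sum(ai * xi for ai, xi in zip(a, xs)))   # gcd is 1, and the sum is 1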
Define $\text{lcm}(a_1, a_2, \ldots, a_n)$ to be the least common multiple of the $n$ integers $a_1, a_2, \ldots, a_n$, that is, the smallest nonnegative integer that is a multiple of each $a_i$. Show how to compute $\text{lcm}(a_1, a_2, \ldots, a_n)$ efficiently using the (two-argument) $\gcd$ operation as a subroutine.
GCD(a, b)
    if b == 0
        return a
    return GCD(b, a % b)

LCM(a, b)
    return a / GCD(a, b) * b

LCM-MULTIPLE(a)
    l = a[0]
    for i = 1 to a.length - 1
        l = LCM(l, a[i])
    return l
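In Python (our sketch), math.gcd and functools.reduce make the fold explicit; dividing before multiplying keeps the intermediate values small:

from functools import reduce
from math import gcd

def lcm(a, b):
    return a // gcd(a, b) * b   # divide first to avoid large intermediates

def lcm_multiple(nums):
    return reduce(lcm, nums)

print(lcm_multiple([4, 6, 15]))   # 60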
Prove that $n_1$, $n_2$, $n_3$, and $n_4$ are pairwise relatively prime if and only if
$\gcd(n_1n_2,n_3n_4) = \gcd(n_1n_3, n_2n_4) = 1.$
More generally, show that $n_1, n_2, \ldots, n_k$ are pairwise relatively prime if and only if a set of $\lceil \lg k \rceil$ pairs of numbers derived from the $n_i$ are relatively prime.
Suppose $\gcd(n_1 n_2, n_3 n_4) = 1$, so that $n_1 n_2 x + n_3 n_4 y = 1$ for some integers $x$ and $y$. Then $n_1 (n_2 x) + n_3 (n_4 y) = 1$, so $n_1$ and $n_3$ are relatively prime; the same grouping shows that $n_1$ and $n_4$, $n_2$ and $n_3$, and $n_2$ and $n_4$ are all relatively prime. Combined with $\gcd(n_1 n_3, n_2 n_4) = 1$, which similarly makes the pairs $n_1, n_2$ and $n_3, n_4$ relatively prime, all six pairs are relatively prime. Conversely, if the four numbers are pairwise relatively prime, then no prime divides two of them, so $\gcd(n_1 n_2, n_3 n_4) = \gcd(n_1 n_3, n_2 n_4) = 1$.
More generally: in the first round, divide the elements into two sets of equal size and compute the product of each set; if the two products are relatively prime, then every element of one set is relatively prime to every element of the other. In each subsequent round, split every current set into two halves, take the product of all the 'first' halves and the product of all the 'second' halves, and again test that the two products are relatively prime; this certifies coprimality between any two elements separated in that round. Every pair of indices is separated in at least one round, and each round halves the subsets, so $\lceil \lg k \rceil$ pairs of products suffice.
To choose the subsets efficiently, in the $j$th iteration we can divide the numbers according to the value of the $j$th bit of their index.
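A Python sketch (ours) of this bit-partition test; it performs ceil(lg k) gcd computations on products and returns True exactly when the numbers are pairwise relatively prime, since any two distinct indices differ in at least one bit:

from math import gcd, prod

def pairwise_coprime(nums):
    k = len(nums)
    rounds = max(1, (k - 1).bit_length())   # ceil(lg k) for k >= 2
    for j in range(rounds):
        # split by the j-th bit of the index; an empty product defaults to 1
        p0 = prod(n for i, n in enumerate(nums) if not (i >> j) & 1)
        p1 = prod(n for i, n in enumerate(nums) if (i >> j) & 1)
        if gcd(p0, p1) != 1:
            return False
    return True

print(pairwise_coprime([3, 5, 7, 11]))   # True
print(pairwise_coprime([3, 5, 7, 21]))   # False: 3 and 21 share the factor 3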
The impact of alterations in lignin deposition on cellulose organization of the plant cell wall
Jiliang Liu1,
Jeong Im Kim2,
Joanne C. Cusumano2,
Clint Chapple2,
Nagarajan Venugopalan3,
Robert F. Fischetti3 &
Lee Makowski1,4
Biotechnology for Biofuels volume 9, Article number: 126 (2016)
Coordination of the synthesis and assembly of the polymeric components of cell walls is essential for plant growth and development. Given the degree of co-mingling and cross-linking among cell wall components, cellulose organization must be dependent on the organization of other polymers such as lignin. Here we seek to identify aspects of that codependency by studying the structural organization of cellulose fibrils in stems from Arabidopsis plants harboring mutations in genes encoding enzymes involved in lignin biosynthesis. Plants containing high levels of G-lignin, S-lignin, H-lignin, aldehyde-rich lignin, and ferulic acid-containing lignin, along with plants with very low lignin content, were grown and harvested, and longitudinal sections of stem were prepared and dried. Scanning X-ray microdiffraction was carried out using a 5-micron beam that moved across the sections in 5-micron steps, and complete diffraction patterns were collected at each raster point. Approximately 16,000 diffraction patterns were analyzed to determine cellulose fibril orientation and order within the tissues making up the stems.
Several mutations, most notably (1) down-regulation of cinnamoyl CoA reductase, which leads to cell walls deficient in lignin, and (2) a defect in cinnamic acid 4-hydroxylase, which greatly reduces lignin content, produced a significant decrease in the proportion of oriented cellulose fibrils in the cell wall. Distinctions between tissues were maintained in all variants, and even in plants exhibiting dramatic changes in cellulosic order the trends between tissues (where apparent) were generally maintained. The resilience of cellulose to degradative processes was investigated by carrying out the same analysis on samples stored in water for 30 days prior to data collection. This treatment led to significant loss of cellulosic order in plants rich in aldehyde or H-lignin, less change in wild type, and essentially no change in samples with high levels of G- or S-lignin.
These studies demonstrate that changes in lignin biosynthesis lead to significant disruption in the orientation and order of cellulose fibrils in all tissues of the stem. These dramatic phenotypic changes, in mutants with lignin rich in aldehyde or H-units, correlate with the impact the mutations have on the enzymatic degradation of the plant cell wall.
Plant cell walls are complex structures composed largely of high molecular weight polysaccharides, highly glycosylated proteins, and lignin [1]. Little is known about how the assembly of the different polymers depends on one another. That there are codependencies among the synthesis and assembly of different polymeric species seems obvious given the degree of co-mingling and cross-linking intrinsic to the cell wall, but the details of those dependencies are largely unknown. Examination of the phenotype of three mutations that cause collapse of mature xylem cells in inflorescence stems of Arabidopsis [2] suggested that a normal pattern of cellulose deposition may be required for assembly of lignin. Disruptions in lignin metabolism that result in changes in the final lignin composition of cell walls may lead to changes in both microscopic and macroscopic morphologies as well as changes in digestibility [3, 4].
Dissection of the lignin biosynthetic pathways [5] has been enabled by the identification of numerous Arabidopsis mutants with altered lignin compositions and disrupted cell wall microstructure. Lignin deficiency may be associated with dwarfism, breakdown of vascular tissues, and alteration of microstructure, but the relationship between lignin content, plant size, and recalcitrance is complex. For instance, disruption of the Mediator complex subunits MED5a (med5a) and MED5b (med5b) rescues the stunted growth, lignin deficiency, and changes in gene expression seen in the phenylpropanoid pathway mutant reduced epidermal fluorescence8 (ref8) without restoring the synthesis of guaiacyl and syringyl lignin subunits [6]. The lignin in the resulting med5a med5b ref8 triple mutant is almost entirely H-lignin, and the yield of glucose on treatment with a mixture of commercial cellulase and β-glucosidase enzymes was significantly greater than in wild type [6]. The increased susceptibility to enzymatic digestion is suggestive of molecular-level alterations that increase access of enzymes to cellulose within these plants and suggests that in addition to changes in lignin composition and content, cellulose structures may be altered.
Multiscale imaging using light and electron microscopy can provide information about changes in the overall organization of the cell wall but limited information about changes in the molecular-level architecture. X-ray scattering represents a powerful approach to studies of alterations in the molecular-level structure and organization of cellulosic material in plant cell walls [7–9]. X-ray patterns derived from intact tissues such as the Arabidopsis stem contain the superposition of scattering from all constituents. Interpretation of the scattering is usually limited to insights about the organization of constituents that exhibit well-characterized scattering patterns that can be readily identified and quantitated in the context of a varying background made up of scattering from all other components. X-ray scattering from cellulose is very well characterized and readily quantified. Conversely, scattering from lignin, hemicellulose, and other cell wall components is not well defined and usually constitutes diffuse, unoriented scattering that is treated as background beneath the better defined scattering from cellulose. Consequently, interpretation of scattering from Arabidopsis stems is usually confined to conclusions about alterations in the orientation and organization of the cellulose.
Direct analysis of the impact of mutations on cellulosic structures is complicated by the architectural heterogeneity among different tissues, which may react differently to changes in lignin composition and deposition. Tissues in the Arabidopsis stem may be only a few tens of microns across. Any method that averages structural information over larger areas runs the risk of missing tissue-specific variables. Differences in cellulosic structures in these tissues can be distinguished by the use of an X-ray microbeam a few microns in diameter [9–12]. Analysis of the cellulosic structure within Arabidopsis stem has been carried out by scanning X-ray microdiffraction (SXMD) using a 5-micron diameter X-ray beam scanned across a thin section of stem in 5-micron steps, collecting a full diffraction pattern at every pixel [9]. This strategy proved effective at differentiating the organization of cellulose within the different tissues making up the wild-type stem, providing information on the orientation and size of microfibrils and the relative abundance of amorphous materials and crystalline (fibrillar) cellulose. Here, we describe experiments using this same strategy to characterize differences in cellulose organization within plants harboring mutations in lignin biosynthetic genes. The results demonstrate that cellulose organization in Arabidopsis is often significantly impacted by changes in lignin composition and synthesis.
We collected SXMD data on wild-type Arabidopsis and seven variants, each with a well-characterized alteration in some aspect of lignin biosynthesis. Lignin in wild-type plants typically contains primarily guaiacyl (G) and syringyl (S) monomeric units, with H-monomers accounting for <2 % of total lignin. Variants studied here are as follows:
[fah1-2] A ferulic acid hydroxylase 1 (fah1-2) mutant that is blocked at ferulate 5-hydroxylase (F5H) and fails to deposit S-lignin [13, 14], resulting in a plant that deposits primarily G-lignin and whose cell walls are less digestible than wild type or walls high in S-lignin [4].
[C4H::F5H fah1-2] An F5H-overexpressing transgenic plant, C4H::F5H fah1-2 [15], that deposits mostly S-lignin and grows like wild type but exhibits a substantially greater susceptibility to cellulases than wild type [4, 15, 16]. Maleic acid-treated material from these plants exhibits microstructural breakdown with fiber cells separated from one another, suggesting an origin of their high susceptibility to digestion [4].
[med5a med5b ref8] A triple mutant, med5a med5b ref8, containing predominantly H-lignin [6].
[cadc cadd] A plant harboring lesions in both cadc and cadd, leading to the incorporation of coniferyl and sinapyl aldehydes in place of their corresponding alcohols [3, 17]. These plants produce about 50 % of the level of total lignin seen in wild-type plants and display a limp floral stem and collapse of xylem elements, in spite of total cellulose not being significantly different from wild type.
[cadc cadd fah1] A triple mutant, cadc cadd fah1, that synthesizes lignin dominated by guaiacyl-substituted aldehyde units and contains total lignin about half that of wild type [3].
[ccr] A plant with down-regulation of cinnamoyl CoA reductase 1 (ccr), the first enzyme specific to the biosynthetic pathway leading to monolignols; this genetic perturbation gives rise to cell walls rich in wall-bound ferulic esters and improves cell wall digestibility [18–20].
[ref3-2] A ref3-2 mutant with a missense mutation in the gene encoding cinnamic acid 4-hydroxylase (C4H) resulting in reduced lignin deposition and altered lignin monomer content. Plants harboring this defect exhibit collapse of vessel elements [21].
Here we investigate the molecular architecture of cellulose within thin sections of stems from these variants to determine whether the alterations in lignin biosynthesis lead to changes in the organization, orientation, or order of cellulose fibrils. These studies indicate that the structure of cellulose fibrils may be disrupted in ref3-2 and ccr, and that the digestibility of plants harboring lesions in cadc cadd, cadc cadd fah1, and med5a med5b ref8 may be greater than that of wild-type plants.
Scanning microdiffraction of longitudinal sections of Arabidopsis stem
Scanning X-ray microdiffraction of longitudinal sections of stems from wild type and seven mutants of Arabidopsis was carried out in order to assess fibril orientation and organization in the tissues of the stem. Approximately 16,000 diffraction patterns were collected on 32 samples. Samples were harvested and stems microtomed into 100-micron-thick longitudinal sections and stored in water (either 3 days or 30 days) until just prior to data collection, at which point they were dried in air for 24 h. Two repeats of fresh and stored samples, analyzed in experiments carried out over 2 years, gave essentially identical results.
A 5-micron X-ray beam was scanned over a grid with 5-micron step size, and a diffraction pattern was collected at each grid point. The scattering angles subtended by the patterns ranged from a spacing of ~100 Å at the beam stop (small-angle regime) to ~2 Å at the detector edge (wide-angle regime). In most cases, a 300 mm sample to detector distance was used. Figure 1 provides an example of data collected from a thin section of wild-type Arabidopsis stem. Samples were aligned using a coaxial optical microscope with a hole down the optical axis to accommodate the X-ray beam thereby allowing precision alignment of the beam to select positions on the sample. Figure 1a shows an optical micrograph of the wild-type sample with a blue rectangle marking the position of a 160 × 3 grid of positions from which diffraction patterns were taken. Figure 1c contains 57 diffraction patterns selected to represent data from the cortex, xylem, and the region adjacent to the pith.
Scanning X-ray microdiffraction data from a longitudinal section of Arabidopsis stem. a Optical image of the stem with the region scanned marked as a blue rectangle. The grid includes three rows and 160 columns, each grid point being 5-microns square, making the grid 800 × 15 microns. A complete microdiffraction pattern was taken at each grid point. b Fiber content, microfibril angle, and axial coherence length as calculated from the diffraction patterns collected by scanning the rectangle in (a) and plotted as a function of position across the stem. Data from all three rows are plotted. c Enlargement of microdiffraction patterns from selected regions of the scan. Selected regions correspond roughly to the cortex, xylem, and close to pith
Software was developed to automatically extract specific structural information from these patterns (see "Methods" section for details). A number of structural parameters were extracted from each diffraction pattern and their variation as a function of distance across the stem mapped. Figure 1b includes the distribution of fiber content, microfibril angle, and axial coherence across the sample pictured in Fig. 1a. Each of these parameters is described in more detail below. The distributions represent a mapping across the entire breadth of the stem, starting with the epidermis, moving to the vascular tissues and then the pith, and continuing to reverse this sequence to the other side of the stem. The organization of tissues in Arabidopsis stems is not circularly symmetric about the stem axis, and within the 100-micron thickness of the sections used, there is an occasional superposition of regions from different tissues. The cambium, the region between xylem and phloem, appears to correspond to the region with the greatest fiber content, consistent with our earlier observations [9]. From the mapping of specific properties, representative parameters were extracted to ease the comparison of structural order present in the different variants.
Oriented fiber content
We have observed that the proportion of material exhibiting orientation varies widely among the mutants studied here (as well as among different tissues within individual stems). In order to quantitate this variation, we chose to calculate a measure of the proportion of total material made up of oriented cellulose fibrils. As implied by the results summarized in Fig. 12, not all cellulosic material is well oriented. But the vast majority of oriented scattering can be attributed to cellulose fibrils. Consequently, the ratio of oriented intensity to disoriented intensity provides a relative measure of the extent of structural organization of cellulose fibrils within the scattering volume. As shown in Fig. 1b, oriented fibril content is the largest in the vascular regions, particularly the xylem, and essentially zero in the pith and epidermis where no scattering attributable to oriented, fibrillar cellulose is observed.
Variations of oriented fiber content across the stem for each mutant and wild type are shown in Additional file 1: Figures S1–S8. In order to compare the proportion of oriented fibrils in the different variants, we focused on the cambium, the region between xylem and phloem, which exhibits the highest content of oriented fibrils [9]. Figure 2 shows a bar graph comparing the maximum values of oriented fiber content observed in each of the eight samples (using average values rather than maxima results in a plot exhibiting the same trends). Figure 2 also shows that only ccr and ref3-2 exhibit a significant reduction of fiber content. ref3-2 and ccr are the samples with reduced lignin content [21], suggesting that assembly of cellulose fibrils into oriented structures parallel to the stem relies on the presence of normal levels of lignin. ccr has cell walls with high levels of ferulic esters and is more digestible than wild type.
Comparison of the highest oriented fiber content observed in each sample. The blue bars correspond to the dried samples 3 days after harvest
Crystalline order in the cellulose fibrils
The positions of the (1 1 0)/(1 −1 0) and (2 0 0) reflections of cellulose provide a measure of the lattice dimensions in the directions perpendicular to the fibril axis. The widths of these reflections are determined by a combination of crystallite breadth (number of cellulose chains in the fibril), cross-sectional shape, degree of order in the packing of cellulose chains, and heterogeneity of crystal lattice constants. The difficulty in separating these variables is a key reason for the continued debate over the number of cellulose molecules making up an elementary fibril. Nevertheless, the sharpness of these peaks provides a measure of the homogeneity of crystalline order within the cellulose fibrils averaged over the scattering volume. Figure 3 shows the traces of the (1 1 0)/(1 −1 0) and (2 0 0) reflections in scattering from the different lignin mutants. Each trace comes from the diffraction pattern exhibiting the highest fiber content for the corresponding mutant, compared here for the eight samples analyzed 3 days after harvesting. The (2 0 0) reflection is broadest for ccr and ref3-2, the samples with reduced lignin content. Analysis of Raman spectra has shown that either disorganization or the size of crystalline cellulose fibrils within the plant cell wall can lead to variation in the half-width of the (2 0 0) reflection [22]. Whether this is due to intrinsic disorder within individual fibrils or to a structurally heterogeneous population of fibrils cannot be determined with existing data. However, these samples also have the lowest crystalline cellulose content, suggesting that lowered lignin content leads to a decrease both in the fraction of crystalline cellulose and in the degree of order intrinsic to the crystalline cellulose fibrils.
Traces of the intensities of the strongest equatorial reflections from cellulose Iβ in the eight Arabidopsis variants studied here. In each case, the trace corresponds to that position in the sample that exhibited the highest proportion of oriented cellulose fiber content. Intensities were normalized over the range 0.1–0.3 Å−1 for comparison
Microfibril angle
X-ray patterns from many cellulose-containing plants exhibit a double orientation: they appear as two diffraction patterns superimposed and rotated relative to one another. The explanation for this observation is the helical winding of cellulose about plant cells, which leads to different orientations of the cellulose on the near and far sides of a cell [23], as diagrammed in Fig. 4. Microfibril angle was calculated by transforming all intensity onto a polar coordinate system and identifying angles of maximum intensity, as detailed in the "Methods" section. Because of the symmetry of scattering from cellulose, two independent measures of microfibril angle are obtained for each pattern, and these were averaged. In many cases, the diffraction was almost circularly symmetric and no microfibril angle could be determined. Figure 1b shows the distribution of microfibril angle for the wild-type sample. Figure 5 shows that the helical winding of cellulose in the cortex and outer part of the vascular tissue has relatively constant microfibril angles of about 15°, but the angle gradually increases across the inner part of the vascular bundles and fibers to about 30° immediately adjacent to the pith. The lower microfibril angle is consistent with observations from confocal Raman microscopy by Gielinger [24] and Mateu et al. [25], who reported that the orientation of cellulose tends to be parallel to the direction of elongation in the outer xylem and cortex. Orientation was seldom observed in the pith, precluding an estimate of microfibril angle. For many of the mutants with severe reductions in lignin content, orientation is so poor as to preclude measurement of microfibril angle. C4H::F5H fah1-2 mutants exhibit a microfibril angle distribution similar to that of fresh samples. The trends of microfibril angle for the other samples are less clear. See Additional file 1 for details.
Depiction of cellulose fibrils helically wrapped around a plant cell (left). Projection of cellulose fibrils from front and back produce a double orientation (center) that expresses itself in diffraction patterns as a split or double diffraction pattern (right)
The microfibril angle increases from the periphery of the stem to the center, reflecting a change in the way cellulose fibrils coil around cells. Microfibril angle is the greatest in regions of lowest oriented fibril content as can be seen in the plots in the upper left. The grid within the optical image shows the region scanned and identifies positions with fibril orientation corresponding approximately to the diagrams at the bottom of the figure
Microfibril angle tends to be inversely related to oriented fiber content within individual samples. In a comparison of wild type and the lignin variants, there is no well-defined correlation between microfibril angle and fiber content. Measurement of microfibril angle is in some cases precluded for mutants with severe reduction of lignin content. As shown in Figs. 4 and 5, the microfibril angle measures the tilt of fibrils relative to the longitudinal direction of the cell; a larger microfibril angle indicates that cellulose fibrils are arranged more transversely within the cell wall. For instance, only small portions of the ref3-2 mutant stem exhibit diffraction patterns with the split reflections required to measure microfibril angle. The mean microfibril angle in the xylem of ref3-2 is about 25°, which is greater than that for wild type and C4H::F5H fah1-2 and suggests that the transverse assembly of cellulose is altered in ref3-2.
It has been well established that in the interfascicular and vascular tissues of the Arabidopsis stem, the deposition of lignin decreases as one approaches the pith [26]. Decreasing lignin concentration correlates with decreased fiber content and increased microfibril angle, but to what extent there are causal relationships among these three variables is unclear.
Axial coherence length
A key measure of the crystallinity of cellulose is its axial coherence length. In the axial direction, a useful measure of the degree of imperfection is the coherence length as estimated by the breadth of the (0 0 4) reflection in fiber diffraction patterns. Coherence length may vary among the tissues of the stem, as detailed in the figures in Additional file 1. For simplicity, we chose to compare the distributions of axial coherence lengths observed for each of the lignin mutants, as shown in the histograms in Fig. 6. Figure 7 provides a comparison of the maximum coherence lengths observed for each of the samples. In fresh samples, ref3-2 and the samples with high contents of aldehydes (cadc cadd fah1 and cadc cadd) appear to have somewhat lower coherence lengths, with fah1-2 and C4H::F5H fah1-2 exhibiting slightly higher average coherence lengths than wild type.
a Histogram of axial coherence length for samples dried fresh. The axial coherence length differs across the stem, tending to be largest in the vascular tissues. b Gaussian curves fit to each of the histograms in a suppress the effect of random variations and facilitate visual comparisons among the samples
Comparison of maximum coherence lengths for each of the eight samples. This is a compilation of the peak positions of the smoothed curves shown in Fig. 6b
Coherence length of a cellulose fibril may be influenced by the cross-links it makes with other cell wall constituents. Cross-links may not be easily accommodated by a highly regular crystalline structure. This suggests that coherence length might provide a measure of the degree to which other constituents disrupt the regular ordering of fibrils. Our data suggest that interactions of cellulose with the lignin of fah1-2 and C4H::F5H fah1-2 plants disrupt cellulose order somewhat less than the interactions in wild type, while the degree of crystallinity remains unchanged (Fig. 2). Aldehyde-containing lignins have lower coherence lengths, suggesting potentially greater interactions, again without altering crystallinity. ref3-2, with lower lignin content, might be expected to place fewer constraints on the organization than wild type, but lowered crystallinity could offset that, perhaps leading to the observed lower coherence length. For ccr, the balance between lowered crystallinity and lowered constraints results in no change in observed coherence length.
In principle, we could also calculate the coherence length for the unoriented fraction of cellulose using curves similar to those in Fig. 13. In practice, the (0 0 4) reflections in these traces are weak and broad, making accurate measurement of coherence length difficult, while clearly indicating that the coherence length is significantly less for the unoriented fraction than for the oriented fraction of cellulose. In all likelihood, this reflects the greater curvature of fibrils expected in the unoriented fraction.
Packing of cellulose fibrils
The small angle region of the diffraction patterns collected here provides structural information about features ranging from 25 to 100 Å in size. The intensity distribution in this region corresponds to the scattering from individual cellulose fibrils. When the fibrils are arranged in an organized fashion, regularly spaced side-to-side, the intensity is modulated by an 'interference function' that provides information on the spacing of fibrils in the material [7, 8, 27, 28]. Figure 8 shows the intensity distribution in the small angle region of a wild-type sample, an enlargement of the small angle region of exposure 61, and equatorial traces for exposures 15, 30, 45, and 61. The trace through the exposure (taken from a diffraction pattern of a region immediately adjacent to the pith) shows a modulation of the small angle scattering intensity with a peak at 1/d ~0.017 Å−1, suggesting that the fibrils are spaced with a nearest neighbor distance of approximately 60 Å. The observation of this interference near the pith is unexpected because this is the region of the stem with the lowest (observable) oriented fibril content. If the region were homogeneous, the spacing observed would imply an oriented fibril content of at least 25 %, far higher than we observe. Therefore, the region must be highly heterogeneous, with the small fraction of oriented fibrils well ordered in spatially confined regions.
Modulation of small angle scattering by interference due to packing of cellulose fibrils was observed to be strongest in the region near the pith. a Optical image of the stem; the grid containing 3 × 160 points corresponds to the position of the microdiffraction scan. b The distribution of fiber content and microfibril angle across the stem. c Diffraction pattern collected at position 61 close to the pith exhibits a strong modulation of small angle scattering (inset) attributed to partially ordered packing of cellulose fibrils. d Comparison of intensity in the small angle region for four positions from epidermis to pith, exhibiting the modulation at ~0.017 Å−1 due to interference
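As a quick consistency check on the quoted numbers (using only the reciprocal relationship between the spacing and the peak position):

$$d \approx \frac{1}{0.017\ \text{Å}^{-1}} \approx 59\ \text{Å},$$

in line with the ~60 Å nearest neighbor distance stated above.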
Impact of water storage on molecular structure of plant cell wall
In addition to collecting SXMD data on samples harvested 3 days prior to data collection, we collected SXMD data on samples that were stored in water for 30 days at room temperature prior to data collection. Our observations show that water storage contributes to observable structural variations in a lignin-dependent manner.
Impact on oriented fiber content
Storage reduced the observed fiber content of wild type only slightly, and the reduction of fiber content in C4H::F5H fah1-2 and fah1-2 was negligible. However, storage significantly lowered the levels of crystalline cellulose in the high aldehyde samples, cadc cadd and cadc cadd fah1, and in the med5a med5b ref8 sample, even though their fiber content was comparable to wild type in fresh samples (shown by red bars in Fig. 9). These results were replicated on two sets of samples grown, harvested, and analyzed independently at different times. These observations suggest that lignin organization may be important for protecting cellulose from degradation that may occur in aqueous environments over time. Other samples exhibited less change on storage. Interestingly, Fig. 9 shows that, although ccr samples stored in water for 3 days exhibit lower fiber content than wild type, ccr does not show a decrease in oriented fiber content after storage. No variant exhibited a significant increase in oriented fibril content over wild type.
Comparison of the highest oriented fiber content observed in each sample. The blue bars correspond to the samples dried 3 days after harvest. The red bars correspond to the samples dried after storage in water for 30 days. Experiments were replicated using two sets of samples grown, harvested, and analyzed months apart, reproducing all the trends represented here
Impact on axial coherence length
Figure 10 shows that exposure to water for 30 days results in most cases in a decrease in coherence length. ref3-2 appears to be the exception: its very low coherence length appears to increase on storage in water, perhaps through the relaxation of cross-linking constraints on its cellulosic structures. Interestingly, there is no direct correlation between the coherence length and fiber content for the lignin mutants studied here. This may be the result of two competing factors: cellulose crystallinity, which should correlate with increased coherence length, and cross-links to other constituents, which should correlate with decreased coherence length.
a Histogram of axial coherence length for samples dried fresh and after 30 days in water. The axial coherence length differs across the stem, tending to be largest in the vascular tissues. b Gaussian curves fit to each of the histograms in a suppress the effect of random variations and facilitate visual comparisons among the samples
A complex interdependency of cellulose and lignin assembly into the plant cell wall seems essential to production of the intricate nanoscale architecture required for cell wall integrity, strength, and resiliency. Nevertheless, a clear demonstration of this interdependency is complicated by the possibility of indirect effects and individual variations. By querying all stem tissues and multiple individual plants with ~16,000 diffraction patterns, we sought to minimize the potential impact of individual variation and tissue-specific effects. The systematic and reproducible differences we have observed in cellulose structure among Arabidopsis variants with altered lignin metabolism provide compelling evidence that lignin contributes to the organization of cellulose in the plant cell wall and that the specifics of lignin composition impact resilience to degradation.
The vegetative apparatus of Arabidopsis is a ground rosette that develops a lignified flowering stem [29]. Besides the xylem vessels, which are lignified, the interfascicular parenchyma of the flowering stem differentiates into highly lignified fibers with growth. Under certain growth conditions, lignin in Arabidopsis can represent up to 18 % of the dry weight of the extractive-free mature stem and contain both S and G units [29]. Alteration of the lignin composition and content of the stem in some cases leads to significant changes in phenotype. For instance, fah1-2 blocks F5H, causing deposition of primarily G-lignin [13, 14]; an F5H-overexpressing transgenic plant deposits mostly S-lignin [15, 16]; ccr and ref3-2 lead to deficient lignin deposition [19–21]; and the triple mutant med5a med5b ref8 contains predominantly H-lignin [6]. These phenotypic alterations are obvious in the whole plant and at the level of electron microscopy, which reveals in some variants wholesale disruption of the cell wall structure and collapse of the vascular tissues. Finer scale disruptions have been largely unexplored. How does the disruption of normal lignin synthesis lead to the molecular changes that enforce these phenotypic changes? By using scanning X-ray microdiffraction of lignin variants, we sought to reveal the changes in cellulose architecture triggered by alterations in lignin composition.
Mutations in the lignin biosynthetic pathway affect deposition of cellulose
The largest fiber content in the Arabidopsis stem is in the vascular tissue. We observed that fah1-2, med5a med5b ref8, and the aldehyde variants (cadc cadd and cadc cadd fah1) exhibit maximum fiber content comparable to that of wild type. In C4H::F5H fah1-2 plants, with their abnormally S-rich lignin, the maximum fiber content is modestly reduced. Sun et al. [30] reported that the syringyl to guaiacyl (S/G) ratio causes changes in carbohydrate composition, including cellulose; this may underlie the modest variation in fiber content. However, a significant decrease of fiber content is observed in the ccr and ref3-2 mutants (Fig. 2), which also have decreased lignin content. Turner et al. [2] and Goujon et al. [19] reported that ccr mutants have a deficiency in cellulose deposition in the stem, consistent with our results. Similarly, the ref3-2 mutation lowers both lignin and cellulose deposition in the secondary cell wall [21], as reflected here. These observations indicate that lignin is essential to the deposition of cellulose fibrils within the plant cell wall. The subnormal deposition of cellulose fibrils in ccr and ref3-2 may be a factor in the observed dwarfism of these plants.
Alteration in lignin composition accelerates the degradation of order during storage
Although the lignin contents of cadc cadd, cadc cadd fah1, and med5a med5b ref8 mutants are reduced, the highest observed fiber content of these mutants is similar to wild type. Nevertheless, after 30 days in water, thin sections of these plants exhibit significantly lowered content of oriented cellulose fibrils, indicating that the lowered lignin content has increased their susceptibility to degradation processes. In contrast, plants with fah1-2 or C4H::F5H fah1-2 show no significant alteration in fibril content after 30 days in water. This indicates that lignin exhibiting high levels of G or S subunits may confer some protection of cellulose fibrils from the relevant degradative processes.
Altered lignin content changes the orientation of cellulose fibrils
Lichtenegger et al. [10, 11] and Riekel et al. [12] observed the variation of microfibril angle using X-ray microbeam techniques. The microfibril angle is thought to reflect the helical arrangement of cellulose microfibrils around the plant cell. How the helical geometry of the architecture of the plant cell determines the microfibril angle has been discussed by Emons and Mulder [23] and Lichtenegger et al. [10, 11]. The microfibril angle represents the tilt of cellulose fibrils relative to the long axis of the stem. The average tilt of cellulose fibrils in the vascular tissues is never more than ~30° in these variants. For comparison, the maximum tilt of cellulose microfibrils in roots of Arabidopsis was reported to be ~45° [31, 32].
Since the deposition of lignin within the plant cell wall decreases as one approaches the pith [2], Liu et al. [9] noted that the reduction of lignin content in the cell wall correlates with a decrease in fiber content, which in turn correlates inversely with microfibril angle in the Arabidopsis stem. We repeated those observations here, but noted that this pattern is not replicated in all variants. The correlation in wild type suggests that lignin may be essential for anchoring cellulose microfibrils within the polysaccharide matrix at appropriate tilts relative to the stem axis. Disruption of lignin synthesis may lower the resilience intrinsic to these structures.
Although orientation of cellulose is sufficient for measurement of fibril reflections in wild-type plants and those that deposit primarily S-lignin, in several of the mutants the degree of orientation was inadequate for estimation of fibrillar angle. This is especially true for mutants with severe reduction of lignin content. Interestingly, where measurable, the mean value of microfibril angle in these mutants is considerably larger than in the others. We conclude that lowered lignin content leads to a considerable disruption in the orientation of cellulose fibrils in a manner that mirrors the lower degree of orientation of cellulose in the pith.
The regularity of cellulose fibrils varies among lignin mutants
Coherence length is an important feature reflecting the integrity of cellulose fibrils within plant cell walls. The most obvious reduction in coherence length is observed in ref3-2, a plant with a severe lignin deficit [21]. It is possible that the lowered coherence length is due to curvature of the fibrils. These observations support the necessity of lignin for maintaining proper assembly of cellulose fibrils within cell walls. The loss of integrity of the cellulose fibrils may be an underlying contributor to the extreme dwarfism displayed by this plant. The implication is that lowered lignin content disrupts the natural order of cellulose fibrils within the cell wall. When this happens, it is possible that the cellulose is deposited in the cell wall in such a way that it is susceptible to cell wall enzymes normally involved in wall restructuring. If so, cellulose may be produced at normal levels but deposited in an aberrant fashion, with most of it digested and recycled, resulting in a dramatic stunting of plant growth.
The C4H::F5H fah1-2 and G-lignin-rich (fah1-2) plants exhibit coherence lengths very similar to wild type. The aldehyde mutants show lower coherence lengths than wild type but greater than ref3-2. Nevertheless, water storage caused significant fiber content reduction as well as much weaker (0 0 4) reflections. Although the coherence lengths of cadc cadd and cadc cadd fah1 do not change as much as that of ref3-2 in water-stored samples, the weakness of axial reflections from these mutants indicates that their cellulose fibrils are probably highly curved. This may alter the physical properties of the cell walls, lowering cell wall stiffness and leading to the limp floral stem. We noticed that med5a med5b ref8 is the only variant that exhibits decreases in both fiber content and coherence length. Therefore, water storage may lead to digestion of cellulose as well as increased cellulose fibril disorder.
Langan et al. [33] reported, on the basis of molecular dynamics simulations, that reduction of lignin induces larger elementary fibrils 60–70 Å in diameter. The equatorial scattering reported here is not consistent with the coalescence of fibrils and presumably reflects a distinct phenomenon: coalescence would result in sharper reflections, whereas we see broadening of the equatorial reflections in the samples with lignin deficits (Fig. 3).
Molecular architecture in the stem of lignin mutants
The oriented fiber content, microfibril angle, and axial coherence lengths are presented for 16 samples in Additional file 1. Those charts provide a comprehensive overview of the tissue-specific variations in these properties among the samples. Individual variations are inevitable in comparisons of this type, but replicate experiments indicated that the overall trends reported here are representative. No sample exhibited significantly higher oriented fiber content than wild type. Disruption of lignin biosynthesis resulted in either a decrease in the proportion of oriented cellulose fibrils or no apparent change. Plants with fah1-2 or C4H::F5H fah1-2 behaved quite similarly to wild type, with C4H::F5H fah1-2 having, perhaps, somewhat lower oriented cellulose fiber content. Interestingly, both displayed average axial coherence lengths comparable to or slightly higher than that of wild type. Ciesielski et al. [4] reported that wild type and fah1-2 exhibit only very subtle ultrastructural changes even after maleic acid treatment. This is consistent with our observation that the highest fiber content and axial coherence length of fah1-2 are slightly larger than those of wild type and the other samples. Figures 9 and 11 indicate that water storage has little impact on cellulose fibrils within the plant cell wall of fah1-2. However, the structural alteration of C4H::F5H fah1-2 due to water storage is not as great as that observed for maleic acid treatment; for C4H::F5H fah1-2, catalytic conditions, such as thermal or ionic catalysis, appear essential for improved degradation of the cell wall.
Comparison of maximum coherence lengths for each of the 16 samples. This is a compilation of the peak positions of the smoothed curves shown in Fig. 10a, b
cadc cadd, cadc cadd fah1, and med5a med5b ref8 plants also had fiber content similar to wild type, but displayed significant sensitivity to degradation during storage in water. They exhibit a broader distribution of axial coherence lengths than wild type, suggesting inhomogeneity in the spatial constraints on cellulose organization. ccr and ref3-2 had lower oriented fiber content than wild type but exhibited little change in oriented fiber content in reaction to storage. However, their axial coherence lengths became more homogeneous after storage, with that of ref3-2 increasing significantly. This suggests that the spatial constraints on cellulosic structures decreased in these plants during storage. Transmission electron microscopy (TEM) shows that secondary cell walls are much more disorganized in cadc cadd, cadc cadd fah1, and med5a med5b ref8 than in fah1-2, C4H::F5H fah1-2, and wild type [3, 6]. Their increased digestibility may be due to the observed swelling, allowing greater penetration of reagents into the plant cell wall for digestive reactions.
Tissue-specific variation in cellulose order
SXMD demonstrated that in wild type, the apparent cellulose fiber content varies across the stem, with the highest level near the middle of the vascular tissues. The microfibril angle is relatively constant through the vascular tissue but increases toward the pith to 35° or more before becoming unmeasurable due to complete disorientation and low abundance of cellulose within the pith. The observation of low microfibril angle in vascular tissue is consistent with cellulose orientation observed by confocal Raman microscopy and VCA [24, 25]. Combining SXMD with chemical component analyses, such as Raman spectroscopy (Sun et al. [30]) and NMR, could provide a deeper understanding of the impact of variations in chemical composition on molecular architecture within the plant cell wall. The axial coherence length is relatively constant across the vascular tissues of wild-type plants, but decreases significantly toward the pith (probably due to curvature of the cellulose fibrils, which may contribute to the complete lack of orientation of cellulose fibrils within the pith). After 30 days of water storage, the maximum fiber content of wild type decreases about 20 % and the microfibril angle becomes far less constant. Axial coherence length also becomes somewhat less constant but, on average, is not too different from what is observed in fresh tissue.
Plants with fah1-2 and C4H::F5H fah1-2 lignins do not appear significantly different from wild type in terms of the organization and orientation of cellulose in the stem. ccr and ref3-2 are the only variants that show significant decrease in cellulose fiber content compared to wild type. Both of them show a roughly 20 % drop. ref3-2 exhibits lower lignin content than wild type with almost complete collapse of vascular elements [21]. Given the collapse of vascular elements, it is perhaps surprising that the cellulose fiber content remains as high as it does. The coherence length of fibrils in ref3-2 is also significantly smaller than wild type, again indicative of significant disruption of the order typical of vasculature in wild type plants. The lower cellulose fiber content in ccr may be due to different interactions between cellulose and the ferulic acid-containing lignin that is abundant in ccr during deposition of cellulose. Unlike ref3-2, the coherence length of cellulose fibrils in ccr plants is only slightly lower than that in wild type, indicating somewhat different levels of restraints on the cellulose in these plants.
Storage in water
Storage in water for 30 days has a dramatic impact on the cellulose organization of some plants. Apparent cellulose fiber content decreases by ~20 % in wild-type plants. fah1-2 and C4H::F5H fah1-2 plants appear resilient to structural changes during storage in water, and little difference between fresh and stored samples was observed. However, in aldehyde-rich or med5a med5b ref8 plants, the decrease is closer to 80 %. This observation suggests that lignin has a significant role to play in protecting these tissues from degradation and that changes in the nature of the lignin can dramatically alter the speed with which these tissues degrade. Lignin content, per se, is not the overriding factor, as ref3-2, with significantly lowered lignin content, appears far more resistant to structural degradation during storage than the high aldehyde or med5a med5b ref8 plants. The coherence lengths of med5a med5b ref8 and ccr decrease by ~15 % and, counterintuitively, the coherence length observed in ref3-2 appears to increase on storage in water. One possible explanation is that storage leads to breakage of some of the cross-links between cellulose and other cell wall constituents, freeing cellulose fibers to take on a lower energy configuration in which the crystallinity of the remaining fibrils is increased (even though the overall fiber content has decreased).
Separation of oriented and disoriented scattering
Diffraction patterns displayed a wide diversity in the degree of orientation, with many patterns, such as those from the pith, displaying essentially no oriented scattering, and patterns from the xylem of some plants exhibiting significant orientation. Scattering from cellulose fibrillar structure contributes the majority of the oriented diffraction. Studies of cellulose by X-ray scattering often utilize the crystalline index (CI) as a quantitative metric of the degree of crystallinity of cellulose fibrils. Several measures similar to CI appear in the literature [34, 35]. Park et al. [35] indicate that CI can be determined by comparing the intensity maximum and minimum in the range of 0.1–0.3 Å−1 or through fitting of cellulose reflections with multiple Gaussian functions. Fernandes et al. [34] postulate an asymmetric function for CI calculations in order to explain their observations. Although these different empirical measures lead to somewhat different values and reflect somewhat different properties, most exhibit similar trends for similar tissues. In order to enable comparison of cellulosic organization in different samples, we introduce 'fiber content' as an alternative to CI that utilizes a larger fraction of the observed data and thereby represents a more accurate measure of the relative degree of cellulose crystallinity in different regions of each sample. We separate scattering into oriented and disoriented components as described in the "Methods" section. Briefly, as shown in Fig. 12, the separation was made by isolating the anisotropic part of the pattern (green on the right) from the isotropic part (red and white on the right). The scattering from air was accounted for by subtraction of scattering from the camera in the absence of a sample. The separation of oriented from disoriented scattering is carried out by fitting the intensities with a Gaussian plus a constant. The intensity as a function of scattering angle could then be calculated for both of these fractions, resulting in the intensity distributions in Fig. 13. The intensity observed for the oriented part of the pattern exhibits most of the features expected in scattering from cellulose, including the (1 1 0)/(1 −1 0), (2 0 0), and (0 0 4) reflections. Scattering in the disoriented part of the pattern exhibits an intensity distribution rather different from that observed in the oriented part, with broader peaks and a substantial increase in features not normally associated with scattering from cellulose. The impression is that the unoriented material is a combination of poorly ordered cellulose and a preponderance of noncellulosic materials.
The scattered intensity was separated into circularly symmetric (red and white in diagram to right) and oriented fractions (green in right) using intensity at the scattering angle with greatest intensity (the position of the (2 0 0) reflection marked as a circle in the scattering pattern to the left)
Example of separation of circularly symmetric and oriented intensities. Each pattern from the C4H::F5H fah1-2 sample was analyzed and separated into oriented (top, middle) and unoriented (bottom, middle) fractions. The intensities were then plotted as a function of spacing 1/d (roughly proportional to scattering angle) for the oriented (top right) and unoriented (bottom right) fractions. The oriented portion exhibited well-defined (1 1 0)/(1 −1 0), (2 0 0), and (0 0 4) reflections characteristic of scattering from cellulose Iβ. The unoriented fraction had an intensity distribution rather different from that expected for cellulose and probably represents a sum of scattering from all constituents of the tissue except that from the oriented cellulose fibrils
Wild-type and lignin biosynthetic mutants of Arabidopsis were grown for 6 weeks under long-day conditions (16 h light/8 h dark), and the bottoms of primary stems were cut into 100-µm-thick longitudinal sections. One sample set was immersed in water at room temperature for 3 days and subsequently dehydrated in air at room temperature for 24 h before data collection. A second set was immersed in water at room temperature for 30 days prior to being dehydrated.
Scanning X-ray microdiffraction
X-ray diffraction data were collected at beamline 23ID-B at the APS using the scanning microdiffraction capability developed for use in macromolecular crystallography [36]. Samples were aligned using a coaxial optical microscope with a hole down the optical axis to accommodate the X-ray beam, thereby allowing precision alignment of the beam to select positions on the sample. A 5-μm beam size was used and samples were stepped along a 5-μm grid with a wide-angle diffraction pattern collected at each grid point. A specimen-to-detector distance of 300 mm was used with an X-ray wavelength of 1.033 Å (X-ray energy of 12 keV). The exposure time was 2 s. Patterns were recorded with a MAR300 detector with 2048 × 2048 pixels in an area of 30.00 cm × 30.00 cm. Pixel size was 146 microns. Reciprocal spacing $1/d = 2\sin(\theta)/\lambda$, where θ is the Bragg angle and λ is the wavelength.
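As a sanity check on the stated wide-angle limit, the quoted geometry can be plugged into the reciprocal-spacing relation above; this short Python sketch (which assumes the direct beam strikes the detector center) reproduces the ~2 Å figure:

```python
import math

wavelength = 1.033   # Å, as quoted
distance = 300.0     # mm, specimen-to-detector
half_width = 150.0   # mm, half of the 30 cm detector face

two_theta = math.atan(half_width / distance)          # scattering angle at the edge
inv_d = 2.0 * math.sin(two_theta / 2.0) / wavelength  # 1/d = 2 sin(theta) / lambda

print(f"1/d at detector edge: {inv_d:.3f} Å^-1 (d = {1.0 / inv_d:.2f} Å)")
# -> about 0.44 Å^-1, i.e. d ≈ 2.2 Å, matching the ~2 Å wide-angle limit
```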
Background scattering was estimated from the first and last diffraction patterns of each row of the montage. These patterns were collected at positions outside of the sample and constitute diffraction from air and residual scatter from camera elements within the experimental environment. These backgrounds were subtracted from all diffraction patterns prior to all other data analysis.
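A minimal sketch of this background subtraction, assuming each row of the montage is available as a list of 2D numpy arrays with the first and last patterns recorded off the sample:

```python
import numpy as np

def subtract_row_background(row_patterns):
    """Subtract air/camera background estimated from the off-sample
    patterns at the two ends of one montage row."""
    background = 0.5 * (row_patterns[0] + row_patterns[-1])
    return [pattern - background for pattern in row_patterns[1:-1]]
```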
Estimation of microfibril angle
Scattering from fibers within real tissue may overlap at the micron scale due to structural arrangement and sample preparation. Lichtenegger et al. [10, 11] and Liu et al. [9] reported that the fiber orientation of cellulose can be determined by simple analyses of diffraction patterns. We therefore developed an algorithm to determine and separate the fiber orientation from the azimuthal intensity distribution of diffraction patterns.
Microfibril angles were estimated from the angular variation of intensities in the polar coordinate system (Fig. 12). Azimuthal positions of peaks were determined by fitting Gaussians to the intensity as a function of azimuthal angle at a radius corresponding to the (2 0 0) reflection of the cellulose Iβ structure.
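The following Python sketch illustrates one simplified reading of this procedure (the function names and the peak-isolation heuristic are mine): the azimuthal profile at the (2 0 0) radius is split around its midpoint, a Gaussian is fit to each half to locate the two split peaks, and the microfibril angle is taken as half of their angular separation:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(phi, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((phi - center) / sigma) ** 2) + offset

def microfibril_angle(phi, intensity):
    """Estimate microfibril angle (degrees) from the azimuthal intensity
    profile sampled at the radius of the (2 0 0) reflection.

    phi: numpy array of azimuthal angles (degrees) over one 180-degree repeat
    intensity: background-subtracted intensity at each angle
    """
    mid = len(phi) // 2
    centers = []
    for lo, hi in [(0, mid), (mid, len(phi))]:
        p, i = phi[lo:hi], intensity[lo:hi]
        guess = [i.max() - i.min(), p[np.argmax(i)], 5.0, i.min()]
        popt, _ = curve_fit(gaussian, p, i, p0=guess)
        centers.append(popt[1])
    # Half the angular separation of the two split peaks.
    return 0.5 * abs(centers[1] - centers[0])
```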
Separation of oriented material and disoriented material
Each diffraction pattern was separated into oriented and disoriented fractions (see Fig. 12). The unoriented fraction was determined from the isotropic distribution at each scattering angle (1/d-spacing). Air scattering was derived from patterns at the start and end of each row of the montage, which were chosen to be outside the spatial extent of the sample. At each 1/d-spacing, subtraction of disoriented material and air scattering resulted in a distribution exhibiting one or more peaks that could be approximated by Gaussian distributions.
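A sketch of this separation, assuming the pattern has already been resampled onto a polar (1/d, azimuth) grid with air scattering removed; at each 1/d bin the azimuthal profile is fit with a Gaussian plus a constant, the constant taken as the isotropic (disoriented) level and the Gaussian as the oriented contribution:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_plus_const(phi, amp, center, sigma, const):
    return amp * np.exp(-0.5 * ((phi - center) / sigma) ** 2) + const

def separate_fractions(polar):
    """polar: 2D array, rows = 1/d bins, columns = azimuth bins (degrees).
    Returns integrated oriented and disoriented intensity per 1/d bin."""
    n_radial, n_phi = polar.shape
    phi = np.linspace(0.0, 180.0, n_phi)
    oriented = np.zeros(n_radial)
    disoriented = np.zeros(n_radial)
    for r in range(n_radial):
        ring = polar[r]
        guess = [ring.max() - ring.min(), phi[np.argmax(ring)], 10.0, ring.min()]
        try:
            amp, _, sigma, const = curve_fit(gaussian_plus_const, phi, ring, p0=guess)[0]
            oriented[r] = abs(amp) * abs(sigma) * np.sqrt(2.0 * np.pi)  # Gaussian area
            disoriented[r] = const * 180.0                              # flat baseline area
        except RuntimeError:  # fit failed; treat the whole ring as isotropic
            disoriented[r] = ring.sum() * (180.0 / n_phi)
    return oriented, disoriented
```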
Calculation of fiber content
Fiber content was calculated as a measure of the oriented fibril content. As shown in Fig. 14, it is the ratio of the integrated oriented intensity to the total integrated intensity:

$$I_{\text{fiber content}} = \frac{\sum I_{\text{oriented}}}{\sum I_{\text{oriented}} + \sum I_{\text{disoriented}}}$$

where $I_{\text{fiber content}}$ is a measure of oriented fibril content, $I_{\text{oriented}}$ is the normalized integrated intensity from elementary fibrils, and $I_{\text{disoriented}}$ is the normalized integrated intensity from the amorphous component.
Calculation of oriented fiber content. Oriented and circularly symmetric intensities are separated as demonstrated in Fig. 12. The intensities of oriented material (left) and unoriented material (right) are then integrated over a range of spacings corresponding to the positions of the (1 −1 0) and (2 0 0) reflections from cellulose (indicated in orange in the figures and spanning 0.1 < 1/d < 0.3 Å−1). The proportion of oriented fibrillar material is then calculated as indicated in the text
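Given the per-1/d oriented and disoriented intensities from the separation step, the fiber content reduces to a few lines (a sketch, reusing the hypothetical separate_fractions output from above):

```python
import numpy as np

def fiber_content(inv_d, oriented, disoriented, lo=0.1, hi=0.3):
    """Fraction of oriented fibrillar material, integrated over the band
    of 1/d spanning the (1 -1 0) and (2 0 0) reflections (in 1/Å)."""
    band = (inv_d >= lo) & (inv_d <= hi)
    o = oriented[band].sum()
    return o / (o + disoriented[band].sum())
```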
Calculation of axial coherence length
A cellulose fibril is nearly crystalline, but because of its extreme length, very large surface area, curvature and tendency to twist, the crystallinity is imperfect. Because of local stretching, twisting, and curvature, the periodicity of a cellulose fibril varies along its length. This leads to a phase difference in the molecular repeating structure that increases progressively along the length of the fibril. The coherence length is the distance along the fibril beyond which there is no ordered phase relationship. Therefore, the coherence length may provide insight into the interactions of cellulose fibrils with other cell wall constituents, and a change in coherence length may reflect a disruption in those interactions, a perturbation in the ordered process by which cellulose fibrils are assembled into the cell wall or enhanced distortion of the cellulose fibrils caused by increased physical constraints due to interactions with other cell wall constituents.
Axial coherence length was estimated from the breadth of the (0 0 4) reflection using the Scherrer equation. Figure 15 shows the Gaussian fitting of the (0 0 4) reflection of cellulose fibrils. The coherence length can then be calculated as follows:
The calculation of axial coherence length from the (0 0 4) reflection. The left image is a trace of the oriented intensity including the (0 0 4) reflection of cellulose fibrils. The enlargement of the (0 0 4) reflection on the right shows the background-subtracted reflection (blue curve) and a Gaussian function (green curve) fit to the intensity distribution. The red line indicates the position of the maximum of the Gaussian curve
$$\text{Coherence length} = \frac{K\lambda}{\beta \cos(\theta)}.$$
The green curve shows a Gaussian function fit to the reflection. θ is determined from the peak position of the Gaussian, and β is the half-width of the Gaussian, expressed as an angular width. K is a constant, taken as 0.9, and λ is the wavelength of the incident X-rays.
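A sketch of the coherence-length calculation under these definitions (variable names are mine; β is taken as the full width at half maximum, the usual Scherrer convention, and the width conversion uses the derivative of $1/d = 2\sin(\theta)/\lambda$ to express the fitted Gaussian width as an angular width):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma, offset):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def coherence_length(inv_d, intensity, wavelength=1.033, K=0.9):
    """Axial coherence length (Å) from a trace spanning the (0 0 4) peak.

    inv_d: 1/d values (1/Å); intensity: oriented intensity at each value.
    """
    guess = [intensity.max() - intensity.min(),
             inv_d[np.argmax(intensity)], 0.005, intensity.min()]
    (amp, center, sigma, offset), _ = curve_fit(gaussian, inv_d, intensity, p0=guess)
    theta = np.arcsin(center * wavelength / 2.0)                 # Bragg angle
    width_invd = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)   # FWHM in 1/d units
    beta = width_invd * wavelength / np.cos(theta)               # FWHM in 2*theta (radians)
    return K * wavelength / (beta * np.cos(theta))
```

Note that after the conversion the expression collapses to K divided by the peak width in 1/d units, so the result is insensitive to the wavelength used.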
SXMD: scanning X-ray microdiffraction
fah1-2: a ferulic acid hydroxylase 1 mutant
C4H::F5H fah1-2: an F5H-overexpressing transgenic plant
med5a med5b ref8: a triple mutant containing predominantly H-lignin
cadc cadd: a plant harboring lesions in both cadc and cadd
cadc cadd fah1: a triple mutant harboring lesions in cadc, cadd, and fah1
ccr: a plant with down-regulation of cinnamoyl CoA reductase 1
ref3-2: a missense mutant in the gene encoding cinnamic acid 4-hydroxylase (C4H)
JL and LM collected and analyzed SXMD data. JK, JC, and CC constructed the Arabidopsis mutants, carried out studies of the properties of the mutants, and prepared the stem sections. NV and RF designed and constructed the beam line used for SXMD experiments, designed the small angle collimator, and aided in all aspects of data collection. JL wrote the manuscript; JK, CC, and LM edited the manuscript. All authors read and approved the final manuscript.
This work was supported as part of the Center for Direct Catalytic Conversion of Biomass to Biofuels (C3Bio), an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC0000997. Use of the Advanced Photon Source, an Office of Science User Facility operated for the U.S. DOE, was supported under Contract No. DE-AC02-06CH11357. Use of GM/CA at the APS was supported by the National Institutes of Health, the National Cancer Institute (Y1-CO-1020), and the National Institute of General Medical Sciences (Y1-GM-1104).
Department of Bioengineering, Northeastern University, 360 Huntington Ave, Boston, MA, 02148, USA
Jiliang Liu & Lee Makowski
Department of Biochemistry, Purdue University, 175 South University Street, West Lafayette, IN, 47907, USA
Jeong Im Kim, Joanne C. Cusumano & Clint Chapple
GM/CA CAT, XSD, Advanced Photon Source, Argonne National Laboratory, 9700 Cass Ave, Lemont, IL, 60439, USA
Nagarajan Venugopalan & Robert F. Fischetti
Department of Chemistry and Chemical Biology, Northeastern University, 360 Huntington Ave, Boston, MA, 02148, USA
Lee Makowski
Correspondence to Jiliang Liu.
Additional file
13068_2016_540_MOESM1_ESM.pdf
Additional file 1. Variations of oriented fiber, microfibril angle, and axial coherence length across the stem for each mutant and the wild type are shown in Figures S1–S8 of the supplementary material.
Liu, J., Kim, J.I., Cusumano, J.C. et al. The impact of alterations in lignin deposition on cellulose organization of the plant cell wall. Biotechnol Biofuels 9, 126 (2016). https://doi.org/10.1186/s13068-016-0540-z
Lignin biosynthetic mutants
Cellulose fibril
X-ray microdiffraction
Science By Jason
Astronomy for Beginners: The Stars
The stars in our universe vary greatly; they come in all different sizes and colors. Yet stars also have a lot of attributes in common. Instead of talking about how they are unique, today I will discuss how they are similar. Read on if you are interested in finding out more about these incredible objects.
Stars Around Us
Within our own galaxy there are untold numbers of stars, and if you could somehow leave our galaxy, the number of stars increases exponentially. So the first thing we want to do is figure out how far away all these stars are. Once we know the distances to the stars, their other properties can be inferred to a decent degree of accuracy.
Stellar Parallax
Stellar parallax is the degree to which our view of an object changes when it is observed from two different points, whether through a telescope or with our eyes. It is also described as the apparent displacement of a star as seen from Earth. This lets us calculate the distance between us and the object, using the angles between the points involved and some geometry. The easiest object to practice this on is our Moon. Parallax angles are measured in arcseconds. The distance at which an object has a parallax of 1 arcsecond is called a parsec, which is about 206,265 AU. An AU (astronomical unit) is the average distance from Earth to the Sun. Parallax decreases as distance increases, so we can use a simple formula to calculate distance from parallax.
$$ \text{distance (parsecs)} = \frac{1}{\text{parallax (arcseconds)}} $$
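For illustration, a minimal sketch of this formula in Python; the star's parallax is a hypothetical value:

```python
def distance_parsecs(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

# A hypothetical star with a parallax of 0.1 arcseconds:
d = distance_parsecs(0.1)
print(d)            # 10.0 parsecs
print(d * 206265)   # about 2.06 million AU
```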
Motion of the Stars
The stars move. This may not be apparent, especially if you have never thought about it before. They move very fast by our own standards; they are just so far away that the motion is difficult to notice. They shift across our sky only a very small amount in a human lifetime. Stars have their own motion as they soar through space, and they also rotate extremely quickly. The term for a star's apparent motion across the sky is proper motion.
Luminosity and Brightness
Every star has its own luminosity and an apparent brightness as seen from Earth. A star's apparent brightness depends partly on its distance from Earth: the amount of light that reaches us varies inversely as the square of the distance from the source. Therefore, doubling the distance to a star makes it appear 4 times fainter. This also means two different stars can appear to have the same brightness: if the brighter star is farther away than a slightly dimmer star, then to us they could appear exactly the same. Remember how we talked about inferring properties of stars once we know a few things about them? This is a good example. If we know two stars' distances through parallax calculations, and a particular star is farther away but appears to have the same brightness, then we know it is inherently brighter and puts off more radiation than the other star.
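To make the inverse-square relationship concrete, here is a minimal sketch; the two stars and their numbers are hypothetical:

```python
import math

def apparent_brightness(luminosity, distance):
    """Received flux: brightness falls off as 1 / distance**2."""
    return luminosity / (4 * math.pi * distance**2)

# Star B is twice as far away but four times as luminous as star A,
# so both appear equally bright from Earth.
print(apparent_brightness(1.0, 1.0))  # star A
print(apparent_brightness(4.0, 2.0))  # star B: same value
```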
Star Temperatures
Stars also have their own temperature ranges, and star temperatures can vary greatly. However, we can get an idea of a star's temperature from its color. This is not very intuitive, but red stars are a lot cooler than blue stars. Astronomers measure the temperature of a star by taking its brightness at several different frequencies. They then match these readings to a blackbody curve and see where they align. This gives the approximate temperature of the star.
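One standard way to read a temperature off a blackbody curve is Wien's displacement law, which relates the peak emission wavelength to temperature; here is a minimal sketch with a hypothetical star:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in meter-kelvins

def temperature_from_peak(peak_wavelength_m):
    """Blackbody temperature implied by the wavelength of peak emission."""
    return WIEN_B / peak_wavelength_m

# A star whose spectrum peaks near 500 nm (roughly Sun-like):
print(temperature_from_peak(500e-9))  # about 5800 K
```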
Spectra of the Stars
Spectroscopy is a very useful tool; I talked about it in a past article. With stars, it is used to collect data on individual stars so that we can study the differences between them. The patterns of absorption lines tell us how stars differ and what they are made of. Spectroscopy is a fascinating subject and can tell us a great deal just from those lines.
Stellar Size
Like their temperatures, stars come in all different sizes as well. However, nearly all known stars are much larger than any discovered planet. The problem, as always, is that they are too far away to measure directly in most cases, so indirect techniques have to be used. The most popular method uses the radiation laws. The radiation that comes from a star is governed by the Stefan-Boltzmann law, which says that the energy emitted per unit area per unit time increases as the fourth power of the star's temperature. We know that larger stars give off more total radiation than smaller stars of the same temperature. Since this is the case, knowing how much radiation a star gives off gives us a good idea of how large it is. See how all these pieces fit together? This lets us make an indirect measurement of its size.
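A minimal sketch of that indirect measurement, rearranging the Stefan-Boltzmann relation L = 4πR²σT⁴ to solve for the radius; the Sun-like inputs are just for illustration:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def stellar_radius(luminosity_w, temperature_k):
    """Radius in meters from luminosity (W) and surface temperature (K),
    via L = 4 * pi * R**2 * sigma * T**4."""
    return math.sqrt(luminosity_w / (4 * math.pi * SIGMA * temperature_k**4))

# Roughly Sun-like inputs: L ~ 3.8e26 W, T ~ 5772 K
print(stellar_radius(3.8e26, 5772))  # about 7e8 m, close to the Sun's radius
```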
H-R Diagrams
Hertzsprung-Russell (H-R) diagrams are used to classify stars once we know some of their attributes. This helps us visually compare many different stars at a time. Most people learn and interpret better using visual methods, so this is handy indeed. Typically these diagrams plot luminosity against temperature. Any star can be included in these diagrams if we have its data. What we have learned is that, along the main sequence, cooler stars are fainter and hotter stars are brighter.
Mass of Stars
A star's mass is one of its fundamental properties. Mass tells us about a star's internal core and its subsequent layers. A star's mass is determined from its gravitational pull on nearby objects. Determining the mass can be done in a few different ways; the best methods are studying the orbits of companion objects and the dips in the light curve when objects pass in front of the star. When looking at an H-R diagram, for instance, we see the masses of stars are grouped very logically: they start at red dwarf stars and end at giant blue stars. Masses range from much smaller than our Sun's to much larger.
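For the orbital method, a common shortcut is Kepler's third law in solar units; a minimal sketch with a hypothetical binary:

```python
def binary_total_mass(semimajor_axis_au, period_years):
    """Total mass of a binary system in solar masses via Kepler's third law:
    M1 + M2 = a**3 / P**2, with a in AU and P in years."""
    return semimajor_axis_au**3 / period_years**2

# A hypothetical binary: 20 AU separation, 50-year orbital period
print(binary_total_mass(20, 50))  # 3.2 solar masses
```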
There are several important properties of stars that aid in a greater understanding of our universe. Finding the distance between us and other stars is an excellent start. Stars have their own velocities even though it is difficult for us to see them from here on Earth. Luminosity and apparent brightness are also useful because, combined with temperature, those attributes give us an idea of stellar radii. Lastly, studying H-R diagrams is very useful in gleaning characteristics from stars.
This article was updated on December 4, 2019
Jason Moore 2019
Results for 'Michelle Ng Kwet Shing'
The Validity of the CGI Severity and Improvement Scales as Measures of Clinical Effectiveness Suitable for Routine Clinical Use. Michael Berk, Felicity Ng, Seetal Dodd, Tom Callaly, Shirley Campbell, Michelle Bernardo & Tom Trauer - 2008 - Journal of Evaluation in Clinical Practice 14 (6):979-983.
Ng and Singer on Utilitarianism: A Reply. Yew-Kwang Ng & Peter Singer - 1983 - Canadian Journal of Philosophy 13 (2):241-242.
Ng and Singer derive the principle of utility from the fact of finite sensibility and another principle, weak majority preference: "for a community of n individuals choosing between two possibilities, x and y, if no individual prefers y to x, and at least n/2 individuals prefer x to y, then x increases social welfare and is preferable." This derivation is regarded as incorrect in a comment. This reply explains why the derivation is valid and shows that the comment is based on confusing a general social ordering with a utilitarian one.
Investigating the Limits of Competitive Intelligence Gathering: Is Mystery Shopping Ethical? Ng Kwet Shing Michelle & J. Spence Laura - 2002 - Business Ethics 11 (4):343-353.
Investigating the Limits of Competitive Intelligence Gathering: Is Mystery Shopping Ethical? Michelle Ng Kwet Shing & Laura J. Spence - 2002 - Business Ethics: A European Review 11 (4):343-353.
Welfarism and Utilitarianism: A Rehabilitation. Yew-Kwang Ng - 1990 - Utilitas 2 (2):171-193.
Utilitarianism seems to be going out of fashion, amidst increasing concerns for issues of freedom, equality, and justice. At least, anti-utilitarian and non-utilitarian moral philosophers have been very active. This paper is a very modest attempt to defend utilitarianism in particular and welfarism in general. Section I provides an axiomatic defence of welfarism and utilitarianism. Section II discusses the divergences between individual preferences and individual welfares and argues in favour of welfare utilitarianism. Section III criticizes some non-utilitarian principles, including knowledge as intrinsically good, rights-based ethics, and Rawls's second principle. Section IV argues that most objections to welfarism are probably based on the confusion of non-ultimate considerations with basic values. This is discussed with reference to some recent philosophical writings which abound with such confusion. Section V argues that the acceptance of utilitarianism may be facilitated by the distinction between ideal morality and self-interest which also resolves the dilemma of average versus total utility maximization in optimal population theory.
Does Suffering Dominate Enjoyment in the Animal Kingdom? An Update to Welfare Biology. Zach Groff & Yew-Kwang Ng - 2019 - Biology and Philosophy 34 (4):40.
Ng (Biology and Philosophy 10:255–285, 1995. https://doi.org/10.1007/bf00852469) models the evolutionary dynamics underlying the existence of suffering and enjoyment and concludes that there is likely to be more suffering than enjoyment in nature. In this paper, we find an error in Ng's model that, when fixed, negates the original conclusion. Instead, the model offers only ambiguity as to whether suffering or enjoyment predominates in nature. We illustrate the dynamics around suffering and enjoyment with the most plausible parameters. In our illustration, we find surprising results: the rate of failure to reproduce can improve or worsen average welfare depending on other characteristics of a species. Our illustration suggests that for organisms with more intense conscious experiences, the balance of enjoyment and suffering may lean more toward suffering. We offer some suggestions for empirical study of wild animal welfare. We conclude by noting that recent writings on wild animal welfare should be revised based on this correction to have a somewhat less pessimistic view of nature.
Transculturalism in Tan Twan Eng's The Gift of Rain. Wen Lee Ng, Manimangai Mani & Wan Roselezam Wan Yahya - 2016 - International Letters of Social and Humanistic Sciences 74:1-15.
While the growing body of research on Tan Twan Eng's The Gift of Rain focuses on the protagonist Philip Hutton's traumatic condition, his Chinese identity, and his ambiguous identity, this study devotes particular attention to the complexity of interactions between the various cultures practised by Philip. This study aims to address this gap by applying the concept of transculturalism to analyse the processes of acquiring a foreign culture and incorporating the foreign culture into traditional cultures experienced by Philip. In other words, this study employs the concept of transculturalism to examine multicultural depictions in the novel. Scholars such as Khan, Tiwari, Sheoran, and Tan C. S., who have examined multicultural depictions in various literary texts, have found that multicultural circumstances cause certain ethnic groups to lose their cultures and identities. Hence, the multicultural circumstances depicted are perceived as negative phenomena. However, this study has found that by examining the interactions between various cultures, rather than focusing on the end products such as portrayals of hybridity, the positive sides of multicultural depictions could be revealed. The transculturation process experienced by Philip shows that the new cultural practices he has created are made up of both his traditional cultures and the foreign culture he has acquired. This means that Philip does not totally lose his traditional cultures and identities. Therefore, this study concludes that multicultural depictions in The Gift of Rain could be read positively, provided that the interactions between various cultures, which resulted in the incorporation of a foreign culture into traditional cultures, are examined.
Board Age and Gender Diversity: A Test of Competing Linear and Curvilinear Predictions. [REVIEW] Muhammad Ali, Yin Lu Ng & Carol T. Kulik - 2014 - Journal of Business Ethics 125 (3):1-16.
The inconsistent findings of past board diversity research demand a test of competing linear and curvilinear diversity–performance predictions. This research focuses on board age and gender diversity, and presents a positive linear prediction based on resource dependence theory, a negative linear prediction based on social identity theory, and an inverted U-shaped curvilinear prediction based on the integration of resource dependence theory with social identity theory. The predictions were tested using archival data on 288 large organizations listed on the Australian Securities Exchange, with a 1-year time lag between diversity (age and gender) and performance (employee productivity and return on assets). The results indicate a positive linear relationship between gender diversity and employee productivity, a negative linear relationship between age diversity and return on assets, and an inverted U-shaped curvilinear relationship between age diversity and return on assets. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
A Comparative Analysis of Ethical Beliefs: A Four Country Study. [REVIEW] Mee-Kau Nyaw & Ignace Ng - 1994 - Journal of Business Ethics 13 (7):543-555.
This study examines the extent to which business students from Canada, Japan, Hong Kong, and Taiwan react differently to ethical dilemmas involving employees, supervisors, customers, suppliers, and business rivals. The empirical results show that the national origin of the students does have an impact on their reactions to particular ethical dilemmas. In addition, the results indicate that controlling for the problem of social desirability response bias is important to ensure the validity of the empirical findings.
Environmental and Economic Dimensions of Sustainability and Price Effects on Consumer Responses. Sungchul Choi & Alex Ng - 2011 - Journal of Business Ethics 104 (2):269-282.
The lack of attention to sustainability, as a concept with multiple dimensions, has presented a developmental gap in green marketing, sustainability, and marketing literature for decades. Based on the established premise of customer–corporate (C–C) identification, in which consumers respond favorably to companies with corporate social responsibility initiatives that they identify with, we propose that consumers would respond similarly to companies with sustainability initiatives. We postulate that consumers care about protecting and preserving favorable economic environments (an economic dimension of sustainability) as much as they care about natural environments. Thus, we investigate how two sustainability dimensions (i.e., environmental and economic) and price can influence consumer responses. Using an experimental method, we demonstrate that consumers favor sustainability in both dimensions by giving positive evaluations of the company and purchase intent. In addition, consumers respond more negatively to poor company sustainability than to high company sustainability. In comparison, consumers respond more negatively to the company's poor commitment to caring for the environment than to the company's poor commitment to economic sustainability. We also find that consumers do not respond favorably to low prices when they have information about the firm's poor environmental sustainability. Finally, we find support for an interaction effect between consumer support for sustainability and corporate sustainability; that is, consumers evaluate a company more favorably if the company shares the consumers' social causes. Overall, we conclude, from our empirical study, support for the idea that consumers do respond to multiple dimensions of sustainability.
Temporal Dynamics of Emotional Processing in the Brain. C. E. Waugh, E. Z. Shing & B. M. Avery - 2015 - Emotion Review 7 (4):323-329.
What Should We Do About Future Generations? Yew-Kwang Ng - 1989 - Economics and Philosophy 5 (2):235.
Parfit's requirements for an ideal Theory X cannot be fully met since the Mere Addition Principle and Non-Antiegalitarianism imply the Repugnant Conclusion: Theory X does not exist. However, since the Repugnant Conclusion is really compelling, the Impersonal Total Principle should be adopted for impartial comparisons concerning future generations. Nevertheless, where our own interests are affected, we may yet choose to be partial, trading off our concern for future goodness with our self-interests. Theory X' meets all Parfit's requirements except the Mere Addition Principle in less compelling cases.
Ideology Critique From Hegel and Marx to Critical Theory. Karen Ng - 2015 - Constellations 22 (3):393-404.
In this paper, I explore and defend ideology critique as a method that is descended from the project of the critique of reason. Specifically, I interpret ideology critique as operating through what critical theory calls the dialectics of immanence and transcendence. Turning to Hegel and Marx, I further argue that the dialectics of immanence and transcendence must be more concretely understood as the dialectics of life and self-consciousness. Understanding the relation between life and self-consciousness is crucial for ideology critique because what ideologies distort is the relation between self-consciousness and life, a relation that is fundamental to the actualization of human freedom. I argue that ideologies are social pathologies, or wrong ways of living. I analyze two concepts that illuminate the method of ideology critique in particular: Hegel's "Idea," and Marx's Gattungswesen (species-being). These two concepts provide the normative basis for reconsidering ideology critique in light of a non-reductive critical naturalism.
On Δ⁰₂-Categoricity of Equivalence Relations. Rod Downey, Alexander G. Melnikov & Keng Meng Ng - 2015 - Annals of Pure and Applied Logic 166 (9):851-880.
CEO Leadership Styles and the Implementation of Organizational Diversity Practices: Moderating Effects of Social Values and Age. [REVIEW] Eddy S. Ng & Greg J. Sears - 2012 - Journal of Business Ethics 105 (1):41-52.
Drawing on strategic choice theory, we investigate the influence of CEO leadership styles and personal attributes on the implementation of organizational diversity management practices. Specifically, we examined CEO transformational and transactional leadership in relation to organizational diversity practices and whether CEO social values and age may moderate these relationships. Our results suggest that transformational leadership is most strongly associated with the implementation of diversity practices. Transactional leadership is also related to the implementation of diversity management practices when either CEO social values or age are relatively high. These findings extend previous work examining predictors of diversity management in organizations and highlight the central role that organizational leaders may play in the successful implementation of these practices.
The Neurological Disease Ontology. Mark Jensen, Alexander P. Cox, Naveed Chaudhry, Marcus Ng, Donat Sule, William Duncan, Patrick Ray, Bianca Weinstock-Guttman, Barry Smith, Alan Ruttenberg, Kinga Szigeti & Alexander D. Diehl - 2013 - Journal of Biomedical Semantics 4 (42).
We are developing the Neurological Disease Ontology (ND) to provide a framework to enable representation of aspects of neurological diseases that are relevant to their treatment and study. ND is a representational tool that addresses the need for unambiguous annotation, storage, and retrieval of data associated with the treatment and study of neurological diseases. ND is being developed in compliance with the Open Biomedical Ontology Foundry principles and builds upon the paradigm established by the Ontology for General Medical Science (OGMS) for the representation of entities in the domain of disease and medical practice. Initial applications of ND will include the annotation and analysis of large data sets and patient records for Alzheimer's disease, multiple sclerosis, and stroke.
Towards Welfare Biology: Evolutionary Economics of Animal Consciousness and Suffering. [REVIEW] Yew-Kwang Ng - 1995 - Biology and Philosophy 10 (3):255-285.
Welfare biology is the study of living things and their environment with respect to their welfare. Despite difficulties of ascertaining and measuring welfare and relevancy to normative issues, welfare biology is a positive science. Evolutionary economics and population dynamics are used to help answer basic questions in welfare biology: Which species are affective sentients capable of welfare? Do they enjoy positive or negative welfare? Can their welfare be dramatically increased? Under plausible axioms, all conscious species are plastic and all plastic species are conscious. More complex niches favour the evolution of more rational species. Evolutionary economics also supports the common-sense view that individual sentients failing to survive to mate suffer negative welfare. A kind of God-made fairness between species is also unexpectedly found. The contrast between growth maximization, average welfare, and total welfare maximization is discussed. It is shown that welfare could be increased without even sacrificing numbers. Since the long-term reduction in animal suffering depends on scientific advances, strict restrictions on animal experimentation may be counter-productive to animal welfare.
Predictor of Business Students' Attitudes Toward Sustainable Business Practices. Eddy S. Ng & Ronald J. Burke - 2010 - Journal of Business Ethics 95 (4):603-615.
This study examined individual difference characteristics as predictors of business students' attitudes toward sustainable business practices. Three types of predictors were considered: personal values, individualism-collectivism, and leadership styles. Data were collected from 248 business students attending a mid-sized university in the western United States using self-reported questionnaires. Few gender differences were present. Hierarchical regression analyses, controlling for personal demographic characteristics, indicated that business students scoring higher on Rokeach's social value scale, collectivism, and transformational leadership also reported more positive attitudes toward sustainable business practices. Implications for research and practice are discussed.
Design and Validation of a Novel New Instrument for Measuring the Effect of Moral Intensity on Accountants' Propensity to Manage Earnings. Jeanette Ng, Gregory P. White, Alina Lee & Andreas Moneta - 2009 - Journal of Business Ethics 84 (3):367-387.
The goal of this study was to construct a valid new instrument to measure the effect of moral intensity on managers' propensity to manage earnings. More specifically, this study is a pilot study of the impact of moral intensity on financial accountants' propensity to manage earnings. The instrument, once validated, will be used in a full study of managers in the hotel industry. Different ethical scenarios were presented to respondents in the survey; each ethical scenario was designed in both high or low moral intensity form, to reflect the importance of the moral dilemma at hand. The results were analysed by factor analysis. The findings of this study have positively validated the instrument, with three of the five moral intensity components identified as having appropriate eigenvalues. This indicates that they have a significant influence in the study. The first factor captures the social consensus dimension and one scenario of the proximity dimension. The second factor indicates an interaction between the temporal immediacy and the magnitude of consequences dimension. The third dimension is probability of effect and one scenario of the proximity dimension. In addition, t-tests indicated that the manipulation of high and low conditions within each scenario were also successful. One limitation of the study might be the use of undergraduate accounting students as manager proxies, although prior evidence suggests use of accounting students as proxies is a valid approach in this type of study. This is a highly novel project as most prior studies have focussed on moral intensity and the general ethical decision-making process.
Each Thing a Thief: Walter Benjamin on the Agency of Objects. Julia Ng - 2011 - Philosophy and Rhetoric 44 (4):382-402.
"I have a tree, which grows here in my close, / That mine own use invites me to cut down, / And shortly I must fell it" (Shakespeare 2001, 168)—Timon's lament, which in Shakespeare's rendition occurs shortly before its utterer's demise "upon the beached verge of the salt flood" (2001, 168) beyond the perimeter of Athens, is an indictment of the nature that Timon finds unable to escape. Having given away his wealth in misguided generosity to a host of parasitic sycophants, Timon turns misanthropic when his "friends" reject his requests for help in kind to repay his debts, eventually exiling himself from the city with the intent of sustaining himself on nothing but water and roots. Yet he soon finds that removing…
The Emergence of Relationship-Based Retailing–a Perspective From the Fashion Sector. Luciano Batista & Irene Ng - 2012 - Emergence: Complexity and Organization 10:11.
Hegel's Logic of Actuality. Karen Ng - 2009 - Review of Metaphysics 63 (1):139-172.
Against the standard interpretation that Hegel's idealism, in particular speculative logic, should be understood as an extension of Kant's transcendental idealism, I argue that Hegel's Logic should be understood as a logic of actuality (Wirklichkeit). Rather than seeking to determine the necessary and merely formal conditions and categories for the knowledge of any possible object, speculative logic is the immanent and active process of determining the truth of actual objects and actuality itself. Through a discussion of the status of the transition between the Phenomenology and the Logic, as well as a detailed reading of Hegel's treatment of the modal categories in the Doctrine of Essence, I seek to show how speculative logic offers a way to think the unity of a thing and its conditions without reverting to pre-critical metaphysics. By breaking down the traditional distinctions between actuality, possibility, necessity, and contingency, as well as demonstrating the necessity of contingency in the activity of thinking, I suggest that Hegel provides us with the categories necessary for a new understanding of the relation between thought and reality beyond the Kantian frame.
On Strongly Jump Traceable Reals. Keng Meng Ng - 2008 - Annals of Pure and Applied Logic 154 (1):51-69.
In this paper we show that there is no minimal bound for jump traceability. In particular, there is no single order function such that strong jump traceability is equivalent to jump traceability for that order. The uniformity of the proof method allows us to adapt the technique to showing that the index set of the c.e. strongly jump traceables is image-complete.
A Tension in Ch'ing Thought: "Historicism" in Seventeenth- and Eighteenth-Century Chinese Thought. On-cho Ng - 1993 - Journal of the History of Ideas 54 (4):561-583.
Toward a Theory of Emotive Performance: With Lessons From How Politicians Do Anger. Kwai Hang Ng & Jeffrey L. Kidder - 2010 - Sociological Theory 28 (2):193-214.
This article treats the public display of emotion as social performance. The concept of "emotive performance" is developed to highlight the overlooked quality of performativity in the social use of emotion. We argue that emotive performance is reflexive, cultural, and communicative. As an active social act, emotive performance draws from the cultural repertoire of interpretative frameworks and dominant narratives. We illustrate the utility of the concept by analyzing two episodes of unrehearsed emotive performances by two well-known politicians, Bill Clinton and Jiang Zemin. The two cases demonstrate how emotion can be analyzed as a domain in which culturally specific narratives and rhetorics are used to advance the situational agenda of actors. The concept opens up a more expansive research agenda for sociology. It pushes sociologists to pay greater attention to people's experiences, interpretations, and deployments of emotions in social life.
Limits on Jump Inversion for Strong Reducibilities. Barbara F. Csima, Rod Downey & Keng Meng Ng - 2011 - Journal of Symbolic Logic 76 (4):1287-1296.
We show that Sacks' and Shoenfield's analogs of jump inversion fail for both tt- and wtt-reducibilities in a strong way. In particular we show that there is a Δ⁰₂ set B >tt ∅′ such that there is no c.e. set A with A′ ≡wtt B. We also show that there is a Σ⁰₂ set C >tt ∅′ such that there is no Δ⁰₂ set D with D′ ≡wtt C.
A Cognitive Architecture for Knowledge Exploitation. Gee Wah Ng, Yuan Sin Tan, Loo Nin Teow, Khin Hua Ng, Kheng Hwee Tan & Rui Zhong Chan - 2011 - International Journal of Machine Consciousness 3 (02):237-253.
Nursing Management of Medication Errors. Leung Andrew Luk, Wai I. Milly Ng, Kam Ki Stanley Ko & Vai Ha Ung - 2008 - Nursing Ethics 15 (1):28-39.
Medication error is the most common and consistent type of error occurring in hospitals. This article attempts to explore the ethical issues relating to the nursing management of medication errors in clinical areas in Macau, China. A qualitative approach was adopted. Seven registered nurses who were involved in medication errors were recruited for in-depth interviews. The interviews were transcribed and analyzed using content analysis. Regarding the management of patients, the nurses acknowledged the mistakes but did not disclose the incidents to patients and relatives. Concerning management of the nurses involved by senior staff, most participants experienced fairness, comfort and understanding during the process of reporting and investigation. The ethical issues relating to the incidents were discussed, particularly in the Chinese context. There is a need for further study relating to the disclosure of medication incidents to patients, and some suggestions were made.
Children's Task Performance Under Stress and Non-Stress Conditions: A Test of the Processing Efficiency Theory. EeLynn Ng & Kerry Lee - 2010 - Cognition and Emotion 24 (7):1229-1238.
Disrupting the School-to-Prison Pipeline. Sofía Bahena, North Cooc, Rachel Currie-Rubin, Paul Kuttner & Monica Ng (eds.) - 2012 - Harvard Educational Review.
A trenchant and wide-ranging look at this alarming national trend, _Disrupting the School-to-Prison Pipeline_ is unsparing in its account of the problem while pointing in the direction of meaningful and much-needed reforms. The "school-to-prison pipeline" has received much attention in the education world over the past few years. A fast-growing and disturbing development, it describes a range of circumstances whereby "children are funneled out of public schools and into the juvenile and criminal justice systems." Scholars, educators, parents, students, and organizers across the country have pointed to this shocking trend, insisting that it be identified and understood—and that it be addressed as an urgent matter by the larger community. This new volume from the _Harvard Educational Review_ features essays from scholars, educators, students, and community activists who are working to disrupt, reverse, and redirect the pipeline. Alongside these authors are contributions from the people most affected: youth and adults who have been incarcerated, or whose lives have been shaped by the school-to-prison pipeline. Through stories, essays, and poems, these individuals add to the book's comprehensive portrait of how our education and justice systems function—and how they fail to serve the interests of many young people.
Effective Packing Dimension and Traceability. Rod Downey & Keng Meng Ng - 2010 - Notre Dame Journal of Formal Logic 51 (2):279-290.
We study the Turing degrees which contain a real of effective packing dimension one. Downey and Greenberg showed that a c.e. degree has effective packing dimension one if and only if it is not c.e. traceable. In this paper, we show that this characterization fails in general. We construct a real $A\leq_T\emptyset''$ which is hyperimmune-free and not c.e. traceable such that every real $\alpha\leq_T A$ has effective packing dimension 0. We construct a real $B\leq_T\emptyset'$ which is not c.e. traceable such that every real $\alpha\leq_T B$ has effective packing dimension 0.
Antecedents of Green Brand Equity: An Integrated Approach. Pui Fong Ng, Muhammad Mohsin Butt, Kok Wei Khong & Fon Sim Ong - 2014 - Journal of Business Ethics 121 (2):203-215.
Autism: Common, Heritable, but Not Harmful. Ann Gernsbacher Morton, Dawson Michelle & Mottron Laurent - 2006 - Behavioral and Brain Sciences 29 (4):413-414.
We assert that one of the examples used by Keller & Miller (K&M), namely, autism, is indeed common, and heritable, but we question whether it is harmful. We provide a brief review of cognitive science literature in which autistics perform superiorly to non-autistics in perceptual, reasoning, and comprehension tasks; however, these superiorities are often occluded and are instead described as dysfunctions. (Published Online November 9 2006).
Lowness for Effective Hausdorff Dimension. Steffen Lempp, Joseph S. Miller, Keng Meng Ng, Daniel D. Turetsky & Rebecca Weber - 2014 - Journal of Mathematical Logic 14 (2):1450011.
We examine the sequences A that are low for dimension, i.e. those for which the effective dimension relative to A is the same as the unrelativized effective dimension. Lowness for dimension is a weakening of lowness for randomness, a central notion in effective randomness. By considering analogues of characterizations of lowness for randomness, we show that lowness for dimension can be characterized in several ways. It is equivalent to lowishness for randomness, namely, that every Martin-Löf random sequence has effective dimension 1 relative to A, and lowishness for K, namely, that the limit of K^A/K is 1. We show that there is a perfect Π⁰₁-class of low for dimension sequences. Since there are only countably many low for random sequences, many more sequences are low for dimension. Finally, we prove that every low for dimension is jump-traceable in order n^ε, for any ε > 0.
Self Interest Among CPAs May Influence Their Moral Reasoning. Paul W. Allen & Chee K. Ng - 2001 - Journal of Business Ethics 33 (1):29-35.
In 1990, the Federal Trade Commission (FTC) issued a consent order to the American Institute of Certified Public Accountants (AICPA). The order decreed the AICPA to lessen its longstanding ethics code, which had until then banned the receipt of commissions, referral fees and contingent fees. The FTC alleged that the AICPA banned receipt of the fees as an attempt to restrain trade (FTC, 1990). In the present study, we sought to determine if CPAs' preference for bans on commissions, referral fees and contingent fees is related to their moral reasoning whereby CPAs perceive the bans to serve as a means of resolving ethical issues. While determining this matter cannot prove whether the bans did or did not actually result in restrained trade, it can offer insight into the perceived ethical importance to CPAs of the overturned rules. Based on a random sample of AICPA members and using Rest's Defining Issues Test (DIT) to measure moral reasoning, we did not find a CPA's moral reasoning to be related to his/her preference for ethics rules which ban commissions, referral fees or contingent fees. However, our results did indicate that most CPAs prefer banning commissions, referral fees and contingent fees, with those CPAs holding a higher financial stake in public accounting, namely partners, favoring banning referral fees and contingent fees significantly less than CPAs with a lesser stake. Further, we noted a significant negative relationship between financial stake and moral reasoning. These results seem to suggest that self-interest among CPAs may influence their moral reasoning. Further study is needed to examine the relationship between self-interest of CPAs and their moral reasoning. If self-interest clouds moral judgments made by CPAs, capital markets are in danger. Rendering an independent audit opinion must exclude self-interest.
On Very High Degrees. Keng Meng Ng - 2008 - Journal of Symbolic Logic 73 (1):309-342.
In this paper we show that there is a pair of superhigh r.e. degrees that forms a minimal pair. An analysis of the proof shows that a critical ingredient is the growth rates of certain order functions. This leads us to investigate certain high r.e. degrees, which resemble ∅′ very closely in terms of ∅′-jump traceability. In particular, we will construct an ultrahigh degree which is cappable.
Philosophy of the Yi: Unity and Dialectics. Zhongying Cheng & On Cho Ng (eds.) - 2010 - Wiley-Blackwell.
This volume, an assemblage of essays previously published in the Journal of Chinese Philosophy, conveniently and strategically brings together some of the trenchant interpretations and analyses of the salient, structural aspects of the philosophy of the Yijing. They reveal how the ancient Classic offers a graphically vivid and conceptually dynamic dramaturgy of the ways in which the natural world works in conjunction with the human one. Its cosmological architectonics and philosophical worldview continue to have enormous purchase on our current imagination, even though readerly imperatives and responses have rendered this classic into a text of multiple significances, catering to pluralistic readerships and clienteles. Nonetheless, the essays in this volume lay bare some of the original authorly visions and insights of the Yijing, clearly showing that their apparent truthfulness to our cosmic and human conditions inspires philosophical and even theological questions. The Yijing's authorial designs of the eight trigrams and hexagrams, which encapsulate the primordial state of homo-cosmic phenomena and situations, together with the yin-yang forces and the dao, are taken for granted as integers in a grand universal equation, factored out to represent a ceaselessly changing cosmos in which heaven, earth and humanity commingle, such that the whole and unity can be found in the individual and the opposite, and vice versa.
The Normalization of Deviant Organizational Practices: The Non-Performing Loans Problem in China. [REVIEW] Jiatao Li & Carmen K. Ng - 2013 - Journal of Business Ethics 114 (4):643-653.
Research on deviant organizational practices has demonstrated that normative and cognitive institutional forces contribute to making deviance acceptable. Data from a survey of 3,751 Chinese firms were applied to test the idea that a clearly articulated alternative identity is necessary if a firm is to resist the normalization of deviance. Widespread acceptance of delinquency in repaying loans was shown to make it more likely that a firm adopts that practice, but this normalization process is less likely for firms with a stronger anti-deviance identity.
Making Power Visible in Global Health Governance. Carles Muntaner, Edwin Ng & Haejoo Chung - 2012 - American Journal of Bioethics 12 (7):63-64.
The American Journal of Bioethics, Volume 12, Issue 7, Page 63-64, July 2012.
Utility of Gambling When Events Are Valued: An Application of Inset Entropy. [REVIEW] C. T. Ng, R. Duncan Luce & A. A. J. Marley - 2009 - Theory and Decision 67 (1):23-63.
The present theory leads to a set of subjective weights such that the utility of an uncertain alternative (gamble) is partitioned into three terms involving those weights—a conventional subjectively weighted utility function over pure consequences, a subjectively weighted value function over events, and a subjectively weighted function of the subjective weights. Under several assumptions, this becomes one of several standard utility representations, plus a weighted value function over events, plus an entropy term of the weights. In the finitely additive case, the latter is the Shannon entropy; in all other cases it is entropy of degree not 1. The primary mathematical tool is the theory of inset entropy.
Neuroscience and the Teaching of Mathematics. Kerry Lee & Swee Fong Ng - 2011 - Educational Philosophy and Theory 43 (1):81-86.
Much of the neuroimaging research has focused on how mathematical operations are performed. Although this body of research has provided insight for the refinement of pedagogy, there are very few neuroimaging studies on how mathematical operations should be taught. In this article, we describe the teaching of algebra in Singapore schools and the imperatives that led us to develop two neuroimaging studies that examined questions of curricular concern. One of the challenges was to condense issues from classrooms into tasks suitable for neuroimaging studies. Another challenge, not particular to the neuroimaging method, was to draw suitable inferences from the findings and translate them into pedagogical practices. We describe our efforts and outline some continuing challenges.
Jump Inversions Inside Effectively Closed Sets and Applications to Randomness. George Barmpalias, Rod Downey & Keng Meng Ng - 2011 - Journal of Symbolic Logic 76 (2):491-518.
We study inversions of the jump operator on Π⁰₁ classes, combined with certain basis theorems. These jump inversions have implications for the study of the jump operator on the random degrees—for various notions of randomness. For example, we characterize the jumps of the weakly 2-random sets which are not 2-random, and the jumps of the weakly 1-random relative to 0′ sets which are not 2-random. Both of the classes coincide with the degrees above 0′ which are not 0′-dominated. A further application is the complete solution of [24, Problem 3.6.9]: one direction of van Lambalgen's theorem holds for weak 2-randomness, while the other fails. Finally we discuss various techniques for coding information into incomplete randoms. Using these techniques we give a negative answer to [24, Problem 8.2.14]: not all weakly 2-random sets are array computable. In fact, given any oracle X, there is a weakly 2-random which is not array computable relative to X. This contrasts with the fact that all 2-random sets are array computable.
The Case for and Difficulties in Using "Demand Areas" to Measure Changes in Well-Being. Yew-Kwang Ng - 1990 - Behavioral and Brain Sciences 13 (1):30-31.
Chinese Philosophy, Hermeneutics, and Onto-Hermeneutics. On-Cho Ng - 2003 - Journal of Chinese Philosophy 30 (3-4):373-385.
Toward a Hermeneutic Turn in Chinese Philosophy: Western Theory, Confucian Tradition, and CHENG Chung-Ying's Onto-Hermeneutics. On-cho Ng - 2007 - Dao: A Journal of Comparative Philosophy 6 (4):383-395.
This essay examines the sources on which Cheng Chung-ying's project of onto-hermeneutics draws in order to shed light on the relations between ontology and epistemology in the hermeneutic act. In the process, not only will we be thinking with Cheng and some Western hermeneutic theorists, but we will also be thinking through history by examining the Confucian act of reading. To the extent that any hermeneutic exercise, in accordance with Cheng's construal, cannot merely be a disembodied act of theoretical knowing but is also moral effort that entails personal cultivation—or, in Heidegger's and Gadamer's terms, Bildung—its espousal and its practice necessarily embody a larger conception of culture. In fact, precisely in terms of the intimate engagement with culture, Confucian insights, filtered through Cheng's onto-hermeneutic lenses, may have much to offer contemporary hermeneutics.
Interpersonal Level Comparability Implies Comparability of Utility Differences: A Reply. Yew-Kwang Ng - 1989 - Theory and Decision 26 (1):91-93.
The "I Ching" in the Shinto Thought of Tokugawa Japan. Wai-Ming Ng - 1998 - Philosophy East and West 48 (4):568-591.
The "I Ching" had an important influence on Tokugawa Shinto. First, it played a crucial role in the discussion of Confucian-Shinto relations; many Tokugawa Confucians and Shintoists used it to uphold the doctrine of the unity of Confucianism and Shinto, and Shintoists and scholars of National Learning (kokugaku) used it for its metaphysical and divinational value. Second, scholars of National Learning transformed it from a Confucian classic into a Shinto text, claiming that it was the handiwork of a Japanese deity.
Infinite Utility and Van Liedekerke's Impossibility: A Solution. Yew-Kwang Ng - 1995 - Australasian Journal of Philosophy 73 (3):408-412.
Death Substrates Come Alive. Alan G. Porter, Patrick Ng & Reiner U. Jänicke - 1997 - Bioessays 19 (6):501-507.
Measuring the Foaminess of Space-Time with Gravity-Wave Interferometers. Y. Jack Ng & H. Van Dam - 2000 - Foundations of Physics 30 (5):795-805.
By analyzing a gedanken experiment designed to measure the distance l between two spatially separated points, we find that this distance cannot be measured with uncertainty less than (l l_P^2)^{1/3}, considerably larger than the Planck scale l_P (or the string scale in string theories), the conventional-wisdom uncertainty in distance measurements. This limitation to space-time measurements is interpreted as resulting from quantum fluctuations of space-time itself. Thus, at very short distance scales, space-time is "foamy." This intrinsic foaminess of space-time provides another source of noise in the interferometers. The LIGO/VIRGO and LISA generations of gravity-wave interferometers, through future refinements, are expected to reach displacement noise levels low enough to test our proposed degree of foaminess in the structure of space-time.
Accenture's Placement Paper 1 questions are somewhat different from those of other companies. It is important to understand each question before proceeding to the answer. These are basically Level 1 and Level 2 questions. If you want to do well on these questions, you will need to practice in advance.
The syllabus is given below.
1. Problems on Numbers.
2. HCF and LCM.
3. Decimal Fractions.
4. Squares and Square roots.
5. Averages.
7. Percentages.
8. Profit and Loss.
9. Chain Rule.
10. Time and Work.
These questions are not too difficult as they belong to Placement Paper 1. Candidates who practice these questions well in advance will certainly have an upper hand in the exam. Once you are done with these questions you can move on to Accenture Placement Paper 2.
We also advise you to solve Accenture's verbal ability and logical reasoning questions.
The least five-digit number that is exactly divisible by 88 is?
Level 1 Numbers Divisibility and Remainders
The smallest five-digit number is 10000
Dividing 10000 by 88 leaves a remainder of 56, so the least five-digit number divisible by 88 = 10000+(88-56)
=10000+32=10032
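A quick sketch in Python to verify the same steps:

```python
def least_n_digit_multiple(n_digits, k):
    """Smallest n_digits-digit number divisible by k."""
    lo = 10 ** (n_digits - 1)
    r = lo % k
    return lo if r == 0 else lo + (k - r)

print(least_n_digit_multiple(5, 88))  # 10032
```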
The G.C.D. of 1.08,0.36 and 0.9 is
The given numbers are 1.08, 0.36 and 0.90. The H.C.F. of 108, 36 and 90 is 18,
so the H.C.F. of the given numbers = 0.18
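The same idea in code: scale the decimals to whole numbers, take the G.C.D., and scale back:

```python
from math import gcd
from functools import reduce

# 1.08, 0.36 and 0.90, each scaled by 100 to an integer:
scaled = [108, 36, 90]
print(reduce(gcd, scaled) / 100)  # 0.18
```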
Which of the following is equal to 3.14*\(10^{6}\)
Level 1 Decimal Fractions Find unknown value Decimals Basic
3.14*\(10^{6}\) = 3.14*1000000 = 3140000
Which of the following is closest to \(\sqrt{3}\)?
Level 1 Squares and Square Roots Basic Questions
\(\frac{173}{100}\)
\(\sqrt{3}\) = 1.732
The average height of 35 girls in a class was calculated as 160 cm. It was later found that the height of one of the girls in the class was wrongly written as 144 cm, whereas her actual height was 104 cm. What is the actual average height of the girls in the class?
Level 1 Average Wrong data to correct average
Correct sum = (160*35+104-144)cm = 5560 cm
Actual average height = \(\frac{5560}{35}\)cm = 158.857 cm = 158.86
The sum of a rational number and its reciprocal is \(\frac{13}{6}\). Find the number.
Level 1 Problems on Numbers Consecutive and reciprocal numbers
2/3 or 3/2
Let the number be x.
Then, \(x+\frac{1}{x}=\frac{13}{6}\Leftrightarrow \frac{x^{2}+1}{x}=\frac{13}{6}\Leftrightarrow 6x^{2}-13x+6=0\)
\(\Leftrightarrow 6x^{2}-9x-4x+6=0\Leftrightarrow (3x-2)(2x-3)=0\)
\(\Leftrightarrow x=\frac{2}{3}\, or\, x=\frac{3}{2}\)
\(2^{3.6}\)*\(4^{3.6}\)*\(4^{3.6}\)*\(32^{2.3}\)=\(32^{?}\)
\(2^{3.6}\)*\(4^{3.6}\)*\(4^{3.6}\)*\(32^{2.3}\)=\(32^{x}\)
\(2^{3.6}\)*\((2^{2})^{3.6}\)*\((2^{2})^{3.6}\)*\((2^{5})^{2.3}\)=\((2^{5})^{x}\)
\(2^{3.6}\)*\(2^{7.2}\)*\(2^{7.2}\)*\(2^{11.5}\)=\(2^{5x}\) ⇔ \(2^{18}\)*\(2^{11.5}\)=\(2^{5x}\)
\(2^{29.5}\)=\(2^{5x}\) ⇒ 5x = 29.5
x=5.9
Nagaraj could save 10% of his income. But 2 years later, when his income increased by 20%, he could save the same amount only as before. By how much percentage has his expenditure increased?
Level 2 Percentage Expenditure
22\(\frac{2}{9}\)%
Let, his income be Rs.100
Saving=Rs.10
Expenditure=Rs.90
Increased income=Rs.120
Increased expenditure=Rs.110
Increase in expenditure = \(\frac{110-90}{90}\)*100 = \(\frac{200}{9}\) = 22\(\frac{2}{9}\)%
A trader marked his goods at 20% above the cost price. He sold half the stock at the marked price, one quarter at a discount of 20% on the marked price and the rest at a discount of 40% on the marked price. His total gain is:
Level 2 Profit and Loss Discounts
Let C.P. of whole stock = Rs. 100.
Then, marked Price of whole stock = Rs. 120
M.P of \(\frac{1}{2}\) stock = Rs.60
M.P of \(\frac{1}{4}\) stock = Rs. 30
Total S.P = Rs. [ 60+ (80% of 30) + (60% of 30)] = Rs. (60+24+18)= Rs. 102
Hence, gain% = (102-100)% = 2%
Two coal loading machines each working 12 hours per day for 8 days handle 9000 tonnes of coal with an efficiency of 90% while 3 other coal loading machines at an efficiency of 80% are set to handle 12000 tonnes of coal in 6 days. Find how many hours per day each should work.
Level 2 Chain Rule Work and Quantity Proportion
Let the number of working hours per day be x
More machines, Less working hours per day (Indirect Proportion)
Less days, More working hours per day (Indirect Proportion)
More coal, More working hours per day (Direct Proportion)
Less efficiency, More working hours per day (Indirect Proportion)
Machines 3 : 2 : : 12 : x
Days 6 : 8 : : 12 : x
Coal 9000 : 12000 : : 12 : x
Efficiency 80 : 90 : : 12 : x
3*6*9000*80*x = 2*8*12000*90*12
x = \(\frac{2*8*12000*90*12}{3*6*9000*80}\) = 16
Hence, each machine should work for 16 hours per day.
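Compound-proportion setups like this are easy to sanity-check by multiplying the known working hours by each ratio directly (an illustrative Python check, not part of the original solution):

```python
# machines * hours/day * days * efficiency is proportional to coal handled.
# Known case: 2 machines, 12 h/day, 8 days, 90% efficiency -> 9000 tonnes.
# Unknown:    3 machines, x h/day, 6 days, 80% efficiency -> 12000 tonnes.
x = 12 * (2 / 3) * (8 / 6) * (12000 / 9000) * (90 / 80)
print(x)  # 16.0 hours per day
```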
Two men undertake to do a piece of work for Rs.1400. The first man alone can do this work in 7 days while the second man alone can do this work in 8 days. If they working together complete this work in 3 days with the help of a boy, how should the money be divided?
Rs.600, Rs.525, Rs.275
Boy's 1 day's work = \(\frac{1}{3}\)-(\(\frac{1}{7}\)+\(\frac{1}{8}\))=\(\frac{11}{168}\)
Ratio of wages of the first man, second man and boy = \(\frac{1}{7}\) : \(\frac{1}{8}\) : \(\frac{11}{168}\) = 24 : 21 : 11
First man's share = Rs.\(\frac{24}{56}\)*1400 = Rs.600
Second man's share = Rs.\(\frac{21}{56}\)*1400 = Rs.525
Boy's share = Rs.1400-(600+525) = Rs.275
There are 8 equidistant points A,B,C,D,E,F,G and H in the clockwise direction on the periphery of a circle. In a time interval t, a person reaches from A to C with uniform motion while another person reaches the point E from the point B during the same time interval with uniform motion. Both the persons move in the same direction along the circumference of the circle and start at the same instant. How much time after the start, will the two persons meet each other?
Level 3 Time and Distance
Let distance between each point be x.
Also, let speed of persons from point A and B be u and v respectively.
Distance between A and C = 2x = ut, so u = \(\frac{2x}{t}\)
Distance between B and E = 3x = vt, so v = \(\frac{3x}{t}\)
Relative speed of person from B with respect to A
= \(\frac{3x}{t}\) - \(\frac{2x}{t}\) = \(\frac{x}{t}\)
Distance between A and B = 7x
∴ Persons will meet after time= \(\frac{7x}{\frac{x}{t}}\) =7t
Boat A travels downstream from Point X to Point Y in 3 hours less than the time taken by Boat B to travel upstream from Point Y to Point Z. The distance between X and Y is 20 km, which is half of the distance between Y and Z. The speed of Boat B in still water is 10 km/h and the speed of Boat A in still water is equal to the speed of Boat B upstream. What is the speed of Boat A in still water?
Let the speed of the current = s kmph
The time taken by boat A = \(t_{a}\)
And the time taken by boat B = \(t_{b}\)
Distance between point X and point Y = 20 km
Distance between point Y and point Z = 40 km
Since boat B travels upstream, \(\frac{40}{t_{b}}\) = 10 - s ...(i)
\(t_{a}\) = \(t_{b}\) - 3
The speed of boat A in still water = the speed of boat B upstream = (10 - s) km/h
So the downstream speed of boat A = (10 - s) + s = 10 km/h, and \(t_{a}\) = 20/10 = 2 hrs
Hence, \(t_{b}\) = \(t_{a}\)+3 = 2+3 = 5 hrs
By using (i),
\(\frac{40}{5}\) = 10 - s
s = 2 kmph
The speed of boat A in still water = 10 - 2 = 8 km/h
Two equal sums of money are lent at the same time at 8% and 7% per annum simple interest. The former is recovered 6 months earlier than the latter and the amount in each case is Rs.2560. The sum and the time for which the sums of money are lent out are
Rs.2000, 3.5 years and 4years
Rs.2000, 4 years and 5.5years
Let each sum = Rs.x
Let the first sum be invested for (T- \(\frac{1}{2}\)) years and the second sum for T years.
Then, x+ \(\frac{x*8*(T-1/2)}{100}\)=2560
100x+8xT-4x=256000
96x+8xT=256000 ...(i)
And, x+ \(\frac{x*7*T}{100}\)=2560
100x+7xT=256000 ...(ii)
From (i) and (ii),
96x+8xT=100x+7xT
4x=xT
Putting T=4 in (i), we get 96x+32x=256000
128x=256000
x=2000
Hence, each sum = Rs.2000, time periods = 4 yrs and 3 \(\frac{1}{2}\) years
Read both the sentences carefully and decide on their correctness on the basis of the italicised words.
I. The tragic tale narrated by the old man affected all the children.
II. The humane attitude of the new manager affected a profound change in labour relations.
If only sentence I is correct
If only sentence II is correct
If both sentences I and II are correct
if I as well as II are incorrect, but both could be made correct by interchanging the italicised words;
The correct word in II should be 'effected'
effect = result,
affect = to influence.
Read the following passage carefully and answer the questions that follow.
Over all the countryside, wherever one goes, indications of technique are visible to the seeing eye. By technique is meant an exercise of skill acquired by practice and directed to a well-foreseen end. It is the name for the action of any of our powers after they have been so improved by training as to perform that action with certainty and success.
The most important aspect of 'technique', as defined in the passage, is the use of skill
Comprehension II
for handling tools and machines
for an understanding of the functions of tools and machinery
for observation and analysis
for a definite purpose
In the following passage, some blanks have been numbered. These numbers are printed below the passage and against each, some words are suggested, one of which fits the blank appropriately. Find out the appropriate words.
It is not proper to damn a system without understanding it. The Indian bureaucracy may not be as bad, after all, as it is made out to be. Times without number, it has been (1) that our bureaucrat is a (2) creature who has the habit of sitting (3) the files and also happens to sleep (4) the reminders. What is worse is (5) his own word. He turns a (6) ear to the visitor's request and binds his hands and feet with (7) tape. However, in all fairness (8) the bureaucracy, it is necessary to note that a major reason for its (9) to discharge its functions (10) throughout has been the concentration of power in the hands of the politicians. The bureaucrat may be a devil, but because he has his own share of difficulties, we would not deny him his due.
Word for (10) is
elegantly
equitably
Some people believe that marriages are made in heaven. One cannot say (1) this is true or not. However, in America now many (2) who get married seek to evolve a fool proof (3) to ensure that the marriage survives. However, the idea of married persons (4) the burden of domestic chores, instead of all the dirty work being dumped on the woman has (5) been propagated by the feminist (6) and it has gradually, if grudgingly, been (7). This arrangement may work on a temporary basis but taking (8) of a particular domestic work on a permanent basis will pose problems of its own. For instance, taking out the rubbish may (9) make a refreshing change from washing baby's nappies. However, a contract is a contract and must be (10) you must not like it but this is what life is coming out to be in the most modern of the countries in the world.
Word for (1) is
Choose the most appropriate phrase to complete the sentence.
Every person must learn .............
Choosing the Appropriate Filler
to make wise use of his time
to using his time in a wisely manner
that his time needs a wise use
wise ways in his time's use
In the following passage, there are words/groups of words printed in italics, each of which has been numbered. These numbers are printed below the passage and against each, three words/groups of words are suggested which can substitute the word printed in italics. Find out the choice which can correctly substitute that word. If the word/group of words is correct as it is and no correction is required, give 'No correction required' as your answer.
What looks very much like genocide has been (1) taking place in Rwanda. People are pulled on (2) cars and buses, ordered to defer (3) their identity papers and then killed on the spot until (4) they belong to the wrong ethnic group. Thousands of bodies have already given (5) up, and the peace (6) continues despite the present (7) of 1700 United Nations peace keepers.
Rearrange the following sentences into a meaningful sequence and then answer the questions that follow.
(A) Absence of other parental figures in the family has worsened the situation.
(B) Drug abuse among youngsters seems to be on the increase.
(C) The need for many mothers to work has put an additional claim on the time available for the children.
(D) Breakdown of the extended family has put a lot of burden on the parents.
(E) Psychologists attribute this to the growing alienation of the new generation from their parents.
Which of the following is to be the third sentence?
Reconstruction of paragraphs
In the following question, a paragraph or a sentence has been broken up into different parts. The parts have been scrambled and numbered as given below. Choose the correct order of these parts from the given alternatives.
(1) The African elephant is usually larger
(2) being about three and a half meters in height
(3) than the Indian
(4) and 6000 kg in weight
(5) It has enormous ears
(6) which are valued for the ivory
(7) and very long tusks
(8) that they contain
Rearrangement of Jumbled Parts
Select the combination of numbers so that letters/words arranged accordingly will form a meaningful word.
R E S T L U
In the following question, a sentence has been given in Active (or Passive) Voice. Out of the four alternatives suggested select the one which best expresses the same sentence in Passive (or Active) voice.
I saw him leaving the house.
He has been seen leaving the house
He was seen to be leaving the house
Leaving the house he was seen by me
Rain disrupted the last day's play between India and Sri Lanka.
The last day's play of India and Sri Lanka was disrupted by rain.
India and Sri Lanka play of the last day was disrupted by rain
The last day's play between India and Sri Lanka was disrupted by rain
In each of the following questions, choose from the given words below the two sentences that word which has the same meaning and can be used in the same context as the part given in italics in both the sentences.
I. The government decided to start a new series of lectures called the 'Honour Lecture Series'
II. The decision to set up a new University in that town was welcomed by the people.
Double Synonyms
Find which of the given alternatives is either a synonym or antonym of the word given below.
Vehemently
Vocabulary Test
Openly
Widely
Abruptly
Forcefully
Forcefully : Synonym
In each question below are given two statements followed by two conclusions numbered I and II. You have to take the given two statements to be true even if they seem to be at variance with commonly known facts. Read the conclusions and then decide which of the given conclusions logically follows from the two statements, disregarding commonly known facts.
Give answer
a. if only conclusion I follows;
b. if only conclusion II follows;
c. if either I or II follows;
d. if neither I nor II follows
e. if both I and II follows
Statements : No bat is ball. No ball is wicket.
Conclusions : I. No bat is wicket II. All wickets are bats.
Since both the premises are negative, no definite conclusion follows.
Statement : Should individuals/institutions having treasures of national significance, like Nobel Prizes, hand them over to the Central Government for safe custody?
I. Yes. The individuals or institutions do not have enough resources to protect them.
II. No. These are the property of the individuals/institutions who win them and should be in their custody.
The awards are given for individual excellence and perfection. So, only argument II holds strong.
In each question below is given a statement followed by three courses of action numbered I, II and III. You have to assume everything in the statement to be true, then decide which of the three given suggested courses of action logically follows for pursuing.
Statement : Poverty is increasing because the people, who are deciding how to tackle it, know absolutely nothing about the poor.
Courses action :
I. The decision makers should go to the grass root levels.
II. The decision makers should come from the poorer sections of the society.
III. A new set of decision makers should replace the existing one.
Only I follows.
The statement indirectly asserts that the decision makers can work effectively to eliminate poverty, only if they get to know the basic problems afflicting the poor people through interaction with them. So, only I follows.
Statement : Vegetable prices are soaring in the market.
I. Vegetables are becoming a rare commodity.
II. People cannot eat vegetables.
The availability of vegetables is not mentioned in the given statement. So, I does not follow. Also, II is not directly related to the statement and so it also does not follow.
I. The university authority has decided to conduct all terminal examinations in March/April every year to enable them to declare results in time.
II. There has been considerable delay in declaring results in the past due to shortage of teachers evaluating the answer sheets of the examination conducted by the university.
Each statement is self-sufficient in itself and stands independent of the other.
In a certain code, CERTAIN is written as XVIGZRM and SEQUENCE is written as HVJFVMXV. How will MUNDANE be written in that code?
NFMWZMX
NFMWZMV
NFMXZMV
Each letter in the word is replaced by the letter which occupies the same position from the other end of the alphabet, to obtain the code.
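This mirror mapping (A↔Z, B↔Y, ...) is the classic Atbash substitution; the short Python sketch below (illustrative, not from the original material) confirms the answer:

```python
# Replace each letter by the letter in the same position from the
# other end of the alphabet (A<->Z, B<->Y, ...).
def encode(word):
    return "".join(chr(ord("Z") - (ord(c) - ord("A"))) for c in word)

print(encode("CERTAIN"))   # XVIGZRM
print(encode("SEQUENCE"))  # HVJFVMXV
print(encode("MUNDANE"))   # NFMWZMV
```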
A,P,R,X,S and Z are sitting in a row. S and Z are in the centre, and A and P are at the ends. R is sitting on the left of A. Then who is sitting on the right of P?
R is on the left of A i.e., R, A.
A and P are at the ends i.e., P, -, -, -, R, A.
S and Z are at the centre i.e., P, -, S, Z, R, A.
Thus, the arrangement in the row is: P, X, S, Z, R, A.
Clearly, X is on the right of P.
The number of letters missed is not uniform
BFJNRV
DGJMPS
MORTXY
PRTVXZ
B F J N R V
D G J M P S
M O R T X Y
P R T V X Z
Y W @ 1 & C N 3 P L B 9 ↑ = D * E 2 £ M V $ 7 # 4 F G 5
If the numbers immediately preceding the symbols are assigned values double their numerical value, then what will be the sum of the values of all such numbers?
Required sum = 2*(1+9+2+7) = 2*19 = 38
A newspaper always has
There has been a recent death in your family, and you are still grieving. However, your quarterly appraisal is round the corner, and for this you have to catch up with a lot of work. What would you do?
Take the help of your organisation's counselor to overcome your emotions
Ignore the appraisal and continue grieving; since the appraisal happens every three months, you feel you can make up for it next time.
You will try your best to wriggle out of the situation by asking your superior to postpone the appraisal for you this time.
You will get back to work immediately
Assertion (A) : India has a tropical monsoon type climate.
Reason (R) : India is located exactly between the tropical latitudes.
India has a tropical monsoon type climate owing to its geographical relief. Only the lower half of India lies within the tropical latitudes, as the Tropic of Cancer passes through its centre.
Given an input, a machine generates pass code for the six batches each day as follows:
Input : these icons were taken out from the sea
Pass code :
Batch I : from sea the out taken were icons these
Batch II : from icons these were taken out the sea
Batch III : from icons out sea the taken were these
Batch IV : from icons out sea these were taken the
The first batch starts at 10.00 a.m. and each batch is for one hour. There is a rest period of one hour after the end of the fourth batch.
What will be the pass code for the batch at 3.00 p.m. if the input is 'four of the following five form a group'?
Sequential Output Tracing
a five following form four group the of
a five following form group the of four
a five following form four of the group
a five following form four group of the
The pattern followed is as under:
In the first step, the word which comes first in the dictionary is placed at the first place and the remaining words are written in a reverse order.
In the second step, the word which comes second in the dictionary is placed at the second place and all words except the first and the second are written in a reverse order. The process continues in the same manner to give the pass codes for the subsequent batches; a short program reproducing this pattern follows the batch listing below.
Input : four of the following five form a group
Batch I (10 a.m. to 11 a.m.) : a group form five following the of four
Batch II (11 a.m. to 12 noon) : a five four of the following form group
Batch III (12 noon to 1 p.m.) : a five following group form the of four
Batch IV (1 p.m. to 2 p.m.) : a five following form four of the group
Rest hour (2 p.m. to 3 p.m.)
Batch V (3 p.m. to 4 p.m.) : a five following form four group the of
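The sketch below is an illustrative Python reimplementation of the rearrangement rule (the function name `passcode` is made up here, not from the original material); it reproduces each batch's pass code:

```python
# At step k, move the k-th dictionary word into position k and
# reverse the order of all the words that follow it.
def passcode(words, batch):
    words = words[:]
    for k in range(batch):
        target = sorted(words[k:])[0]
        i = words.index(target, k)
        words = words[:k] + [target] + list(reversed(words[k:i] + words[i + 1:]))
    return " ".join(words)

sentence = "four of the following five form a group".split()
print(passcode(sentence, 5))  # a five following form four group the of
```

Batch V corresponds to the 3.00 p.m. slot once the rest hour is accounted for, matching the pass code above.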
Kunal walks 10 km towards North. From there, he walks 6 kms towards South. Then, he walks 3 kms towards East. How far and in which direction is he with reference to his starting point?
5 kilometers West
5 kilometres North-east
7 Kilometres East
7 kilometres West
The movements of Kunal are as shown in figure (A to B, B to C and C to D).
AC=(AB-BC)=(10-6)km = 4 km
Clearly, D is to the North-east of A.
Kunal's distance from starting point A
\(=AD=\sqrt{AC^{2}+CD^{2}}=\sqrt{4^{2}+3^{2}}=\sqrt{25}=5\, km\)
So, Kunal is 5 km to the North-east of his starting point.
In each of the following questions, arrange the given words in a meaningful sequence and then choose the most appropriate sequence from amongst the alternatives provided below each question:
1. Atomic age
2. Metallic Age
3. Stone Age
4. Alloy Age
Six persons A, B, C, D, E and F are sitting around a round table facing towards the centre of the table in a restaurant. They have ordered different items (Pizza, Strawberry, Vanilla, Burger, Pastries and Patties) for their lunch. They are wearing T-shirts of different colours, i.e., white, black, green, red, yellow and blue. The order of items for lunch and the colours of the T-shirts are not necessarily according to the order of their names.
I. The persons who have ordered for Pizza, Vanilla and Pastries are neither in white T-shirt nor in black.
II. The persons who are in green and yellow T-shirts have neither ordered for Pizza nor for Vanilla.
III. A is neither in white T-shirt nor on the immediate left of the person who has ordered for Burger.
IV. The only person who is between E and F eats Strawberry. The person who is on the left side of the person in white T-shirt does not eat patties.
V. D has ordered Burger and the colour of his T-shirt is green. He is facing the person who has ordered for strawberry.
VI. One who has ordered for Pizza is seated opposite to the person wearing blue T-shirt, while the person whose T-shirt is of green colour is on the left of the person who has ordered for Pastries.
VII. One who has ordered for Patties is on the immediate right of the person in white T-shirt but on the immediate left of the person who has ordered for Vanilla.
VIII. C has not ordered for Vanilla, while F has not ordered for Pizza.
Who among the following is in white T-shirt?
D orders Burger and wears a green T-shirt. The person opposite D orders Strawberry. Now, the persons wearing white and black do not order Pizza, Vanilla, Pastries or Burger. So, they order Strawberry or Patties. But the one who orders Patties is on the immediate right of the person in the white T-shirt. So, the person wearing white orders Strawberry, while the person who orders Patties wears black and is seated to the right of the person in the white T-shirt.
Now, the person who orders for Patties is on the immediate left of the person who orders for Vanilla. Clearly, the person to the left of the person in white, orders for Pizza. But he is opposite to the person wearing blue T-shirt. So, the person who orders for Vanilla, wears blue T-shirt. Now the person who orders for Pizza must be wearing yellow or red. But a person with yellow T-shirt doesn't order for Pizza. So, he wears red. Thus, the person who orders for Pastries wears yellow T-shirt.
The person who likes Strawberry is the only person between E and F. But F does not like Pizza. So, E orders for Pizza and wears a red T-shirt, while F orders for Patties and wears a black T-shirt. Now, A neither wears white nor sits to the immediate left of D. So, A orders for Pastries and wears a yellow T-shirt. C does not order for Vanilla. So, C orders for Strawberry and wears a white T-shirt. Thus, B orders for Vanilla and wears a blue T-shirt.
C wears a white T-shirt.
Beni-Suef University Journal of Basic and Applied Sciences
QSAR and molecular docking studies of novel 2,5-disubstituted-1,3,4-thiadiazole derivatives containing 5-phenyl-2-furan as fungicides against Phytophthora infestans
Yusuf Isyaku ORCID: orcid.org/0000-0001-7436-35071,
Adamu Uzairu1 &
Sani Uba1
Beni-Suef University Journal of Basic and Applied Sciences volume 9, Article number: 11 (2020)
The 1,3,4-thiadiazoles are among the structural moieties that were found to be of utmost importance in the fields of pharmacy and agrochemicals because of their widespread biological activity that includes anti-tumor, antibacterial, anti-inflammatory, antihypertensive, anti-tuberculosis, anticonvulsant, and antimicrobial, among others.
QSAR and molecular docking studies were carried out on thirty-two (32) derivatives of 2,5-disubstituted-1,3,4-thiadiazoles for their antifungal activities toward Phytophthora infestans. Using the graphical user interface of Spartan14 software, the structures of the dataset compounds were drawn and then optimized with the DFT/B3LYP/6-31G* quantum mechanical method. Molecular descriptors of the optimized compounds were calculated and the compounds were then divided into a training set and a test set (at a ratio of 3:1). The training set was used for model generation and the test set for external validation of the generated model. Four models were generated by the employment of genetic function approximation (GFA), of which the optimal model (model 4) had the following statistical parameters: R2 = 0.798318, R2adj = 0.750864, cross-validated R2 (Q2cv) = 0.662654, and external validation R2pred = 0.624008. In the molecular docking study of the thiadiazole compounds with the target protein of the Phytophthora infestans effector site (PDB ID: 2NAR), compound 13 showed the highest binding affinity, with a − 9.3 kcal/mol docking score, and formed hydrophobic as well as H-bond interactions with the target protein (2NAR).
The result of the QSAR study signifies the stability and robustness of the built model, considering the validation parameters, and gives an idea for template/ligand-based design, while the molecular docking study revealed the binding interaction between the ligand and the protein site, which gives an insight toward an "optimization method" of structure-based design for the discovery of more potent compounds with better activity against Phytophthora infestans using the approach of computer-aided drug design (CADD) in plant pathology.
Phytophthora infestans (the cause of potato blight) may be the most destructive of all plant pathogens; it excessively damages potato/Irish potato and led to famines and emigration in the nineteenth century [13, 14]. Some of the signs and symptoms of this disease can be seen as a white coating on the potato. P. infestans generates sporangia on the stems and leaves of the potato [15]. The sporangia usually appear on the lower surface of the leaves. However, in the case of tuber blight, white hyphae usually appear on the surface of the tuber [11]. Under normal circumstances, P. infestans completes its life cycle on potato or tomato leaves in approximately 5 days [22]. The sporangia formed at the surface of the foliage disperse through plants at temperatures above 10 °C (50 °F) and humidity above 75–80% for two or more days. Sometimes the spores are washed away by the rain, get into the soil, and infect early-stage tubers; these spores can also travel long distances in the air and easily reach another host. The early stages of the blight may go unnoticed. Some of the symptoms involve dark blotches displayed at the extreme end of the leaf and on the plant's stem. A grey/dark patch develops on the affected tuber, covering the skin and rapidly decomposing it with an unpleasant odor, and apparently healthy tubers may later rot in storage. According to the FAO report, the most pressing problem in the third world, apart from poverty, must be food shortages. Farmers in Africa encounter distinct limitations in the production of food as well as cash crops. Some of those limitations include damage from diseases and pests such as fungi. In the search for food and the fight for human survival, the Irish potato has a significant role to play in the food supply and, therefore, has been an instrument in addressing the issues of food insecurity, owing to its yield in a given area and in a given time. Potato blight causes excessive economic losses; the annual economic loss caused by P. infestans in developing countries approaches the $3-billion mark [5]. Due to its rapid adaptation to various management strategies (such as genetic resistance), control of this plant pathogen is really challenging [10]. This makes the synthesis of novel compounds that will inhibit the dangerous P. infestans among the most important goals in the field of agrochemicals, and some of the related research includes computational studies.
The 1,3,4-thiadiazole derivatives are among the structural moieties found to be of utmost importance in the fields of pharmacy and agrochemicals for their widespread biological activities such as anti-tumor [28], antibacterial [25], anti-inflammatory [19], antihypertensive [30], antituberculosis [23], anticonvulsant [18], and antimicrobial [2], among others. Furthermore, reports indicate that compounds containing furan are highly bioactive. Several studies on furan derivatives, such as pyrazole and triazole [6] and diacyl-hydrazine derivatives [7] containing the 5-phenyl-2-furan moiety, have been carried out, and these compounds appeared to have extensive biological activities including fungicidal and insecticidal activities, among others.
The quantitative structure–activity relationship (QSAR) study aims to develop models correlating the activity of compounds with other chemical information in a statistical approach [16, 27], which leads us to the design of new compounds. A molecular docking study, meanwhile, is "a way of predicting the favorable orientation of one molecule to another when reacted to produce a stable complex"; it likewise leads us to the design of more potent compounds.
Our aim in this research work is to predict highly active compounds by the employment of genetic function approximation (GFA) and to perform a molecular docking study between the 1,3,4-thiadiazole compounds and the 2NAR protein of P. infestans to predict their stable molecular orientation.
Thirty-two derivatives of 2,5-disubstituted-1,3,4-thiadiazole containing 5-phenyl-2-furan used in this work were taken from the literature [8]. The activities of the compounds were reported as EC50 (g/L) values, which were converted to pEC50 (pEC50 = log(1/EC50) = −log EC50). Presented in Fig. 1 and Table 1 are the molecular structures and their corresponding activities in the dataset.
Parent structure of the dataset compounds
Table 1 Compounds and their pEC50 values
Molecular structure optimization
The structures of the compounds were optimized at the density functional theory (DFT) level, using Becke's three-parameter Lee–Yang–Parr hybrid functional (B3LYP) together with the 6-31G* basis set of Spartan14 [4]. In this process, all the molecular structures were drawn in the graphical user interface of Spartan14 software. The energies of the drawn molecules were first minimized using a Molecular Mechanics Force Field (MMFF) calculation [3].
Molecular descriptor calculations
Molecular descriptors are properties of a molecule expressed as numerical/mathematical values. PaDEL-Descriptor software was used to calculate the molecular descriptors of the low-energy conformers, where a total of 1875 descriptors were calculated.
Dataset splitting
Using the Kennard–Stone algorithm, the dataset of 32 compounds was split into two parts: the training set and the test set (70% to the training set and 30% to the test set), as implemented in DatasetDivision GUI 1.2 software. The training and test sets were used for model development and its external validation, respectively [12].
The training set was used for model generation through the employment of the GFA method available in Material Studio. The regression analysis was carried out with the inhibition concentration (pEC50) as the dependent variable, while the chosen descriptors served as independent variables.
Internal validation
Internal validation of the 22 compounds of the training set took place in the software (Material Studio) used for building the model. The validation parameters are as follows:
Cross-validation
This parameter was used to determine the ability of the QSAR model in predicting the activities of newly designed compounds. This indicates the stability of the built model.
$$ {Q}_{\mathrm{cv}}^2=1-\frac{\sum {\left({Y}_{\mathrm{pred}}-{Y}_{\mathrm{exp}}\right)}^2}{\sum {\left({Y}_{\mathrm{exp}}-\overline{Y}\right)}^2} $$
where Yexp is the "observed/experimental activity", Ypred is the "predicted activity", and \( \overline{Y} \) is the "mean value of the observed activity".
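As an illustration of how this statistic is computed (a minimal Python sketch, not code from the paper; the predicted values are assumed to come from leave-one-out cross-validation):

```python
import numpy as np

def q2_cv(y_exp, y_pred):
    """Cross-validated Q^2 = 1 - PRESS/TSS, following equation (i)."""
    y_exp, y_pred = np.asarray(y_exp), np.asarray(y_pred)
    press = np.sum((y_pred - y_exp) ** 2)      # predictive residual sum of squares
    tss = np.sum((y_exp - y_exp.mean()) ** 2)  # total sum of squares
    return 1 - press / tss
```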
Friedman's lack of fit (LOF)
This parameter describes the measure of the fitness of the model and is given by equation (ii) below:
$$ \mathrm{LOF}=\frac{\mathrm{SEE}}{{\left(1-\frac{C+ dP}{M}\right)}^2} $$
where SEE is the standard error of estimate,
$$ \mathrm{SEE}=\sqrt{\frac{\sum {\left({Y}_{\mathrm{exp}}-{Y}_{\mathrm{pred}}\right)}^2}{N-P-1}} $$
C is the "number of terms in the model", d is the "user-defined smoothing parameter", P is "the total number of descriptors in the model", and M is "the number of molecules in the training set".
The regression model is given by the equation of a straight-line graph, "(Y = mx + c)",
$$ Y={D}_1{x}_1+{D}_2{x}_2+{D}_3{x}_3\dots .+{D}_n{x}_n+c $$
(iv)
where Y is the predicted activity (pEC50), D is the corresponding coefficients, x is the independent variable, and c is the regression constant [17].
The correlation coefficient (R2)
This is another parameter used to assess the model. The closer the value of R2 to 1.0, the better the model generated. R2 is expressed as:
$$ {R}^2=1-\frac{\sum {\left({Y}_{\mathrm{exp}}-{Y}_{\mathrm{pred}}\right)}^2}{\sum {\left({Y}_{\mathrm{exp}}-{\overline{Y}}_{\mathrm{train}}\right)}^2} $$
The value of R2 increases as more descriptors are added; therefore, the reliability of R2 alone in measuring the stability of a given model is very limited. Thus, R2 has to be adjusted in order to obtain a fit and strong model. The following equation defines the adjusted R2 [1]:
$$ {R}_{\mathrm{adj}}^2=1-\frac{\left(1-{R}^2\right)\left(n-1\right)}{n-P-1}=\frac{\left(n-1\right){R}^2-P}{n-P-1} $$
where P is the number of independent variables in the model and n is the number of training-set compounds [21].
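Both statistics follow directly from the residuals; a brief illustrative sketch (not the authors' code):

```python
import numpy as np

def r2_and_adjusted(y_exp, y_pred, P):
    """R^2 (equation v) and adjusted R^2 (equation vi); P is the descriptor count."""
    y_exp, y_pred = np.asarray(y_exp), np.asarray(y_pred)
    r2 = 1 - np.sum((y_exp - y_pred) ** 2) / np.sum((y_exp - y_exp.mean()) ** 2)
    n = len(y_exp)
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - P - 1)
    return r2, r2_adj
```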
External validation
The model generated was further validated with the test set of the dataset in order to measure its competence in predicting the activity of new compounds. This was done by evaluating the squared correlation coefficient (R2) calculated over the test set. The closer R2 is to 1.0, the better the robustness, fitness, and prediction capacity of the model. However, a high R2 value alone does not matter if the model fails other statistical analyses such as the variance inflation factor (VIF) and mean effect, among others. The coefficient of determination R2pred is given by the following equation:
$$ {R}^2=1-\frac{\sum {\left({Y}_{{\mathrm{pred}}_{\mathrm{test}}}-{Y}_{\exp_{\mathrm{test}}}\right)}^2}{\sum {\left({Y}_{\exp_{\mathrm{test}}}-{\overline{Y}}_{\mathrm{train}}\right)}^2} $$
where \( {Y}_{{\mathrm{pred}}_{\mathrm{test}}} \) and\( {Y}_{\exp_{\mathrm{test}}} \) are the values of predicted and experimental activities for the test set and \( \overline{Y} \)train is the average activity for the training sets' values [3].
Statistical analysis of the descriptors
Variance inflation factor (VIF)
VIF is defined as the measure of multicollinearity amongst the independent variables (i.e., descriptors). It quantifies the extent of correlation between one predictor and the other predictors in a model.
$$ \mathrm{VIF}=\frac{1}{\left(1-{R}^2\right)} $$
(viii)
where R2 gives the multiple correlation coefficient between the variables within the model. If the VIF is equal to 1, there is no intercorrelation among the variables; if it ranges from 1 to 5, the model is said to be suitable and acceptable. But if the VIF turns out to be greater than 10, this indicates the instability of the model, which then needs to be reexamined [20, 26].
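One minimal way to compute VIF values is to regress each descriptor on the remaining ones by ordinary least squares (an illustrative sketch, not from the paper):

```python
import numpy as np

def vif(X):
    """VIF for each column of the descriptor matrix X, via equation (viii)."""
    X = np.asarray(X, dtype=float)
    vifs = []
    for j in range(X.shape[1]):
        y = X[:, j]
        # Regress descriptor j on all other descriptors plus an intercept.
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid @ resid / np.sum((y - y.mean()) ** 2)
        vifs.append(1 / (1 - r2))
    return vifs
```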
Mean effect (ME)
The average effect (mean effect) relates the influence of the molecular descriptors in the model to the activities of the compounds. The sign of each descriptor shows the direction of its effect on the activity, that is, whether an increase or decrease in the value of the descriptor will improve the activity of the compounds. The mean effect is defined by the following:
$$ \mathrm{Mean}\ \mathrm{effect}=\frac{B_j{\sum}_i^n{D}_j}{\sum_j^m\left({B}_j{\sum}_i^n{D}_j\right)} $$
where Bj is the coefficient of descriptor j in the model and Dj is the value of that descriptor in the training set, while m and n stand for the number of molecular descriptors and the number of molecules in the training set, respectively. To evaluate the significance of the model, the mean effect of each descriptor was calculated [9].
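Equation (ix) translates into a few lines of Python (an illustrative sketch; B and D are assumed to be the vector of model coefficients and the training-set descriptor matrix):

```python
import numpy as np

def mean_effects(B, D):
    """Mean effect of each descriptor, following equation (ix)."""
    B, D = np.asarray(B), np.asarray(D)
    contrib = B * D.sum(axis=0)  # B_j * sum_i D_ij for each descriptor j
    return contrib / contrib.sum()
```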
Applicability domain
To confirm the reliability of the model and to examine the outliers as well as the influential compounds, it is very important to evaluate the applicability domain of the built model. It aims at predicting the uncertainty of a compound depending on its similarity to the compounds used in building the model and on the distance between the training and test sets of compounds. This can be achieved by employing William's plot, in which standardized residuals are plotted against leverages. The leverage for a particular chemical compound is given as follows:
$$ {h}_i={Z}_i{\left({Z}^T.Z\right)}^{-1}\ {Z_i}^T $$
where hi is the leverage for a particular compound, Zi is row i of the training-set matrix, Z is the n×k descriptor matrix of the training-set compounds, and ZT is the transpose of the Z matrix. The warning leverage (h*), which is the boundary for normal values of Z outliers, is given by:
$$ {h}^{\ast }=3\frac{\left(p+1\right)}{n} $$
(xi)
where n is the number of molecules in the training set, whereas p gives the number of descriptors present in the built model [17].
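Equations (x) and (xi) can be sketched as follows (illustrative only; Z_train and Z_query are assumed descriptor matrices for the training compounds and the compounds being checked):

```python
import numpy as np

def leverages(Z_train, Z_query):
    """Leverage h_i = z_i (Z^T Z)^-1 z_i^T for each query compound (equation x)."""
    Z_train = np.asarray(Z_train, dtype=float)
    Z_query = np.asarray(Z_query, dtype=float)
    G = np.linalg.inv(Z_train.T @ Z_train)
    return np.einsum("ij,jk,ik->i", Z_query, G, Z_query)

def warning_leverage(n, p):
    """Warning leverage h* = 3(p + 1)/n (equation xi)."""
    return 3 * (p + 1) / n
```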
Molecular docking studies
With the aid of AutoDock Vina in the PyRx software and Discovery Studio, a molecular docking study was performed between the 2,5-disubstituted-1,3,4-thiadiazole derivatives and the P. infestans effector target site to examine the interaction between the binding pocket of the effector and the compounds (i.e., the ligands). A high-resolution crystal structure of the P. infestans effector was downloaded from the Protein Data Bank (PDB code: 2NAR). The downloaded protein was carefully prepared using Discovery Studio and then imported into PyRx for the docking calculation. With the aid of Spartan14 version 1.1.4, the optimized 2,5-disubstituted-1,3,4-thiadiazole derivatives (the ligands) were converted to PDB files [24]. The prepared structure of the P. infestans effector site and the prepared ligands were docked using AutoDock Vina 4.2 [29]. Discovery Studio Visualizer was also used to visualize the docking results (Fig. 2).
X-ray structure of the downloaded protein and the prepared ligand
Model building and validation
Below is the equation of the best-chosen model (4).
$$ {\mathrm{pEC}}_{50}=Y=0.037511826\ast \mathrm{AMR}+4.846246933\ast \mathrm{SCH}-7+0.021854712\ast \mathrm{WG}.\mathrm{unity}+0.3299691\ast \mathrm{Wnu}2.\mathrm{eneg}-6.116815304 $$
The validation parameters are shown in Tables 2 and 3 below.
Table 2 Validation parameters of the model 4
Table 3 Minimum recommended values of validated parameters for a generally acceptable QSAR
In the first model, pEC50 = 0.244535617 × BCUTp-1l − 22.874691031 × SCH-6 + 0.213428935 × WA.mass − 0.025525444 × Wgamma3.volume + 10.325883792, R2test = 0.395084, R2train = 0.824826, R2adj = 0.783609, R2cv = 0.56979, Ntest = 10, Ntrain = 22, LOF = 0.12303, and min experimental error for non-significant LOF (95%) = 0.12303.
In the second model, pEC50 = 0.297814107 × nCl + 0.168441873 × nBondsS3 + 0.001197233 × PPSA-1 − 0.024107696 × Wgamma3.volume + 0.268877261, R2test = 0.206664, R2train = 0.807874, R2adj = 0.762668, R2cv = 0.558932, Ntest = 10, Ntrain = 22, LOF = 0.12885, and min experimental error for non-significant LOF (95%) = 0.128845.
In the third model, pEC50 = 0.139831691 × nHeavyAtom + 0.314911162 × nCl + 0.001443139 × PPSA-1 − 0.024455939 × Wgamma3.volume − 2.325579534, R2test = 0.3681753, R2train = 0.800593, R2adj = 0.753674, R2cv = 0.516043, Ntest = 10, Ntrain = 22, LOF = 0.13126, and min experimental error for non-significant LOF (95%) = 0.131264.
Table 4 and Table 5 present the external validation and the calculation of the predicted R2 of the chosen model.
Table 4 External validation
Table 5 Calculations of predicted R2
Statistical analyses of the descriptors
The following are the different analyses: Pearson's correlation, standard regression coefficients, standardized predicted activity against experimental activity, standardized residual against experimental activity (pEC50), and William's plot.
The results of the docking study
The results can be seen in the receptor–ligand interaction, H-bond interactions, and hydrophobic and electrostatic interactions.
QSAR model
The best QSAR model was generated using the GFA method. Four descriptors were used in building the model, four different models were generated, and model 4 was found to be the best based on the statistical parameters. All the values obtained match the minimum recommended values for evaluating a QSAR model. These values signify that there is a high correlation between the predicted and experimental activity (pEC50, Fig. 3). Internal and external validations, as well as the other statistical analyses, show model 4 to be fit, reliable, and highly predictive.
Plot of predicted activity against experimental activity (pEC50)
From Tables 2 and 4, the R2 values of 0.798318 (internal) and 0.624008 (external) indicate a strong relationship between the experimental and predicted activities. Additionally, the inhibition activities of the compounds increase with an increase in any of the descriptors in the best-chosen model.
Interpretation of descriptors
The 2D molecular descriptors AMR and SCH-7, defined as "molar refractivity" and "simple chain, order 7", are the first and second highest contributors to the selected model, with positive mean effects of 0.52115 and 0.4413. Thus, increasing these descriptors will significantly enhance the antifungal activity of a compound. The 3D descriptors WG.unity and Wnu2.eneg, with mean effects of 0.01001 and 0.02754, have a low effect on the model; therefore, increasing them will not have much significance for the activity of a compound. They are defined as "non-directional WHIM, weighted by unit weights" and "directional WHIM, weighted by Mulliken atomic electronegativities".
Model 4 was confirmed as the optimal model by evaluating it on the descriptors of the test-set compounds of the dataset.
The experimental activity, predicted activity, and residual values of the compounds are given in Table 6. The residual value is defined as the difference between the experimental and predicted activities. The low residual values indicate the high predictive power of the model.
Table 6 Experimental activity, predictive activity, and residual values of the dataset compounds
Statistical analysis of descriptors
Pearson's correlation (Table 7) was computed between the descriptors of the chosen model in order to evaluate the relationship between each pair of descriptors. The result showed no intercorrelation among the descriptors, with correlation coefficients of less than 0.5, which signifies that the descriptors used in the model are good enough. The VIF values are within the range of 1 to 5, which indicates that the descriptors and the model are suitable and acceptable.
Table 7 Pearson's correlation
Table 8 shows the standard regression coefficients "bj", the values of the mean effect (MF), and the confidence intervals (p values). These give vital information on the effect and contribution of the descriptors toward the built model. The individual capability and influence of the selected descriptors on the activity of the compounds depend on their values, signs, and mean effects. The p values of the four descriptors (at a confidence limit of 95%) that make up the model are all less than 0.05; this implies that there is a significant relationship (contrary to the null hypothesis) between the descriptors and the inhibitory concentration of the compounds.
Table 8 The standard regression coefficients "bj", the values of mean effect (MF), and confidence interval (p values)
Figure 4, which presents the graph of observed activity versus standardized residual, shows a random dispersion around the baseline where the standardized residual is zero. Therefore, no systematic error occurred in the built model.
Plot of standardized residual against experimental activity (pEC50)
The graph of standardized residuals versus leverages (for all the training-set and test-set compounds), termed William's plot, is shown in Fig. 5. The domain of applicability is established within a box at the ±3.0 limit for the residuals and a leverage threshold h* (h* = 0.68). William's plot serves to identify the outliers as well as the influential compounds in the model. Our results revealed that two compounds of the test set (with pEC50 of 1.84011 and 2.05115) were outside the applicability domain, which signifies that these compounds may be structurally different from the other compounds in the dataset; they fell beyond the warning leverage h*, which was calculated as 0.68.
William's plot
The docking study
Molecular docking was run between the protein of the P. infestans effector target site (PDB ID: 2NAR; >95% purity) and the ligands to examine the mode of interaction of the ligands with the macromolecular target site of the protein. The interaction of all 32 compounds with the receptor active site was evaluated, and the receptor–ligand interactions with lower energy, i.e., those with better docking scores, were recorded in Table 9. The table lists the ligands with their binding affinities, H-bonds, H-bond distances, and hydrophobic and electrostatic interactions. The binding affinities for all the compounds lie in the range of − 8.2 to − 9.3 kcal/mol. Compound 13 possessed the highest binding score, − 9.3 kcal/mol, and showed an interaction mode with H-bonds (GLU88 with an H-bond distance of 2.78089 Å and GLN67 with an H-bond distance of 2.91512 Å) and hydrophobic interactions with the TYR87 (4.7572 Å), TYR71, LEU52, TYR87 (4.88051 Å), and ALA69 residues.
Table 9 The binding energy, H-bonds, H-bond distances, hydrophobic and electrostatic interactions of receptor, and the ligands with the highest docking scores
Figure 6 shows a receptor–ligand interaction, while Fig. 7 is the 2D structure showing the H-bond interactions between the receptor and compound 13, which has the best binding affinity and showed a better interaction with the macromolecular target site of the residue than the other compounds.
Receptor–ligand interaction
2D structure showing H-bond interactions between receptor and compound 13
This research involved QSAR and molecular docking studies on 32 compounds of 2,5-disubstituted-1,3,4-thiadiazole derivatives against the P. infestans effector site. After using DFT to optimize the compounds, GFA was used to generate the model. Among the four generated models, the fourth was found to be optimal, having appreciable statistical parameters with R2 = 0.798318, R2adj = 0.750864, cross-validated R2 (Q2cv) = 0.662654, and external validation R2pred = 0.624008. The descriptors AMR and SCH-7 were the first and second highest contributors to the selected model, and thus their increase will increase the activity of a compound against P. infestans, while WG.unity and Wnu2.eneg have a low effect on the model, so their increase will not have much significance for the activity.
According to the docking scores, almost all the ligands (compounds) showed high binding affinity/strong inhibition activity against the P. infestans effector site. However, ligands 11, 13, 14, 15, 17, 24, 26, and 30 showed higher binding affinities, ranging from − 8.9 to − 9.3 kcal/mol, with ligand 13 having the highest binding energy of − 9.3 kcal/mol. Compound 13 was able to dock strongly at the binding pocket of the P. infestans effector site (2NAR), producing H-bond as well as hydrophobic interactions with the target site.
The generated QSAR model provides a worthy basis for ligand-based design, whereas the molecular docking analysis suggests an approach toward the structure-based design of novel and more potent compounds against P. infestans.
B3LYP:
Becke's three-parameter Lee–Yang–Parr hybrid functional
DFT:
Density functional theory
GFA:
Genetic function approximation
PDB:
Protein data bank
P. infestans :
Phytophthora infestans
QSAR:
Quantitative structure–activity relationship
Adeniji SE, Uba S, Uzairu A (2018) QSAR modeling and molecular docking analysis of some active compounds against Mycobacterium tuberculosis receptor (Mtb CYP121). J Pathog 2018(1018694):24
Almajan GL, Barbuceanu SF, Saramet I, Draghici C (2010) New 6-amino-[1, 2, 4] triazolo [3, 4-b] [1, 3, 4] thiadiazines and [1, 2, 4] triazolo [3, 4-b] [1, 3, 4] thiadiazin-6-ones: synthesis, characterization and antibacterial activity evaluation. Eur J Med Chem 45(7):3191–3195
Arthur DE (2017) Toxicity modelling of some active compounds against k562 cancer cell line using genetic algorithm-multiple linear regressions. J Turkish Chem Soc, Section A: Chem 4(1):355–374
Benarous N, Cherouana A, Aubert E, Durand P, Dahaoui S (2016) Synthesis, characterization, crystal structure and DFT study of two new polymorphs of a Schiff base (E)-2-((2, 6-dichlorobenzylidene) amino) benzonitrile. J Mol Struct 1105:186–193
CABI International (2018) Phytophthora infestans (Phytophthora blight). Retrieved from https://www.cabi.org/isc/datasheet/40970
Cui ZN, Shi YX, Zhang L, Ling Y, Li BJ, Nishida Y, Yang XL (2012) Synthesis and fungicidal activity of novel 2, 5-disubstituted-1, 3, 4-oxadiazole derivatives. J Agric Food Chem 60(47):11649–11656
Cui Z, Li X, Tian F, Yan X (2014) Synthesis and bioactivity of 5-substituted-2-furoyl diacylhydazide derivatives with aliphatic chain. Int J Mol Sci 15(5):8941–8958
Cui ZN, Li YS, Hu DK, Tian H, Jiang JZ, Wang Y, Yan XJ (2016) Synthesis and fungicidal activity of novel 2, 5-disubstituted-1, 3, 4-thiadiazole derivatives containing 5-phenyl-2-furan. Sci Rep 6:20204
Edache EI, Hambali HU, Arthur DE, Oluwaseye A, Chinweuba OC (2016) In-silico discovery and simulated selection of multi-target Anti-HIV-1 inhibitors. Int Res J Pure Appl Chem 11(1):1–15
Fry W (2008) Phytophthora infestans: the plant (and R gene) destroyer. Mol Plant Pathol 9(3):385–402
Fry WE, Grünwald NJ (2010) Introduction to oomycetes. The Plant Health Instructor. https://doi.org/10.1094/PHI-I-2010-1207-01.
Gramatica P, Cassani S, Roy PP, Kovarich S, Yap CW, Papa E (2012) QSAR modeling is not "push a button and find a correlation": a case study of toxicity of (benzo-) triazoles on algae. Mol Inform 31(11-12):817–835
Griffiths RG, Dancer J, O'Neill E, Harwood JL (2003) A mandelamide pesticide alters lipid metabolism in Phytophthora infestans. New Phytol 158(2):345–353
Haas BJ, Kamoun S, Zody MC, Jiang RH, Handsaker RE, Cano LM et al (2009) Genome sequence and analysis of the Irish potato famine pathogen Phytophthora infestans. Nature 461(7262):393
Henfling JW (1987) Late blight of potato: Phytophthora infestans, Technical Information Bulletin 4. International Potato Center, Lima
Ibezim EC, Duchowicz PR, Ibezim NE, Mullen LMA, Onyishi IV, Brown SA, Castro EA (2009) Computer-aided linear modeling employing QSAR for drug discovery. Sci Res Essays 4(13):1559–1564
Ibrahim MT, Uzairu A, Shallangwa GA, Ibrahim A (2018) Computational studies of some biscoumarin and biscoumarin thiourea derivatives as α-glucosidase inhibitors. J Eng Exact Sci 4(2):0276–0285
Jatav V, Mishra P, Kashaw S, Stables JP (2008) CNS depressant and anticonvulsant activities of some novel 3-[5-substituted 1,3,4-thiadiazole-2-yl]-2-styryl quinazoline-4 (3H)-ones. Eur J Med Chem 43(9):1945–1954
Kadi AA, Al-Abdullah ES, Shehata IA, Habib EE, Ibrahim TM, El-Emam AA (2010) Synthesis, antimicrobial and anti-inflammatory activities of novel 5-(1-adamantyl)-1,3,4-thiadiazole derivatives. Eur J Med Chem 45(11):5006–5011
Karthikeyan C, Moorthy NHN, Trivedi P (2009) QSAR study of substituted 2-pyridinyl guanidines as selective urokinase-type plasminogen activator (uPA) inhibitors. J Enzyme Inhib Med Chem 24(1):6–13
Mustapha A, Shallangwa G, Ibrahim MT, Bello AU, Ebuka DA, Uzairu A, Mamza P (2018) QSAR studies on some C14-urea tetrandrine compounds as potent anti-cancer against leukemia cell line (K562). J Turkish Chem Soc, Section A: Chem 5(3):1387–1398
Nowicki M, Lichocka M, Nowakowska M, Kłosińska U, Kozik EU (2012) A simple dual stain for detailed investigations of plant-fungal pathogen interactions. Vegetable Crops Res Bull 77:61–74
Oruç EE, Rollas S, Kandemirli F, Shvets N, Dimoglo AS (2004) 1,3,4-thiadiazole derivatives. Synthesis, structure elucidation, and structure−antituberculosis activity relationship investigation. J Med Chem 47(27):6760–6767
Parvatham K, Veerakumari L, Shoba G (2015) Molecular docking studies of acetate-succinate CoA-transferase of Ascaris lumbricoides with a few phytochemicals and anthelmintics. J Comput Methods Mol Design 5(4):1–10
Plech T, Wujec M, Kosikowska U, Malm A, Kaproń B (2012) Studies on the synthesis and antibacterial activity of 3,6-disubstituted 1,2,4-triazolo [3, 4-b]1,3,4-thiadiazoles. Eur J Med Chem 47:580–584
Pourbasheer E, Aalizadeh R, Ganjali MR, Norouzi P (2014) QSAR study of IKKβ inhibitors by the genetic algorithm: multiple linear regressions. Med Chem Res 23(1):57–66
Roy K, Kar S, Das RN (2015) A primer on QSAR/QSPR modeling: fundamental concepts. Springer
Taher AT, Georgey HH, El-Subbagh HI (2012) Novel 1,3,4-heterodiazole analogues: synthesis and in-vitro antitumor activity. Eur J Med Chem 47:445–451
Trott O, Olson AJ (2010) AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading. J Comput Chem 31(2):455–461
Vio L, Mamolo MG, Laneve A (1989) Synthesis and antihypertensive activity of some 1,3,4-thiadiazole derivatives. Farmaco (Societa chimica italiana: 1989) 44(2):165–172
The authors acknowledge the technical effort of the Department of Chemistry, Ahmadu Bello University, Zaria, Nigeria, and Muhammad Tukur Ibrahim for his useful advice toward the successful completion of the research.
Funding
The authors declare that no funding has been received.
Department of Chemistry, Ahmadu Bello University, Zaria, Nigeria
Yusuf Isyaku, Adamu Uzairu & Sani Uba
Yusuf Isyaku
Adamu Uzairu
Sani Uba
YI contributed throughout the research work. AU gave directives and technical advices. SU partook in the technical activities. All authors have read and approved the manuscript.
Correspondence to Yusuf Isyaku.
Isyaku, Y., Uzairu, A. & Uba, S. QSAR and molecular docking studies of novel 2,5-disubstituted-1,3,4-thiadiazole derivatives containing 5-phenyl-2-furan as fungicides against Phytophthora infestans. Beni-Suef Univ J Basic Appl Sci 9, 11 (2020). https://doi.org/10.1186/s43088-020-0037-5
QSAR
P. infestans
Mendelian randomization studies of brain MRI yield insights into the pathogenesis of neuropsychiatric disorders
Volume 22 Supplement 3
19th International Conference on Bioinformatics 2020 (InCoB2020): genomics
Weichen Song1 na1,
Wei Qian1 na1,
Weidi Wang1,2,
Shunying Yu1,2 &
Guan Ning Lin ORCID: orcid.org/0000-0001-9496-01491,2
Observational studies have identified various associations between neuroimaging alterations and neuropsychiatric disorders. However, whether such associations truly reflect causal relations remains unknown.
Here, we leveraged genome-wide association studies (GWAS) summary statistics for (1) 11 psychiatric disorders (sample sizes varied from n = 9,725 to 1,331,010); (2) 110 diffusion tensor imaging (DTI) measurements (sample size n = 17,706); (3) 101 region-of-interest (ROI) volumes, and investigated the causal relationship between brain structures and neuropsychiatric disorders by two-sample Mendelian randomization. Among all DTI-Disorder combinations, we observed a significant causal association between the superior longitudinal fasciculus (SLF) and the risk of Anorexia nervosa (AN) (Odds Ratio [OR] = 0.62, 95 % confidence interval: 0.50 ~ 0.76, P = 6.4 × 10− 6). Similar significant associations were also observed between the body of the corpus callosum (fractional anisotropy) and Alzheimer's disease (OR = 1.07, 95 % CI: 1.03 ~ 1.11, P = 4.1 × 10− 5). By combining all observations, we found that the overall p-value for DTI − Disorder associations was significantly elevated compared to the null distribution (Kolmogorov-Smirnov P = 0.009, inflation factor λ = 1.37), especially for DTI − Bipolar disorder (BP) (λ = 2.64) and DTI − AN (λ = 1.82). In contrast, for ROI-Disorder combinations, we only found a significant association between the brain region of pars triangularis and Schizophrenia (OR = 0.48, 95 % CI: 0.34 ~ 0.69, P = 5.9 × 10− 5) and no overall p-value elevation for ROI-Disorder analysis compared to the null expectation.
As a whole, we show that SLF degeneration may be a risk factor for AN, while DTI variations could be causally related to some neuropsychiatric disorders, such as BP and AN. Also, the white matter structure might have a larger impact on neuropsychiatric disorders than subregion volumes.
Neuroimaging is the most widely used procedure for studying brain disorders [1]. Many neuroimaging studies in the past quarter-century have revealed brain abnormalities in neuropsychiatric disorders [1, 2], which have served as the basis for biomarker discovery, clinical guidance, and investigations into the mechanisms of neuropsychiatric disorders [2, 3]. However, it is unclear whether such associations reflect disease causality [4]. One concern is that spurious correlations [5] could emerge from indirect correlations with confounders such as medication or circadian and dietary changes, or from false-positive events. Another issue is the direction of causality: the neurotoxicity hypothesis [6] suggests that psychiatric illnesses have toxic effects on the central nervous system, leading to structural alterations following disease onset [7]. This theory gained support from several observations in which neuroimaging abnormalities exhibited dynamic progression over the course of a neuropsychiatric disorder [6, 7]. Under such theories, these associations should be utilized as clinical biomarkers rather than mechanism identifiers.
Several case-control studies with large sample sizes [8, 9] have found significant correlations between neuroimaging alterations and neuropsychiatric disorders, yet they remain unable to distinguish causality from correlation. Longitudinal analyses have partially overcome the limitations of cross-sectional observational studies [10] for investigating disease causality, for example by detecting neuroimaging alterations in participants with so-called high-risk status and following their progression [11, 12]. However, these studies are still limited by our current definition of high-risk cohorts. For example, the onset of neuropsychiatric disorders may occur far earlier than a clinically recognizable high-risk status, such that the associated abnormalities may include changes that occurred after the primary pathology.
Beyond longitudinal analyses of high-risk status, an alternative method for addressing the challenge of causality is Mendelian randomization (MR) [13]. MR has been used with great success to derive the relationships between peripheral inflammatory markers and schizophrenia [14] and between physical activity and depression [15]. By selecting genetic loci that are strongly associated with an exposure, so-called instruments, MR separates subjects into high and low lifetime exposure groups according to their genotypes at the instruments [16], then compares the prevalence of outcomes between the high and low exposure groups. Such grouping is considered unbiased because genotypes are randomly determined during meiosis. Furthermore, two-sample MR estimates the instrument-exposure and instrument-outcome relationships in different cohorts to infer the exposure-outcome relationship, without the need for individual-level information [16]. The explosive growth of genome-wide association studies (GWAS) offers opportunities for applying MR to resolve debates about causality in different fields of clinical medicine. Recently, two GWAS from UK Biobank (UKBB) [17, 18] revealed the genetic basis of brain structural measurements, providing an opportunity to address the causality of clinical neuroimaging findings. In the present study, using summary-statistics GWAS data for magnetic resonance imaging (MRI) (N = 17,706 and 19,629) and twelve neuropsychiatric disorders (N = 9,725 to 1,331,010), we implemented a two-sample MR approach to detect causal relationships between white matter (WM) structures (diffusion tensor imaging [DTI]), brain subregion volumes (regions of interest [ROI]), and neuropsychiatric disorders. An overview of the study design is illustrated in Fig. 1.
Fig. 1 Flowchart of the study. Integrative analysis: p value distribution of MR results
Causal associations between DTI and neuropsychiatric disorders
After data harmonization, 1253/1320 (110 × 12) DTI-Disorder pairs and 973/1212 (101 × 12) ROI-Disorder pairs had at least one strong instrument (SNP with P < 5 × 10−8 in the DTI GWAS [17], Additional file 1) and were analyzed by MR (Additional files 2 and 3). We started by evaluating the DTI-Disorder associations. In the inverse variance-weighted (IVW) analysis (outer layer of Fig. 2a), which required at least two instruments, one DTI-Disorder pair achieved study-wide significance (SW, P < 0.05/1320): superior longitudinal fasciculus axial diffusivity (SLF.AxD) and the risk of anorexia nervosa (AN) (Fig. 2b; Table 1, odds ratio [OR] = 0.62, 95% confidence interval [CI], 0.50 to 0.76, P = 6.4 × 10−6). Another DTI-Disorder pair, body of corpus callosum fractional anisotropy (BCC.FA) and Alzheimer disease (AD), was close to reaching SW significance (Fig. 2c and Table 1, OR = 1.07, 95% CI, 1.03 to 1.11, P = 4.1 × 10−5). Both associations had relatively consistent results across instruments (Fig. 2b, c), in accordance with the fact that no outlier was detected (Additional file 2). Five other DTI-Disorder pairs also reached single-disease significance (P < 0.05/110) (Table 1). No significant result was found for the Wald ratio (second layer of Fig. 2a) applied to DTI-Disorder pairs with only one instrument.
MR results for DTI-Disease associations. a: Each radius represents one DTI measure (110 in total: 22 white matter tracts × 5 DTI parameters). From outer to inner layer: -log10(p) for MR-IVW; -log10(p) for MR-Wald ratio; -log10(p) for intercepts (int.) of MR-Egger regression; heritability of each DTI measure. The dotted grey line indicates the nominal p threshold (0.05); the solid grey line indicates the study-wide significance p threshold (0.05/1253). CST: corticospinal tract. b & c: Forest plots showing the MR effect of superior longitudinal fasciculus (axial diffusivity) on anorexia nervosa (SLF.AxD-AN) and of body of corpus callosum (fractional anisotropy) on Alzheimer disease (BCC.FA-AD). Each line shows the single-SNP MR effect (95% confidence interval) estimated by the Wald ratio, and the last line shows the meta-analysis result calculated by IVW. The vertical dashed line indicates the Egger estimate of the MR effect
Table 1 All DTI-Disorder pairs reaching the single-disease significance threshold (p < 0.05 after correction)
A basic assumption of MR is that genetic instruments should impact the outcome only via the exposure and not through any other pathway (horizontal pleiotropy) [16]. The existence of heterogeneity, which is introduced by outlier instruments, can also bias the MR estimation [19]. By applying various sensitivity tests, we confirmed that our results were not impacted by horizontal pleiotropy (Egger intercept P > 0.05; the third layer of Fig. 2a and Additional file 2), heterogeneity (modified Cochran's Q test P > 0.05; Table 1), or bidirectional effects (reverse MR P > 0.05; Additional file 2). They also showed consistent trends across different MR methods that are robust against pleiotropy and measurement error (see Methods for details) (except for Fornix.FA-bipolar disorder [BP]; Table 1). They also showed little directional pleiotropy, as indicated by funnel plots (Additional file 4). Neither MR-PRESSO nor leave-one-out tests found any impact of outliers on these DTI-Disorder pairs (Additional files 2 and 4). Taken together, these results confirmed the causal relation of SLF.AxD-AN and suggested potential relations for the other five DTI-Disorder pairs.
Overall contribution of DTI on neuropsychiatric disorders
Beyond the separate DTI-Disorder pairs that reached the significance threshold, we were also interested in whether DTI as a whole made a causal contribution to the diseases. To answer this question, we pooled the IVW P values for all DTI-Disorder pairs and compared them to the null uniform distribution (Fig. 3a and Additional file 2). The distribution of IVW P values was significantly inflated (KS P = 0.009, λ = 1.37); this result persisted after removing heterogeneous DTI-Disorder pairs (those with Cochran's P < 0.05) (KS P = 0.002, λ = 1.48) or outlier instruments (SNPs with MR-PRESSO P < 0.05) (KS P = 8.0 × 10−5, λ = 1.49). When we analyzed each disease separately ("Original" in Fig. 3b), BP (Fig. 3c) and AN showed the most significant inflation (Table 2). These results were also relatively stable against heterogeneity and outlier removal (Fig. 3b and Additional file 4). The permutation test confirmed that the results for BP (permutation P value [pp] for the KS test: 0.012; pp for λ: 0.002) and AN (pp for λ: 0.012) were not due to bias inherent in the data or method. Additionally, heterogeneous DTI-Disorder pairs generally had non-significant MR results and did not contribute to the inflation (Fig. 3d and Additional file 2). Thus, we concluded that DTI polymorphisms made an overall causal contribution to neuropsychiatric disorders, especially BP and AN.
General contribution of DTI polymorphism to neuropsychiatric disorders. a: Quantile-quantile (QQ) plot showing the distribution of all MR p values for the DTI-Disease (DD) associations. b: Disease-specific inflation factor (λ). Solid points show λ with no adjustment and correspond to λ in Table 2. Triangular points show λ after removal of all DTI-Disorder pairs with significant heterogeneity. Cross points show λ of p values calculated after removing all outlier SNPs (detected by MR-PRESSO) for each DTI-Disorder pair. Vertical error bars indicate 95% confidence intervals. c: QQ plot for DTI-BP associations. d: Rank-rank overlaps between MR effect and heterogeneity. The color of each grid corresponds to the proportion of each MR effect rank (each row sums to 1). The number in each grid shows the exact number of DTI-Disorder pairs. NS: non-significant. Nominal: p < 0.05. SD: single-disease significance, p < 0.05/110. SW: study-wide significance, p < 0.05/1253
Table 2 Disease-specific p value distribution
Causal associations between brain volume and neuropsychiatric disorders
Similar to the DTI-Disorder analysis, we also assessed the ROI-Disorder associations. 973/1212 (101 × 12) RD pairs had at least one strong instrument (SNP with P < 5 × 10−8) and were analyzed by MR. In the IVW analysis, we found no SW significant results (outer layer of Fig. 4a and Additional file 3). The Wald ratio revealed a marginal SW result for pars triangularis (PT)-SCZ (OR = 0.48, P = 5.93 × 10−5) (second layer of Fig. 4a), which was driven by a single SNP (rs2279829). The only trait associated with rs2279829 in PhenoScanner [20] was daytime dozing or sleeping (P = 2.02 × 10−6). The SMR-HEIDI test detected a significant MR effect in the same direction (OR = 0.48, P = 7.57 × 10−4) with no evidence of colocalization pleiotropy (HEIDI P = 0.84). We estimated that per 1-SD increment in normalized PT volume, the risk of SCZ decreased by 52% (OR = 0.48). In conclusion, these results suggested a potential causal relationship between PT and SCZ. However, validation of this relationship requires more work on potential pleiotropy.
MR results for ROI-Disease associations. a: Circos plot for ROI-Disease (RD) relations, similar to Fig. 2a. The arrow indicates the result for the pars triangularis-schizophrenia association, which is described in detail in the main text. b: QQ plot for all IVW p values of RD pairs, similar to Fig. 3a. c: Disease-specific inflation factor (λ), similar to Fig. 3b
The overall distribution of IVW p-values did not significantly differ from the null distribution (Fig. 4b), even after removing heterogeneity (KS P = 0.12) and outliers (KS P = 0.07). However, the IVW p-values for SCZ showed significant inflation (λ = 1.90, KS P = 0.001), and this result was significantly impacted by heterogeneity (Fig. 4c). Additionally, 10% (6/58) of the nominally significant RD pairs showed heterogeneity, and 3 contained outlier SNPs (Additional files 3 and 4). In conclusion, there was no evidence that ROI polymorphisms had a universal contribution to neuropsychiatric disorders.
The issue of causality has long beleaguered clinical neuroimaging studies [4]. Confirmation of a causal change can provide insights into disease mechanisms at the circuit and region levels and may reveal useful biomarkers for predicting prognosis. Many current neuroimaging studies cannot directly draw this conclusion [1, 4], which partly limits the translation of their findings to clinical practice. In this study, we conducted hypothesis-free, data-driven MR analyses to assess the causal relationship between neuroimaging polymorphisms and neuropsychiatric disorders in an unbiased manner.
Our results showed that, in general, WM connectivity was more closely associated with the risk of neuropsychiatric disorders than gray matter volume (GMV) (Figs. 3a and 4b). Among neuropsychiatric disorders, BP showed the most significant association with genetically determined connectivity polymorphisms (Fig. 3b, c). These results support the dysconnectivity theory [21] of psychiatry, which posits that major psychiatric disorders such as SCZ and BP share common WM abnormalities in their pathology [22]. According to this hypothesis, anatomic and neurodevelopmental changes that arise from neurotoxicity [6] are a consequence rather than a cause of the illness. Indirect evidence from a functional study of neuromodulation and myelination [21] and a case-control study of the high-risk state [12] supports the dysconnectivity theory. Research interest has now shifted from the region of origin to a connectome concept [22], in which connections between brain regions, rather than the regions themselves, cause BP and other mental illnesses. Our finding that DTI is more closely associated with the onset of neuropsychiatric disorders than ROI provides supportive evidence for this paradigm shift.
Our study's top MR result was a novel risk factor for AN, namely decreased SLF.AxD. A few studies have reported decreased SLF integrity in AN patients [23, 24], but the results were inconsistent, with marginal effect sizes. As neuroimaging findings in AN patients are influenced by dietary and metabolic alterations [25], a causal change may not manifest as a visible signal. Nonetheless, the SLF may contribute to body image distortion in the pathology of AN [24], probably through its connection to areas responsible for body image perception (prefrontal and parietal networks) and self-perception (inferior parietal lobe) [26]. Although abnormalities in both GMV [26] and WM connectivity [23, 24] have been observed, our results suggest that the latter is a primary cause, whereas the former is a consequence of neuromodulatory mechanisms such as activity-dependent pruning [21].
The roles of BCC in AD and PT in SCZ, two marginally significant results from our MR analysis, have received more attention in the literature than SLF in AN. However, both MR results should be interpreted with caution. BCC atrophy is widely observed in AD patients even at an early stage and reflects Wallerian degeneration and myelin breakdown [27]. However, our MR analysis revealed the reverse association: the FA of BCC was positively associated with AD risk (β = 0.07). One possible explanation for this discrepancy is that an enlarged BCC in early life is a risk factor for developing AD at an older age, with BCC atrophy occurring after disease onset. Confirmation of such a complex theory requires more robust evidence from large-scale longitudinal studies. As for the PT-SCZ association, although it was validated by several additional analyses such as SMR-HEIDI and reverse MR, a single SNP-driven MR result is by nature suspect due to unexplored pleiotropy [16, 28]. Because volume reduction of PT has been demonstrated in high-risk psychosis and first-episode SCZ patients [29], we suggest that the inferred causality between PT and SCZ is plausible.
There were some limitations to this study. First, classic MR methods depend largely on high heritability and strong exposure instruments [16, 28]. However, both heritability and the number of instruments [17, 18] vary across the tested neuroimaging parameters, such that the power of MR is inconsistent across the DTI-Disorder and RD pairs. In fact, 67 DTI-Disorder and 239 RD pairs were discarded at the beginning of our analysis due to the absence of instruments. Even if there were causal links among them, they would not have been detected in our study. Thus, negative results for DD/RD pairs with limited instruments are not as convincing as positive results for those with adequate instruments. For the positive results, it should also be noted that only the genetically regulated proportion of the polymorphisms is associated with the disorders. Second, the original GWAS sample sizes may have impacted the MR results, since the estimation accuracy (i.e., the standard error of the effect size) for the instrument-outcome relationship directly determines the confidence interval of the MR effect estimate [16, 28]. Since the GWAS for OCD and TD recruited fewer than 10,000 cases, their MR results were underpowered. In fact, several DD/RD pairs for OCD and TD had large MR effects, but their wide CI ranges resulted in non-significant P values. Future GWAS with larger sample sizes, both for neuroimaging polymorphisms and neuropsychiatric disorders, will provide a better chance to improve our understanding in this field.
In conclusion, our analysis results demonstrate that, in general, WM structures make a more significant contribution to the etiology of neuropsychiatric disorders—especially BP and AN—than brain subregion volumes. SLF.AxD was causally related to AN; marginally significant relationships were also found between BCC.FA and AD and between PT and SCZ.
We obtained publicly available GWAS summary statistics for DTI, ROI, and neuropsychiatric disorders without collecting any individual information. Ethics approval was obtained in each of the original studies; therefore, no further ethics approval was needed for the current study.
The genetic instruments for the DTI measurements have been previously described [17]. Briefly, the ENIGMA-DTI pipeline [30] was used to analyze UKBB diffusion MRI data for 17,706 European participants and generate 110 DTI parameters, namely the fractional anisotropy (FA), axial diffusivity (AxD), mean diffusivity, mode of anisotropy, and radial diffusivity of 21 WM tracts as well as their mean values. The genetic instruments for ROI volumes were also obtained from a UKBB GWAS [18], which included 19,629 European participants and used the standard OASIS-30 Atropos template for registration and the Mindboggle-101 atlas for labeling [31].
We collected GWAS summary statistics from the following neuropsychiatric studies on European cohorts: (1) Alzheimer disease (AD) [32]; (2) attention-deficit/hyperactivity disorder (ADHD) [33]; (3) anorexia nervosa (AN) [34]; (4) anxiety disorders (categorical phenotype) [35]; (5) autism spectrum disorder (ASD) [36]; (6) bipolar disorder (BP) [37]; (7) insomnia [38]; (8) major depressive disorder (MD) [39]; (9) obsessive-compulsive disorder (OCD) [40]; (10) posttraumatic stress disorder (PTSD) [41]; (11) schizophrenia (SCZ) [42]; (12) Tourette disorder (TD) [43] (Fig. 1). For GWAS from the Psychiatric Genomics Consortium (PGC), there were few samples from UKBB. For the AD GWAS, UKBB participants were not included in the case-control analysis. For the other GWAS, we could not quantify the extent of sample overlap due to the lack of individual information. Since all GWAS used in the current study were conducted in European-ancestry cohorts, we did not further adjust for the impact of population stratification.
For each DTI and ROI measurement, we retained single nucleotide polymorphisms (SNPs) with P < 5 × 10−8 as strong instruments for MR; measurements without a strong instrument were discarded. We removed SNPs in linkage disequilibrium (LD; r² ≥ 0.001) for each measurement using reference LD data from the 1000 Genomes Project [44]. Data harmonization was applied independently for each DTI-disease (DD) and ROI-disease (RD) pair with the TwoSampleMR R package [45]. Since many of the GWAS summary statistics analyzed in this study did not provide allele frequency information, we did not exclude SNPs on the basis of ambiguous strand errors. For all binary phenotypes, we log-transformed the odds ratio to generate the β value.
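As an illustrative sketch of this selection and transformation step (not the authors' pipeline, which relied on the TwoSampleMR R package), the following Python fragment assumes hypothetical column names (pval, or_, beta) and omits LD clumping, which requires a reference panel:

```python
import numpy as np
import pandas as pd

P_THRESHOLD = 5e-8  # genome-wide significance cutoff used for instruments

def select_instruments(gwas: pd.DataFrame) -> pd.DataFrame:
    """Keep genome-wide significant SNPs as candidate MR instruments.

    `gwas` is assumed to carry hypothetical columns: snp, pval, and either
    beta or or_ (odds ratio). LD clumping (removing SNPs with r^2 >= 0.001
    against a reference panel) is a separate step not shown here.
    """
    hits = gwas.loc[gwas["pval"] < P_THRESHOLD].copy()
    if "beta" not in hits.columns:
        # For binary phenotypes, log-transform the odds ratio to obtain beta.
        hits["beta"] = np.log(hits["or_"])
    return hits.sort_values("pval").reset_index(drop=True)
```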
Power calculation
We calculated the variance in phenotype explained by each instrument by
$$ R^2=\frac{2 \cdot EAF \cdot (1-EAF) \cdot \beta^2}{2 \cdot EAF \cdot (1-EAF) \cdot \beta^2 + 2 \cdot EAF \cdot (1-EAF) \cdot N \cdot se(\beta)^2} $$
where EAF is the effect allele frequency, β the effect size, N the sample size, and se(β) the standard error of the effect size. The F statistic was then calculated as
$$F=\frac{{R}^{2}*(N-2)}{1-{R}^{2}}$$
R2 and F were used to evaluate the power of each instrument. For each RD and DTI-Disorder pair, we calculated the overall MR power using the mRnd tool [46], assuming OR = 1.3 and a type I error of 0.05. The assumed OR was based on the actual MR effects passing the significance threshold; we adopted this assumption because little observational estimation of the OR is currently available. MR power and the number of valid instruments for each pair are recorded in Additional files 2 and 3. Details of all instruments are given in Additional file 1.
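For concreteness, the two formulas above translate directly into code; this is a minimal sketch rather than the mRnd tool itself, and the numbers in the example call are made up:

```python
def instrument_r2(eaf: float, beta: float, se: float, n: int) -> float:
    """Variance in the exposure explained by one instrument (formula above)."""
    num = 2 * eaf * (1 - eaf) * beta ** 2
    return num / (num + 2 * eaf * (1 - eaf) * n * se ** 2)

def f_statistic(r2: float, n: int) -> float:
    """Instrument-strength F statistic; F > 10 is a common rule of thumb."""
    return r2 * (n - 2) / (1 - r2)

# Example with made-up values: EAF = 0.3, beta = 0.05, se = 0.008, n = 17706
r2 = instrument_r2(0.3, 0.05, 0.008, 17706)
print(r2, f_statistic(r2, 17706))
```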
Calculation of MR effects
For DD/RD pairs with at least two instruments, we performed a meta-analysis of the per-instrument MR effects using the inverse variance-weighted (IVW) method. These results were considered preliminary and were used for downstream analyses. For the top IVW findings, we additionally applied the weighted mode [47], weighted median [13], and MR-Egger regression [48] approaches, which are relatively robust against horizontal pleiotropy [15], to further confirm the validity of the MR effect. For DD/RD pairs with only one instrument, estimates based on the Wald ratio were considered preliminary results. For the top Wald ratio finding, we used summary data-based MR (SMR) [49] to confirm the existence of the MR effect and heterogeneity in dependent instruments (HEIDI) [49] to rule out the possibility that the MR effect was driven by colocalization of the instrument with the effective locus.
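A minimal sketch of the two estimators is given below, using the common first-order approximation that ignores uncertainty in the SNP-exposure estimates; the study itself used the TwoSampleMR implementations, so this is illustrative only:

```python
import numpy as np

def wald_ratio(beta_out: float, beta_exp: float) -> float:
    """Single-instrument MR effect: SNP-outcome over SNP-exposure effect."""
    return beta_out / beta_exp

def ivw(beta_exp, beta_out, se_out):
    """Inverse variance-weighted meta-analysis of per-SNP Wald ratios."""
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratios = beta_out / beta_exp
    se_ratios = se_out / np.abs(beta_exp)  # first-order approximation
    w = 1.0 / se_ratios ** 2               # inverse-variance weights
    beta_ivw = np.sum(w * ratios) / np.sum(w)
    se_ivw = np.sqrt(1.0 / np.sum(w))
    return beta_ivw, se_ivw
```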
Since both IVW and Wald ratio results were taken into account, p-values were Bonferroni-adjusted by the total number of DTI-Disorder pairs (1320) or ROI-Disorder pairs (1212).
The intercept of the Egger regression was used as an indicator of potential horizontal pleiotropy, while modified Cochran's Q for IVW and Rucker's Q for Egger regression [50] were used as an indicator of heterogeneity. We used MR pleiotropy residual sum and outlier (MR-PRESSO) [19] with the number of permutations = 2500 to detect potential outlier SNPs for each DD/RD pair and generate an overall p-value for heterogeneity; those with outlier(s) were reanalyzed by IVW after removing the outlier(s). We also applied leave-one-out tests for all top findings to further evaluate the effects of unknown outliers.
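As one concrete piece of these checks, the classical form of Cochran's Q for an IVW analysis can be sketched as follows (the study used a modified Q and Rucker's Q, as cited, so this is a simplified stand-in):

```python
import numpy as np
from scipy import stats

def cochran_q(ratios, se_ratios):
    """Classical Cochran's Q heterogeneity test across per-SNP MR estimates."""
    ratios, se_ratios = np.asarray(ratios), np.asarray(se_ratios)
    w = 1.0 / se_ratios ** 2
    beta_ivw = np.sum(w * ratios) / np.sum(w)
    q = np.sum(w * (ratios - beta_ivw) ** 2)
    df = len(ratios) - 1
    return q, stats.chi2.sf(q, df)  # Q statistic and its p value
```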
For all MR results, we tested the reverse MR effect (i.e., neuropsychiatric disorders as exposure and DTI/ROI as outcome) by selecting SNPs with P < 5 × 10−8 for each neuropsychiatric disorder as instruments. The absence of a reverse MR effect (IVW P > 0.05) was considered as evidence for the validity of directionality.
Analysis of the general causal contribution
To assess the general contribution of DTI (ROI) polymorphisms to neuropsychiatric disorder, we pooled the IVW P values for all DTI-Disorder (RD) pairs and compared them to the null uniform distribution using quantile-quantile plots. A positive bias (inflation) from uniform distribution was considered as evidence for a general contribution. The significance of inflation was evaluated with the Kolmogorov–Smirnov (KS) test, while the extent of inflation was assessed with the inflation factor λ, which was calculated by chi-square regression using the GenABEL R package [51]. Since the inflation factor might be overestimated due to the small number of P values, we shuffled SNP labels for BP and AN 1,000 times to carry out a permutation test. Permutation P < 0.05 was considered evidence for significant inflation. These tests were also separately applied to each disease and repeated after removing heterogeneous results (those with Cochran's P < 0.05) or outlier SNPs (those with MR-PRESSO P < 0.05).
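A minimal sketch of these two checks, assuming a vector of IVW p values, is shown below; the median-based λ here is a simple stand-in for the regression-based estimate from the GenABEL package:

```python
import numpy as np
from scipy import stats

def ks_uniform(pvals):
    """Kolmogorov-Smirnov test of p values against the null uniform(0, 1)."""
    return stats.kstest(np.asarray(pvals), "uniform")

def inflation_lambda(pvals):
    """Median-based inflation factor: observed vs expected chi-square median."""
    chi2_obs = stats.chi2.isf(np.asarray(pvals), df=1)   # p -> 1-df chi-square
    return np.median(chi2_obs) / stats.chi2.ppf(0.5, df=1)  # expected ~0.455
```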
Disease GWAS summary data were downloaded from https://www.med.unc.edu/pgc/download-results/. GWAS summary of neuroimaging data were downloaded from https://github.com/BIG-S2/GWAS.
2SMR: 2-sample Mendelian randomization
DTI: Diffusion tensor imaging
ROI: Region-of-interest
Etkin A. A reckoning and research agenda for neuroimaging in psychiatry. Am J Psychiatry. 2019;176:507–11.
Lui S, Zhou XJ, Sweeney JA, Gong Q. Psychoradiology: The frontier of neuroimaging in psychiatry. Radiology. 2016;281:357–72.
Aydin O, Unal Aydin P, Arslan A. Development of Neuroimaging-Based Biomarkers in Psychiatry. Adv Exp Med Biol. 2019;1192:159–95.
Etkin A. Addressing the causality gap in human psychiatric neuroscience. JAMA Psychiatry. 2018;75:3–4.
Haig BD. What Is a Spurious Correlation? Underst Stat. 2003;2:125–32.
Weinberger DR, McClure RK. Neurotoxicity, neuroplasticity, and magnetic resonance imaging morphometry: What is happening in the schizophrenic brain? Arch Gen Psychiatry. 2002;59:553–8.
DeLisi LE. Defining the course of brain structural change and plasticity in schizophrenia. Psychiatry Res. 1999;92:1–9.
Hoogman M, Bralten J, Hibar DP, Mennes M, Zwiers MP, Schweren LSJ, et al. Subcortical brain volume differences in participants with attention deficit hyperactivity disorder in children and adults: a cross-sectional mega-analysis. The Lancet Psychiatry. 2017;4:310–9.
van Erp TGM, Walton E, Hibar DP, Schmaal L, Jiang W, Glahn DC, et al. Cortical Brain Abnormalities in 4474 Individuals With Schizophrenia and 5098 Control Subjects via the Enhancing Neuro Imaging Genetics Through Meta Analysis (ENIGMA) Consortium. Biol Psychiatry. 2018;84:644–54.
Pantelis C, Velakoulis D, McGorry PD, Wood SJ, Suckling J, Phillips LJ, et al. Neuroanatomical abnormalities before and after onset of psychosis: a cross-sectional and longitudinal MRI comparison. Lancet. 2003;361:281–8.
Tang Y, Pasternak O, Kubicki M, Rathi Y, Zhang T, Wang J, et al. Altered cellular white matter but not extracellular free water on diffusion MRI in individuals at clinical high risk for psychosis. Am J Psychiatry. 2019;176:820–8.
Roberts G, Perry A, Lord A, Frankland A, Leung V, Holmes-Preston E, et al. Structural dysconnectivity of key cognitive and emotional hubs in young people at high genetic risk for bipolar disorder. Mol Psychiatry. 2018;23:413–21.
Bowden J, Davey Smith G, Haycock PC, Burgess S. Consistent Estimation in Mendelian Randomization with Some Invalid Instruments Using a Weighted Median Estimator. Genet Epidemiol. 2016;40:304–14.
Hartwig FP, Borges MC, Horta BL, Bowden J, Davey Smith G. Inflammatory Biomarkers and Risk of Schizophrenia. JAMA Psychiatry. 2017;74:1226.
Choi KW, Chen CY, Stein MB, Klimentidis YC, Wang MJ, Koenen KC, et al. Assessment of Bidirectional Relationships between Physical Activity and Depression among Adults: A 2-Sample Mendelian Randomization Study. JAMA Psychiatry. 2019;76:399–408.
Hemani G, Bowden J, Davey Smith G. Evaluating the potential role of pleiotropy in Mendelian randomization studies. Hum Mol Genet. 2018;27:R195–208.
Zhao B, Zhang J, Ibrahim JG, Luo T, Santelli RC, Li Y, et al. Large-scale GWAS reveals genetic architecture of brain white matter microstructure and genetic overlap with cognitive and mental health traits (n = 17,706). Mol Psychiatry. 2019. https://doi.org/10.1038/s41380-019-0569-z.
Zhao B, Luo T, Li T, Li Y, Zhang J, Shan Y, et al. Genome-wide association analysis of 19,629 individuals identifies variants influencing regional brain volumes and refines their genetic co-architecture with cognitive and mental health traits. Nat Genet. 2019;51:1637–44.
Verbanck M, Chen CY, Neale B, Do R. Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases. Nat Genet. 2018;50:693–8.
Staley JR, Blackshaw J, Kamat MA, Ellis S, Surendran P, Sun BB, et al. PhenoScanner: a database of human genotype-phenotype associations. Bioinformatics. 2016;32:3207–9.
Friston K, Brown HR, Siemerkus J, Stephan KE. The dysconnection hypothesis (2016). Schizophr Res. 2016;176:83–94.
Perry A, Roberts G, Mitchell PB, Breakspear M. Connectomics of bipolar disorder: a critical review, and evidence for dynamic instabilities within interoceptive networks. Mol Psychiatry. 2019;24:1296–318.
Phillipou A, Carruthers SP, Di Biase MA, Zalesky A, Abel LA, Castle DJ, et al. White matter microstructure in anorexia nervosa. Hum Brain Mapp. 2018;39:4385–92.
Barona M, Brown M, Clark C, Frangou S, White T, Micali N. White matter alterations in anorexia nervosa: Evidence from a voxel-based meta-analysis. Neurosci Biobehav Rev. 2019;100:285–95.
Treasure J, Zipfel S, Micali N, Wade T, Stice E, Claudino A, et al. Anorexia nervosa. Nat Rev Dis Prim. 2015;1:1–21.
Gaudio S, Quattrocchi CC. Neural basis of a multidimensional model of body image distortion in anorexia nervosa. Neurosci Biobehav Rev. 2012;36:1839–47.
Di Paola M, Spalletta G, Caltagirone C. In vivo structural neuroanatomy of corpus callosum in Alzheimer's disease and mild cognitive impairment using different MRI techniques: A review. J Alzheimer's Dis. 2010;20:67–95.
Burgess S, Bowden J, Fall T, Ingelsson E, Thompson SG. Sensitivity analyses for robust causal inference from mendelian randomization analyses with multiple genetic variants. Epidemiology. 2017;28:30–42.
Iwashiro N, Suga M, Takano Y, Inoue H, Natsubori T, Satomura Y, et al. Localized gray matter volume reductions in the pars triangularis of the inferior frontal gyrus in individuals at clinical high-risk for psychosis and first episode for schizophrenia. Schizophr Res. 2012;137:124–31.
Jahanshad N, Kochunov PV, Sprooten E, Mandl RC, Nichols TE, Almasy L, et al. Multi-site genetic analysis of diffusion images and voxelwise heritability analysis: A pilot project of the ENIGMA-DTI working group. Neuroimage. 2013;81:455–69.
Klein A, Ghosh SS, Bao FS, Giard J, Häme Y, Stavsky E, et al. Mindboggling morphometry of human brains. PLoS Comput Biol. 2017;13:e1005350.
Jansen IE, Savage JE, Watanabe K, Bryois J, Williams DM, Steinberg S, et al. Genome-wide meta-analysis identifies new loci and functional pathways influencing Alzheimer's disease risk. Nat Genet. 2019;51:404–13.
Demontis D, Walters RK, Martin J, Mattheisen M, Als TD, Agerbo E, et al. Discovery of the first genome-wide significant risk loci for attention deficit/hyperactivity disorder. Nat Genet. 2019;51:63–75.
Watson HJ, Yilmaz Z, Thornton LM, Hübel C, Coleman JRI, Gaspar HA, et al. Genome-wide association study identifies eight risk loci and implicates metabo-psychiatric origins for anorexia nervosa. Nat Genet. 2019;51:1207–14.
Otowa T, Hek K, Lee M, Byrne EM, Mirza SS, Nivard MG, et al. Meta-analysis of genome-wide association studies of anxiety disorders. Mol Psychiatry. 2016;21:1391–9.
Grove J, Ripke S, Als TD, Mattheisen M, Walters RK, Won H, et al. Identification of common genetic risk variants for autism spectrum disorder. Nat Genet. 2019;51:431–44.
Stahl EA, Breen G, Forstner AJ, McQuillin A, Ripke S, Trubetskoy V, et al. Genome-wide association study identifies 30 loci associated with bipolar disorder. Nat Genet. 2019;51:793–803.
Jansen PR, Watanabe K, Stringer S, Skene N, Bryois J, Hammerschlag AR, et al. Genome-wide analysis of insomnia in 1,331,010 individuals identifies new risk loci and functional pathways. Nat Genet. 2019;51:394–403.
Howard DM, Adams MJ, Clarke TK, Hafferty JD, Gibson J, Shirali M, et al. Genome-wide meta-analysis of depression identifies 102 independent variants and highlights the importance of the prefrontal brain regions. Nat Neurosci. 2019;22:343–52.
Arnold PD, Askland KD, Barlassina C, Bellodi L, Bienvenu OJ, Black D, et al. Revealing the complex genetic architecture of obsessive-compulsive disorder using meta-analysis. Mol Psychiatry. 2018;23:1181–8.
Nievergelt CM, Maihofer AX, Klengel T, Atkinson EG, Chen C-Y, Choi KW, et al. International meta-analysis of PTSD genome-wide association studies identifies sex- and ancestry-specific genetic risk loci. Nat Commun. 2019;10:4558.
Ripke S, Neale BM, Corvin A, Walters JTR, Farh KH, Holmans PA, et al. Biological insights from 108 schizophrenia-associated genetic loci. Nature. 2014;511:421–7.
Yu D, Sul JH, Tsetsos F, Nawaz MS, Huang AY, Zelaya I, et al. Interrogating the Genetic Determinants of Tourette's Syndrome and Other Tic Disorders Through Genome-Wide Association Studies. Am J Psychiatry. 2019;176:217–27.
1000 Genomes Project Consortium. A global reference for human genetic variation. Nature. 2015;526:68–74.
Hemani G, Zheng J, Elsworth B, Wade KH, Haberland V, Baird D, et al. The MR-base platform supports systematic causal inference across the human phenome. Elife. 2018;7:e34408.
Brion M-JA, Shakhbazov K, Visscher PM. Calculating statistical power in Mendelian randomization studies. Int J Epidemiol. 2013;42(5):1497-501.
Hartwig FP, Smith GD, Bowden J. Robust inference in summary data Mendelian randomization via the zero modal pleiotropy assumption. Int J Epidemiol. 2017;46:1985–98.
Bowden J, Davey Smith G, Burgess S. Mendelian randomization with invalid instruments: effect estimation and bias detection through Egger regression. Int J Epidemiol. 2015;44:512–25.
Wu Y, Zeng J, Zhang F, Zhu Z, Qi T, Zheng Z, et al. Integrative analysis of omics summary data reveals putative mechanisms underlying complex traits. Nat Commun. 2018;9:918.
Rücker G, Schwarzer G, Carpenter JR, Binder H, Schumacher M. Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis. Biostatistics. 2011;12:122–42.
Aulchenko YS, Ripke S, Isaacs A, van Duijn CM. GenABEL: An R library for genome-wide association analysis. Bioinformatics. 2007;23:1294–6.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 22 Supplement 3, 2021: 19th International Conference on Bioinformatics 2020 (InCoB2020): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-22-supplement-3.
The study was supported by the National Natural Science Foundation of China (Nos. 81971292 and 81671328); the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning (Grant No. 1610000043); and the Innovation Research Plan supported by the Shanghai Municipal Education Commission (Grant No. ZXWF082101). The funding bodies had no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript. Publication was funded by the National Natural Science Foundation of China (No. 81971292).
Weichen Song and Wei Qian contributed equally to the manuscript.
Shanghai Mental Health Center, School of Biomedical Engineering, Shanghai Jiao Tong University School of Medicine, Shanghai Jiao Tong University, 200030, Shanghai, China
Weichen Song, Wei Qian, Weidi Wang, Shunying Yu & Guan Ning Lin
Shanghai Key Laboratory of Psychotic Disorders, 200030, Shanghai, China
Weidi Wang, Shunying Yu & Guan Ning Lin
G.N.L. designed and supervised the study. S.Y. and W.W. collected the data. W.Q. and W.S. preprocessed and analyzed the data. W.S., W.Q., W.W., and S.Y. interpreted the data. W.S. wrote the manuscript. All authors read and approved the manuscript.
Correspondence to Guan Ning Lin.
Additional file 1. Summary statistics for all instrument variables after data harmonization.
Additional file 2. Full results for DTI-Disease association analysis.
Additional file 3. Full results for ROI-Disease association analysis.
Additional file 4. Funnel plots for top MR findings, leave-one-out results for top DTI-Disorder pairs, general contribution of DTI to neuropsychiatric disease after sensitivity test adjustment, and rank-rank overlaps between MR effect and heterogeneity for the ROI-Disease analysis.
Song, W., Qian, W., Wang, W. et al. Mendelian randomization studies of brain MRI yield insights into the pathogenesis of neuropsychiatric disorders. BMC Genomics 22 (Suppl 3), 342 (2021). https://doi.org/10.1186/s12864-021-07661-8
Received: 25 April 2021
Neuropsychiatric disorders
Dysconnectivity | CommonCrawl |
A novel interaction perturbation analysis reveals a comprehensive regulatory principle underlying various biochemical oscillators
Jun Hyuk Kang and Kwang-Hyun Cho (corresponding author)
Received: 6 March 2017
Accepted: 2 October 2017
Biochemical oscillations play an important role in maintaining physiological and cellular homeostasis in biological systems. The frequency and amplitude of oscillations are regulated by numerous interactions within biomolecular networks so that the system can properly adapt to its environment. Despite advances in our understanding of biochemical oscillators, the relationship between the network structure of an oscillator and its regulatory function still remains unclear. To investigate such a relationship in a systematic way, we developed a novel analysis method, called interaction perturbation analysis, that enables direct modulation of the strength of every interaction and evaluates its consequences for the regulatory function. We applied this new method to the analysis of three representative types of oscillators.
The results of interaction perturbation analysis showed different regulatory features according to the network structure of the oscillator: (1) both frequency and amplitude were seldom modulated in simple negative feedback oscillators; (2) frequency could be tuned in amplified negative feedback oscillators; (3) amplitude could be modulated in incoherently amplified negative feedback oscillators. A further analysis of naturally-occurring biochemical oscillator models supported these different regulatory features according to their network structures.
Our results provide clear evidence that different network structures have different regulatory features in modulating the oscillation frequency and amplitude. Our findings may help to elucidate the fundamental regulatory roles of network structures in biochemical oscillations.
Biochemical oscillators
Network structure
Regulation of frequency and amplitude
Perturbation analysis
Oscillations are commonly observed phenomena in biological systems and perform crucial functions in regulating physiological or cellular processes [1]. The beating of the heart, the breathing motion of the lungs, and the circadian rhythm of sleep and wakefulness can be regarded as oscillations to maintain physiological homeostasis [2]. Glucose metabolism, cyclic adenosine monophosphate (cAMP) generation, mitogen-activated protein kinase (MAPK) signaling, and cell cycle progression are well-known cellular oscillations [3].
Oscillators appear to have different requirements for regulating frequency and amplitude depending on their biological functions. Both the frequency and amplitude of a circadian oscillator need to be regulated against fluctuations in order to maintain robust 24-h rhythms [4–6]. In the heartbeat, the frequency must increase with the intensity of physical activity [7]. In neuronal firing, proper regulation of the frequency is essential for information transmission in the brain. However, neither the heartbeat nor neuronal firing is often required to modulate its amplitude. On the other hand, in glycolytic and cAMP oscillators, regulation of the amplitude is as important as regulation of the frequency, since the amplitude plays a significant role in the activities of glycolysis and the protein kinase A (PKA) signaling pathway [8, 9].
How, then, do these oscillators meet such different requirements for regulating the frequency and amplitude? Novak et al. classified biochemical oscillators into three classes: class I oscillators (delayed negative feedback oscillators), class II oscillators (amplified negative feedback oscillators), and class III oscillators (incoherently amplified negative feedback oscillators) [10]. This classification was based solely on the network structure of the oscillator. Interestingly, however, the different regulatory requirements seem to be reflected in this classification, in view of the fact that (i) the circadian rhythm oscillator belongs to class I; (ii) the sinus node oscillator and neuronal oscillator belong to class II; and (iii) the glycolytic and cAMP oscillators fall into class III. Therefore, a particular type of network structure appears to serve a particular regulatory requirement better than other types, which implies an association between network structures and regulatory functions.
Such an association between them could also be inferred from the previous study by Tsai et al., which revealed that an interlinked positive and negative feedback structure outperforms a simple negative feedback structure in tuning the frequency of an oscillator [11]. In addition, a positive feedback was revealed to promote the oscillation of a negative feedback oscillator [12]. However, the detailed relationship between various network structures and regulatory functions has so far been only partially explored. To investigate this relationship in detail from a systems perspective, we constructed all possible three-node oscillator models with at most four links using ordinary differential equations (ODEs) to represent six conceptual network structures of biochemical oscillators, and then performed interaction perturbation to systematically analyze the regulatory pattern of the frequency and amplitude of each model.
So far, the parameter perturbation method has been used to study the properties of oscillators. However, this method might not be adequate to analyze the network-level characteristics of oscillators. Because a parameter can represent various biological functions (e.g., the rate of synthesis or degradation of a molecule, the strength of binding between two molecules and the sensitivity of a reaction), perturbation of a parameter may not correspond to the variation of an interaction in the network. Moreover, the same molecular interaction can be represented in multiple ways (see Additional file 1: Notes). For instance, the interaction 'X activates Y' can be represented in several ways:
$$ \frac{dY}{dt}={k}_1\cdot X $$
$$ \frac{dY}{dt}={k}_1\cdot X\cdot Y $$
$$ \frac{dY}{dt}=\frac{k_1\cdot X}{K_m+X} $$
In the three equations, the parameter k1 represents different biological processes, and thus perturbation of k1 will yield different results. In particular, a single parameter can be involved in two interactions (eq. (2) represents two interactions, 'X on Y' and 'Y on Y'), and more than one parameter can represent a single interaction (in eq. (3), both k1 and Km are involved in the interaction 'X on Y'). In these cases, the role of an interaction cannot be assessed independently through parameter perturbation analysis. A solution to such limitations is to modulate the interactions rather than the parameters. For this purpose, we developed a novel perturbation strategy called the interaction perturbation method, which directly modulates the strength of an interaction between two nodes in a network. By using this method, we found that a strong association exists between network structures and the regulatory patterns of the frequency and amplitude of biochemical oscillators. In simple negative feedback oscillators, both the frequency and amplitude were found to be rarely modulated. In contrast, the frequency could be tuned in amplified negative feedback oscillators, while the amplitude could be modulated in incoherently amplified negative feedback oscillators. Our analysis shows that different regulatory properties can emerge from different network structures of biochemical oscillators.
Analyses of 3-node biochemical oscillators
Because biochemical oscillator models are too diverse in their size and complexity to be investigated individually, we constructed all possible representative three-node oscillator models consisting of at most four links. We began by determining the parameter sets and then conducted analyses of each model based on the interaction perturbation method. The procedures are described in detail in the METHODS (Fig. 1 and Additional file 1: Figure S1).
Analysis workflows for three-node oscillator models. Step 1. Construct six 3-node oscillator models using first-order ODEs; Step 2. Generate 1000 random parameter sets for each ODE; Step 3. Reformulate first-order ODEs into second-order ODEs by differentiation with respect to time and locate elements of Jacobian matrix by decomposition of the reformulated second-order ODEs; Step 4. Establish the conditions for the perturbations: determine the type of interactions to be perturbed and the strength of the perturbations; Step 5. Conduct perturbations under the established conditions; Step 6. Measure the resultant frequency and amplitude; and Step 7. Create density plots to depict the results schematically
Network structures of six 3-node biochemical oscillator models
Each model includes three nodes (X, Y, and Z) and nine possible types of interactions (Lxx, Lxy, Lxz, Lyx, Lyy, Lyz, Lzx, Lzy, and Lzz) (Fig. 2). In all six models, the term X, Y, and Z denote the 'activator', 'inhibitor', and 'mediator', respectively, that is, X activates Y; Y inhibits X, and Z mediates the activation or inhibition.
Schematic diagrams of the network structures of the six 3-node biochemical oscillator models. Each model includes three nodes (X, Y, and Z) and nine possible interactions (Lxx, Lxy, Lxz, Lyx, Lyy, Lyz, Lzx, Lzy, and Lzz). The term 'Lpq' denotes an interaction in which node P is influenced by node Q. For instance, Lxy denotes an interaction in which node X is influenced by node Y. The figure shows the six network structures: a simple NFO; b activator-amplified NFO; c inhibitor-amplified NFO; d type 1 incoherently-amplified NFO; e type 2 incoherently-amplified NFO; and (f) type 3 incoherently-amplified NFO in order of appearance
The simple negative feedback oscillator (simple NFO) has a negative feedback loop only (Fig. 2a). For the simple NFO to be able to oscillate, at least three nodes have to be included in its negative feedback loop since the time delay required for the sustenance of the oscillation cannot be sufficiently provided with only two nodes. Adding a third node (denoted by Z in Fig. 2a) generates an appropriate time delay. In this structure, Lyx, Lzy, and Lxz form a negative feedback loop where X activates Y directly, and Y inhibits X through Z, which is consistent with the denoted function of X and Y: X as 'activator', and Y as 'inhibitor'. In the activator-amplified negative feedback oscillator (activator-amplified NFO), X and Y form a two-node negative feedback loop (Lxy and Lyx), and Z amplifies X (Lxz and Lzx) (Fig. 2b). This amplification by Z plays an important role in maintaining oscillation [13]. In the inhibitor-amplified negative feedback oscillator (inhibitor-amplified NFO), X and Y form a two-node negative feedback loop (Lxy and Lyx), and Z amplifies Y (Lyz and Lzy) (Fig. 2c). Like the activator-amplified NFO, the amplification by Z is essential for the maintenance of oscillation [13]. Each incoherently-amplified negative feedback oscillator (incoherently-amplified NFO) has a negative feedback loop containing one incoherent link. We constructed three incoherently-amplified NFOs: type 1 incoherently-amplified NFO; type 2 incoherently-amplified NFO; and type 3 incoherently-amplified NFO (Fig. 2d, e, and f). Among all the types of incoherently-amplified NFOs, Lyx, Lzy and Lxz form a negative feedback loop containing an incoherent link. An incoherent link makes the function of a node become inconsistent with its denoted function as an activator or inhibitor. For instance, in the type 1 incoherently-amplified NFO, Y inhibits X via Z (Lzy and Lxz) and directly activates X (Lxy) simultaneously. Thus, Y is no longer an 'inhibitor'. The incoherent link is Lxy in the type 1 and type 2 incoherently-amplified NFO and Lyz in the type 3 incoherently-amplified NFO.
According to the classification by Novak et al., the above six models can be classified into three classes: the simple NFO belongs to class I oscillators; the activator-amplified NFO and the inhibitor-amplified NFO belong to class II oscillators; and all types of incoherently-amplified NFOs belong to class III oscillators [10].
Analysis results of the six 3-node oscillation models
To investigate the regulatory patterns of the three-node oscillators, we performed perturbations by weakening each interaction by 1%, 2%, 4%, and 8% (weakening factors of 0.99, 0.98, 0.96, and 0.92, respectively) and observed the changes in the frequency and amplitude. The interaction perturbation was implemented by multiplying the corresponding element of the Jacobian matrix by the weakening factor during one period of oscillation. Fig. 3 shows the results of the perturbations as density plots (see the METHODS for details). In these plots, the regulatory characteristics of the frequency and amplitude are represented by the distribution patterns of the density. Density concentrated near the point (1, 1) indicates that the frequency and amplitude are robust to perturbations. A horizontal distribution of the density denotes that the change in the frequency is larger than the change in the amplitude, and a vertical distribution denotes the opposite.
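To make the procedure concrete, below is a minimal sketch using a generic Goodwin-type three-node negative feedback oscillator; the equations, rate constants, and initial conditions are illustrative stand-ins rather than the paper's models, and the link 'Z inhibits X' is weakened by scaling Z where it enters the equation for X, a simplification of scaling the corresponding Jacobian element over one period:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, weaken=1.0):
    """Goodwin-type loop X -> Y -> Z -| X; weaken < 1 damps the Z -| X link."""
    x, y, z = s
    dx = 1.0 / (1.0 + (weaken * z) ** 12) - 0.1 * x  # Z -| X (perturbed link)
    dy = x - 0.1 * y                                  # X -> Y
    dz = y - 0.1 * z                                  # Y -> Z
    return [dx, dy, dz]

def freq_amp(weaken=1.0):
    t = np.linspace(0, 3000, 300000)
    sol = solve_ivp(rhs, (0, 3000), [0.1, 0.2, 0.3], t_eval=t,
                    args=(weaken,), rtol=1e-8, atol=1e-10)
    x, tt = sol.y[0][len(t) // 2:], t[len(t) // 2:]  # discard the transient
    peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    return 1.0 / np.mean(np.diff(tt[peaks])), (x.max() - x.min()) / 2.0

f0, a0 = freq_amp(1.0)   # unperturbed oscillation
f1, a1 = freq_amp(0.99)  # the Z -| X interaction weakened by 1%
print(f"frequency ratio {f1 / f0:.4f}, amplitude ratio {a1 / a0:.4f}")
```

The printed ratios correspond directly to one point in the density plots described above.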
The density plots of the six 3-node oscillator models. In these plots, the regulatory characteristics of the frequency and amplitude are represented by the distribution patterns of the density. The density is increasing over a continuum starting from white followed by yellow, red, and black. This figure shows the six network structures: a simple NFO; b activator-amplified NFO; c inhibitor-amplified NFO; d type 1 incoherently-amplified NFO; e type 2 incoherently-amplified NFO; and (f) type 3 incoherently-amplified NFO in order of appearance
To represent the results quantitatively, we grouped the changes in the frequency and amplitude into three patterns: In pattern R, both the frequency and amplitude changed by less than 1%; in pattern F, either the frequency or the amplitude changed by more than 1% and the changes in the frequency were greater than the changes in the amplitude; in pattern A, either the frequency or the amplitude changed by more than 1% and the changes in the amplitude were greater than the changes in the frequency. Table 1 shows the distribution of the patterns in each 3-node oscillator model (a full description of the distribution of the patterns generated by the perturbation of each interaction is provided in Additional file 1: Table S1).
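In code, this grouping reduces to a small decision rule; the tie-breaking convention in the sketch below is an assumption, as the grouping description does not specify one:

```python
def classify_pattern(freq_ratio: float, amp_ratio: float, tol: float = 0.01) -> str:
    """Assign pattern R, F, or A from post-/pre-perturbation ratios."""
    d_freq, d_amp = abs(freq_ratio - 1.0), abs(amp_ratio - 1.0)
    if d_freq < tol and d_amp < tol:
        return "R"                         # robust: both change by less than 1%
    return "F" if d_freq > d_amp else "A"  # F: frequency-dominated change
```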
Table 1 Regulatory patterns of the frequency and amplitude arising from interaction perturbation, giving the distribution (%) of patterns R, F, and A for each network structure (Simple NFO, Activator-amplified NFO, Inhibitor-amplified NFO, and Type 1 incoherently-amplified NFO)
The simple NFO showed the highest robustness to perturbations regardless of the types of perturbed interactions (Fig. 3a and Additional file 1: Figure S2). The rates of change in both the frequency and amplitude were less than 1% in 92.9% of the perturbation results (Table 1) and are depicted as the darkest density concentrated on the point (1, 1) (Fig. 3a).
In the activator-amplified NFO and inhibitor-amplified NFO, the change in the frequency was larger than the change in the amplitude. The results are shown in Fig. 3b and c, in which the density is spread in a nearly horizontal direction. These changes in the frequency were caused by the perturbations of Lxx or Lxy (Additional file 1: Figure S3 and Additional file 1: Figure S4). In both oscillators, pattern F was observed in more than 30% of the perturbation results.
In contrast to the regulatory patterns observed in the activator-amplified NFO and the inhibitor-amplified NFO, all the incoherently-amplified NFOs showed moderate changes in the amplitude. In the type 1 incoherently-amplified NFO, the amplitude was slightly more adjustable to perturbations than the frequency, while in the type 2 incoherently-amplified NFO, the amplitude was changed to a large extent (Fig. 3d and e). In the type 3 incoherently-amplified NFO, the frequency and amplitude were changed to various extents (Fig. 3f). In both the type 2 and type 3 incoherently-amplified NFOs, pattern A was observed in more than 50% of the perturbation results.
For the six 3-node oscillator models, the perturbation results obtained by weakening each interaction by 2%, 4%, or 8% had qualitatively the same regulatory patterns as those obtained by weakening each interaction by 1% (Additional file 1: Figures S2-S7).
In summary, a distinct regulatory pattern was observed in each 3-node oscillator. Class I oscillator (the simple NFO) is robust to perturbations while for class II oscillators (the activator-amplified NFO and the inhibitor-amplified NFO), the frequency can be selectively regulated. In class III oscillators (types 1, 2, and 3 incoherently-amplified NFOs), the amplitude can be regulated. Based on these observations, we deduced the regulatory principle that the differences in network structures give rise to different regulatory patterns of the frequency and amplitude.
Mathematically controlled comparisons between structurally related biochemical oscillators
Class I, class II and class III oscillators are structurally related to one another, and their structural differences arise from an added link. A class II oscillator can be formed by adding a link to the activator (X) or inhibitor (Y) of a class I oscillator. A class III oscillator can be formed by adding an incoherent link to a class I oscillator.
This prompted us to assume that the added links could bring about changes to the regulatory patterns of the oscillators. To examine this idea, we performed mathematically controlled comparisons between structurally related oscillators.
Construction of structurally related three-node models for mathematically controlled comparisons
We developed three additional three-node oscillator models, each of which adds one link to the backbone of a simple NFO. A self-activating positive feedback link was added to the activator (X) and inhibitor (Y) of the simple NFO to generate an activator-amplified NFO variant and an inhibitor-amplified NFO variant, both of which belong to class II oscillators. Adding Lxy to the simple NFO generated a variant of the type 1 incoherently-amplified NFO (hereinafter called the type 1 incoherently-amplified NFO variant), which belongs to class III oscillators. Simulation of the simple NFO was performed with a representative parameter set suggested by Novak et al. [10]. The parameters of the newly generated oscillators (the activator-amplified NFO variant, the inhibitor-amplified NFO variant, and the type 1 incoherently-amplified NFO variant) were determined with methods for mathematically controlled comparisons [14]. The full ODEs are provided in Additional file 1: Eq. A1, and the full parameters are provided in Additional file 1: Table S3.
Analysis results of the structurally related models
Perturbations on the oscillators were performed by weakening each interaction by 1% during one period of oscillation. A distinct regulatory pattern for each model could be identified despite the fact that the frequency and amplitude changed by less than 1% compared to the unperturbed cases in all four models (Fig. 4). In the activator-amplified NFO variant and the inhibitor-amplified NFO variant, the frequency was more adjustable than the amplitude, whereas in the type 1 incoherently-amplified NFO variant, the amplitude was more adjustable than the frequency. Overall, adding an amplifying link could enhance the ability to regulate the frequency of the oscillator whereas adding an incoherent link could enhance the ability to regulate the amplitude of the oscillator.
Mathematically controlled comparisons among the simple NFO, the activator-amplified NFO variant, the inhibitor-amplified NFO variant, and the type 1 incoherently-amplified NFO variant. Schematic representations and frequency-amplitude plots for the four oscillator models are presented for comparison. In these plots, the changes in the frequency and amplitude due to the perturbations are expressed as a ratio to the frequency and amplitude before the perturbations
Analyses of naturally-occurring biochemical oscillator models
To explore whether the regulatory principle suggested here could also apply to naturally-occurring biochemical oscillator models, we analyzed nine well-known biochemical oscillator models constructed from experimental data. For each model, perturbations were conducted by weakening each interaction by 1% during one period of oscillation. The subsequent changes in the frequency and amplitude are shown in Fig. 5. The ODEs and the parameters are provided in Additional file 1: Eq. A2.
Fig. 5 Analyses of naturally-occurring biochemical oscillator models. The changes in the frequency and amplitude are represented in the frequency-amplitude plots for the following: a circadian rhythm model by Leloup et al.; b circadian rhythm model by Goldbeter; c repressilator by Elowitz and Leibler; d sinus node model by Yanagihara et al.; e neuronal model by Hodgkin and Huxley; f cell cycle model by Pomerening et al.; g cAMP model by Martiel and Goldbeter; h glycolysis model by Sel'kov; and i glycolysis model by Higgins. In these plots, the changes in the frequency and amplitude due to perturbations are expressed as a ratio to the frequency and amplitude before the perturbations
The circadian rhythm model by Goldbeter [15], the circadian rhythm model by Leloup et al. [16], and the repressilator [17] are well-known examples of class I oscillators (a class I oscillator consists of a negative feedback loop only). These oscillators showed the highest robustness to perturbations: both the frequency and the amplitude barely changed in response to the perturbations. The sinus node model by Yanagihara et al. [18], the neuronal excitation model by Hodgkin and Huxley [19], and the cell cycle model by Pomerening et al. [20] can be classified as class II oscillators (a class II oscillator includes a positive feedback). These class II oscillators were better at adjusting the frequency than the amplitude: the perturbations induced larger changes in the frequency than in the amplitude. On the other hand, in the cAMP oscillator model [21] and the glycolysis models [22, 23], which belong to class III oscillators (a class III oscillator includes an incoherent link), the amplitude was more adjustable than the frequency: the amplitude changed more than the frequency.
In summary, the regulatory principle suggested here in the three-node oscillator models could also apply to naturally-occurring biochemical oscillator models.
Our analysis based on the interaction perturbation method revealed the regulatory principle that different network structures of biochemical oscillators give rise to different regulatory patterns of the frequency and amplitude; for class I oscillators, the frequency and amplitude are seldom regulated; for class II oscillators, the frequency is more adjustable than the amplitude; for class III oscillators, the amplitude is more adjustable than the frequency. The results of the mathematically controlled comparisons further demonstrated the reliability of this regulatory principle and its potential for application to naturally-occurring biochemical oscillator models.
In systems biology, the parameter perturbation method has been widely used to investigate the relationship between network structures and their biological functions [24–29]. In addition, various mathematical methods have been developed to analyze the characteristics of oscillators. The sensitivity heat map and parameter sensitivity spectrum developed by Rand et al. have been utilized to provide a more integrated picture of the overall sensitivities of a system and to probe how the function of a network depends upon its structure and parameters [30]. Otero-Muras et al. proposed an optimization-based approach to investigate which environmental conditions drive a specific oscillatory network [31]. The state sensitivity decomposition method developed by Wilkins et al. is useful for analyzing the influence of parameter changes on the period, amplitude, and relative phase of oscillation [32]. In addition, the robustness and dynamical characteristics of various oscillatory systems can be compared effectively using bifurcation analysis, parameter sensitivity analysis, and stochastic simulation [33, 34]. However, since all of these methods ultimately focus on parameters, they can be classified as parameter perturbation analyses in a broad sense. From the standpoint of network topology, on the other hand, the biological meaning of a parameter does not always correspond to a specific interaction. Hence, it remains difficult to map the perturbation of a particular interaction in a regulatory network onto the perturbation of a parameter in the corresponding mathematical model.
The interaction perturbation method proposed in this study has several advantages over the parameter perturbation method. First, the result of an interaction perturbation analysis can be properly interpreted in the context of a biological network, since the perturbation directly modulates a link of the network structure. Second, the method provides a more pertinent comparison between different network structures by placing the focus of the comparison on the difference of an interaction in the network rather than on indirect differences in the underlying biological processes. If the network structures have the same number of nodes, the comparison can be performed more effectively since they have Jacobian matrices of the same size (three-node networks have a 3-by-3 Jacobian matrix). Third, the method simplifies the analysis procedure. Previously suggested methods for the analysis of biochemical oscillators (e.g., the optimization-based method and the state sensitivity decomposition method) can provide meaningful insights into the nature of oscillators, but most of them require a certain level of expert knowledge of mathematics [31, 32]. In contrast, with a given parameter set, we only need to transform the first-order ODEs into second-order ODEs and integrate the latter using a perturbed Jacobian matrix, without going through other complicated procedures such as selecting a bifurcation parameter, identifying a Hopf bifurcation point, and performing numerical continuation [35].
In this study, we demonstrated that the regulatory patterns of the frequency and amplitude depend on the network structures of the biochemical oscillators. Notably, even within the same class of network structures, different regulatory patterns were observed. For instance, for the activator-amplified NFO the amplitude was adjustable, although over a narrow range, whereas for the inhibitor-amplified NFO the modulation of the amplitude was negligible. The regulatory range of the amplitude was wider for the type 2 incoherently-amplified NFO than for the type 1 or type 3 incoherently-amplified NFOs. For the type 3 incoherently-amplified NFO, both the amplitude and the frequency could be regulated to varying extents. Thus, not only the overall network topology but also the interlinkage of nodes appears to be involved in shaping the regulatory patterns of the frequency and amplitude.
It is also noteworthy that the chosen parameter set and the kinds of interactions were identified as minor contributory factors affecting the regulatory patterns of the frequency and amplitude. For class I oscillators, the frequency and amplitude changed by less than 1% for most of the parameter sets (pattern R), except for a few parameter sets where the amplitude changed by more than 1% (pattern A). For class II oscillators, the frequency was adjustable for more than one third of the parameter sets (pattern F), whereas for the others the frequency was unchanged (patterns R and A). For class III oscillators, the amplitude was adjustable for a relatively greater share of the parameter sets (pattern A), whereas for the others the amplitude was unchanged (patterns R and F).
For the activator-amplified NFO and the inhibitor-amplified NFO, perturbation of Lxx or Lxy adjusted the frequency, whereas perturbations of the other interactions did not significantly change the frequency or amplitude. In the incoherently-amplified NFOs, likewise, only some kinds of interactions appear to be involved in modulating the amplitude.
Our analyses of naturally-occurring biochemical oscillator models showed that the regulatory principle suggested here may apply to naturally-occurring biochemical oscillation models. A question then arises as to what functional benefits a particular network structure confers on naturally-occurring biochemical oscillators. Both the frequency and amplitude of a circadian oscillator need to be regulated against fluctuations in order to maintain robust 24-h rhythms [36, 37]; this requirement can be satisfied by the network structure of a class I oscillator. When an incoherent feedforward structure is added to such a circadian oscillator, a stable oscillator with a different frequency can be generated and used to meet other biological needs [38]. Sinus nodal cells and neurons should be able to tune their frequency to transmit information to neighboring cells appropriately [39], and cell cycles have to regulate the rate of their progression in response to environmental changes; to this end, the network structure of a class II oscillator might be suitable. In the glycolysis and cAMP models, regulation of the amplitude has greater importance, since the amplitudes of phosphofructokinase and cAMP, which are actively involved in metabolic and signaling processes respectively, play a significant role in cellular functions. Therefore, the network structure of a class III oscillator can be a better choice for these cases.
From an evolutionary point of view, the fact that class II and class III oscillators are frequently found in nature can be taken as evidence that they may have performance advantages over class I oscillators. This study suggests that such advantages might lie in the ability to tune the frequency in class II oscillators and the ability to regulate the amplitude in class III oscillators.
Our study has the following limitations. First, we employed the Jacobian matrix, which describes the interactions between state variables, instead of the monodromy matrix (i.e., the state transition matrix over one period) or Floquet multipliers (i.e., the eigenvalues of the monodromy matrix), which have been widely used to determine the oscillatory properties of a system with a limit cycle. Therefore, our scope of analysis was largely confined to examining the influence of interaction perturbations on the oscillatory properties. Second, interaction perturbation was performed during one cycle of oscillation because longer perturbations destabilized the oscillation in most cases. Third, our analysis of structurally related models may not suffice to investigate the general characteristics of each structure in greater detail, since it was performed under pre-defined parameter combinations.
From the analyses based on the interaction perturbation method, we found a new regulatory principle that differences in network structures can give rise to different regulatory patterns of the frequency and amplitude. This finding could serve as a basis for further investigation into the underlying mechanism for the regulation of the frequency and amplitude in existing biochemical oscillators as well as for designing synthetic oscillators with a specific regulatory function.
Analysis procedures for 3-node biochemical oscillators
We constructed six representative oscillator models and generated random parameter sets for each model, ensuring sustained oscillation under those parameter sets. We then converted the interactions in each model into the corresponding elements of the Jacobian matrix and perturbed those elements. After the perturbations, we measured the resulting changes in the frequency and amplitude. The analysis workflows are described in Fig. 1 and Additional file 1: Figure S1.
Construction of six representative models for biochemical oscillators
Each three-node oscillator model was described in terms of three coupled ODEs with the combinatorial use of mass action laws and Michaelis-Menten kinetics. The ODEs of the six 3-node oscillator models are provided in the Additional file 1: Eq. A1. In every oscillator model, the oscillations were sustained under specific parameter sets. After an initial transient, integrations that started under different initial conditions quickly converged to a common trajectory with the same frequency and amplitude, namely, a limit-cycle oscillation.
Random parameter generation for the six 3-node biochemical oscillator models
Determining the parameter values constitutes an important process to create sustained oscillations. We chose to determine parameter values by extensive search of the parameter space because an analytical approach does not lend itself to dealing with a large number of parameters.
Random parameter sets were generated for each three-node oscillator model by selecting parameters from an exponential distribution within the range of 0.001 to 1000, using the Latin hypercube sampling method [40]. This range corresponds to biologically reasonable values typically used to model biological systems [41–45]. All parameters except for the Hill coefficients were randomly generated. For each parameter set, we verified whether the model produced a limit-cycle oscillation [46]. Through this process, a total of 1000 parameter sets were generated for each model, and consequently, each three-node oscillator model yielded 1000 parameter-value-assigned models.
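As a rough illustration of this sampling step, the sketch below draws Latin hypercube samples and spreads them over 0.001–1000 on a logarithmic scale. The paper does not specify its implementation, so the library calls, the number of free parameters, and the log-scale mapping are our assumptions:

```python
import numpy as np
from scipy.stats import qmc  # SciPy >= 1.7

N_SETS, N_PARAMS = 1000, 10          # 10 free parameters is a placeholder
LO, HI = np.log10(0.001), np.log10(1000.0)

sampler = qmc.LatinHypercube(d=N_PARAMS, seed=0)
unit = sampler.random(n=N_SETS)               # samples in [0, 1)^d
param_sets = 10.0 ** (LO + unit * (HI - LO))  # spread over 0.001..1000

# Each candidate set would then be kept only if the model shows a
# sustained limit-cycle oscillation under that parameterization.
```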
Algebraic representation of interaction using Jacobian matrix
A network topology shows clearly whether an interaction is activating or inhibiting. However, when the network is represented by a system of ODEs, the function of an interaction is not easily identifiable. To represent an interaction as an algebraic object, the Jacobian matrix is a reasonable choice, since each element of the Jacobian matrix corresponds to an interaction. Because an element of the Jacobian matrix cannot be obtained directly from the first-order ODEs, we differentiated the first-order ODEs with respect to time to generate a second-order ODE system (step 3 in Fig. 1), which can be written as the product of the Jacobian matrix and the first-order derivatives: d²x/dt² = J(x)·(dx/dt). Here, the Jacobian matrix is defined as a_{ij} = ∂f_i/∂x_j, where i and j denote the row and column indices, respectively. A non-zero a_{ij} means that variable x_j influences the evolution of variable x_i; in other words, an interaction from node j to node i exists [47].
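To make this construction concrete, here is a minimal sketch using a Goodwin-type negative feedback loop as a stand-in for the simple NFO. The paper's actual ODEs and parameters live in its Additional file 1, so the model, the parameter values, and every function name below are illustrative assumptions only:

```python
import numpy as np

# Toy Goodwin-type 3-node negative feedback loop (stand-in for the simple NFO).
A, B, N = 10.0, 1.0, 10.0  # illustrative values, chosen so this toy loop oscillates

def f(x):
    """First-order ODE right-hand side, dx/dt = f(x)."""
    X, Y, Z = x
    return np.array([A / (1.0 + Z**N) - B * X,   # Z represses X
                     X - B * Y,                  # X activates Y
                     Y - B * Z])                 # Y activates Z

def jacobian(x):
    """a_ij = df_i/dx_j; each non-zero entry is one network interaction."""
    X, Y, Z = x
    dXdZ = -A * N * Z**(N - 1) / (1.0 + Z**N)**2
    return np.array([[-B, 0.0, dXdZ],
                     [1.0, -B, 0.0],
                     [0.0, 1.0, -B]])

def second_order_rhs(t, s, mask):
    """d^2x/dt^2 = (J(x) * mask) @ dx/dt, written as a first-order system
    in the extended state s = [x, v] with v = dx/dt. `mask` is all ones
    except for the weakening factor at the perturbed interaction."""
    x, v = s[:3], s[3:]
    return np.concatenate([v, (jacobian(x) * mask) @ v])
```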
First-order ODEs were numerically integrated using various initial conditions until they converged to a limit-cycle oscillation. In order to confirm that second-order ODEs are good approximations to first-order ODEs, we simulated second-order ODEs and first-order ODEs using a point in a limit-cycle oscillation as an initial point. The results showed almost no differences (see Additional file 1: Table S2 for the mean differences between the two simulations).
Perturbation conditions
Because the aim of this study is to investigate the differences in the regulatory patterns of the frequency and amplitude between network structures, we fixed the following perturbation conditions for each parameter-value-assigned model so that the observed regulatory patterns would not depend on them: the type of interaction to be perturbed, the perturbation strength, the perturbation duration, and the perturbation starting point.
Each type of interaction was perturbed one at a time, since the function of an individual interaction may not be distinguishable if two or more interactions are perturbed simultaneously. Interactions were weakened by 1%, 2%, 4%, or 8%, corresponding to weakening factors of 0.99, 0.98, 0.96, and 0.92, respectively. These degrees were selected because weakening by more than 8% often destabilized the limit-cycle oscillation.
After determining the type of interaction and the weakening factor, we multiplied the corresponding element of the Jacobian matrix by the weakening factor to construct the perturbed Jacobian matrix. For instance, to perturb Lyx with a weakening factor of 0.99, we multiplied the element in the second row and first column of the Jacobian matrix by 0.99, leaving all other elements unchanged.
The perturbations were applied during one period of oscillation, since the results could vary with the oscillatory phase. We established 40 perturbation starting points evenly distributed along the cycle to prevent the trajectories from depending on where the perturbations began.
Perturbation processes
The first-order ODEs were numerically integrated from various initial conditions until the trajectory reached a perturbation starting point. We then simulated the second-order ODEs with the perturbed Jacobian matrix during one period of oscillation. After that, the second-order ODEs were integrated with the unperturbed Jacobian matrix until the oscillation stabilized.
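Continuing the toy sketch above (which imported numpy as np and defined f, jacobian, and second_order_rhs), the three phases of this protocol could be wired together as follows; the integration spans, tolerances, and the choice of perturbed element are again assumptions:

```python
from scipy.integrate import solve_ivp

# Phase 0: relax the first-order system onto its limit cycle.
relax = solve_ivp(lambda t, x: f(x), (0.0, 500.0), [0.1, 0.2, 0.3],
                  rtol=1e-9, atol=1e-12)
x0 = relax.y[:, -1]
period = 5.0  # placeholder; in practice estimated from peak spacing

# Phase 1: weaken one interaction (here Lyx: row 2, column 1) by 1%
# during one period, integrating the second-order system.
mask = np.ones((3, 3)); mask[1, 0] = 0.99
s0 = np.concatenate([x0, f(x0)])      # v = dx/dt at the starting point
pert = solve_ivp(second_order_rhs, (0.0, period), s0, args=(mask,),
                 rtol=1e-9, atol=1e-12)

# Phase 2: continue with the unperturbed Jacobian until stabilized.
post = solve_ivp(second_order_rhs, (0.0, 50 * period), pert.y[:, -1],
                 args=(np.ones((3, 3)),), rtol=1e-9, atol=1e-12,
                 dense_output=True)
```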
Representation of perturbation results
We measured the frequency and amplitude after the oscillation stabilized following a perturbation, excluding cases of damped oscillation. The percentages of parameter sets (out of all parameter sets for the 3-node oscillators) that did not show sustained limit-cycle oscillations are provided in Additional file 1: Table S4. The changes in the frequency and amplitude are presented as ratios to the frequency and amplitude before the perturbation; for instance, the ratio (2, 0.5) means that the frequency doubled while the amplitude halved.
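One straightforward way to extract these quantities from the stabilized trajectory is peak detection. The sketch below, continuing the toy example, assumes the peak-detection settings and that the reference values f0 and a0 were measured in the same way on an unperturbed run:

```python
from scipy.signal import find_peaks

def measure(sol, t_end, period, node=2):
    """Estimate frequency and amplitude of one variable over the last
    ~10 cycles of a dense solution, via peak detection."""
    t = np.linspace(t_end - 10 * period, t_end, 5000)
    z = sol(t)[node]
    hi, _ = find_peaks(z)
    lo, _ = find_peaks(-z)
    freq = 1.0 / np.mean(np.diff(t[hi]))      # 1 / mean peak spacing
    amp = z[hi].mean() - z[lo].mean()         # mean peak-to-trough height
    return freq, amp

# Ratios relative to the unperturbed reference run:
f1, a1 = measure(post.sol, post.t[-1], period)
freq_ratio, amp_ratio = f1 / f0, a1 / a0      # f0, a0: reference values
```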
Each three-node oscillator model generated around 1,000,000 perturbation results. To make the results more intuitive and easily understood, we depicted them in density plots: we divided the frequency-amplitude domain into 10,000 equal-sized sub-domains, counted the results falling into each sub-domain, converted the counts into percentages of the total, and color-coded each sub-domain according to its density. In the plots, density increases over a continuum from white through yellow and red to black.
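Such a density plot can be produced with a 2-D histogram. A minimal sketch, assuming the per-perturbation ratios have been collected in arrays freq_ratios and amp_ratios, and that matplotlib's reversed "hot" colormap (white to yellow to red to black) approximates the paper's color scale:

```python
import matplotlib.pyplot as plt

# freq_ratios, amp_ratios: arrays collecting the per-perturbation ratios.
H, xe, ye = np.histogram2d(freq_ratios, amp_ratios, bins=100)  # 100 x 100 = 10,000 sub-domains
density = 100.0 * H / H.sum()          # percentage of all results per cell

plt.imshow(density.T, origin="lower", cmap="hot_r",   # white-yellow-red-black
           extent=[xe[0], xe[-1], ye[0], ye[-1]], aspect="auto")
plt.xlabel("frequency ratio")
plt.ylabel("amplitude ratio")
plt.colorbar(label="% of results")
plt.show()
```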
Mathematically controlled comparisons
To determine the parameter sets for the activator-amplified NFO variant, the inhibitor-amplified NFO variant, and the type 1 incoherently-amplified NFO variant, we adopted the method for mathematically controlled comparisons proposed by Michael A. Savageau [14]. First, the values of the parameters for the unaltered processes of one system are assumed to be identical to those of the corresponding parameters of the other system. For instance, in the activator-amplified NFO variant, the degradation of X, synthesis of Y, degradation of Y, synthesis of Z, and degradation of Z are unaltered processes relative to the simple NFO, so the parameters for those processes (kdx, k1, k2, Km, and k3) are set equal to the corresponding parameters of the simple NFO. Second, parameters associated with altered processes are free to assume any values; in the activator-amplified NFO variant, these are the parameters related to the synthesis of X. Third, the parameters of the altered processes are determined by imposing constraints on the external behavior of the system. For the three oscillator models above, two constraints were imposed to determine the free parameters: (i) integration of the ODE models with the specified parameter sets must generate a limit-cycle oscillation; and (ii) the frequency and amplitude of the oscillation must be similar to those of the simple NFO.
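In code, the third step reduces to an accept/reject test applied to each candidate set of free parameters. A minimal sketch, where the candidate's frequency and amplitude would come from simulating the variant (e.g., with a helper like measure() above), and where the 5% similarity tolerance is our assumption rather than a value from the paper:

```python
def satisfies_constraints(result, ref_freq, ref_amp, tol=0.05):
    """result: (frequency, amplitude) of a candidate parameter set, or
    None if the candidate produced no limit cycle. The tolerance tol
    is an assumed value, not taken from the paper."""
    if result is None:
        return False
    freq, amp = result
    return (abs(freq / ref_freq - 1.0) < tol and
            abs(amp / ref_amp - 1.0) < tol)

# A candidate within 2-3% of the reference oscillation is accepted:
print(satisfies_constraints((1.02, 0.97), ref_freq=1.0, ref_amp=1.0))  # True
```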
After determination of the parameters, perturbations were performed in the same manner as in the conceptual three-node models. Then, the changes in the frequency and amplitude in the four oscillator models (the simple NFO, the activator-amplified NFO variant, the inhibitor-amplified NFO variant, and the type 1 incoherently-amplified NFO variant) were compared.
cAMP: cyclic adenosine monophosphate
MAPK: mitogen-activated protein kinase
NFO: negative feedback oscillator
ODE: ordinary differential equation
PKA: protein kinase A
We would like to thank Ho-Sung Lee and Je-Hoon Song for their valuable discussions on the initial manuscript.
This work was supported by the National Research Foundation of Korea (NRF) grants funded by the Korea Government, the Ministry of Science, ICT & Future Planning (2017R1A2A1A17069642, 2015M3A9A7067220, and 2013M3A9A7046303). It was also supported by the KAIST Future Systems Healthcare Project from the Ministry of Science, ICT & Future Planning, the KUSTAR-KAIST Institute, Korea under the R&D program supervised by the KAIST, and the grant of the Korean Health Technology R&D Project, Ministry of Health & Welfare, Republic of Korea (HI13C2162).
All model equations and parameters for the replication of the results are provided in the Additional file 1.
K-HC supervised this study; JHK and K-HC designed the study; JHK and K-HC conducted modeling, simulations, and analysis of the simulation results; JHK and K-HC wrote the manuscript. All authors read and approved the final manuscript.
Additional file 1: Details on methods and results of the interaction perturbation analysis for comprehensive assessment of the regulatory principle underlying various biochemical oscillators. (PDF 3894 kb)
Laboratory for Systems Biology and Bio-inspired Engineering, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Republic of Korea
Hess B, Boiteux A. Oscillatory phenomena in biochemistry. Annu Rev Biochem. 1971;40:237–58.
Glass L. Synchronization and rhythmic processes in physiology. Nature. 2001;410:277–84.
Maini P. Biochemical oscillations and cellular rhythms: the molecular basis of periodic and chaotic behaviour by Albert Goldbeter, Cambridge University Press, 1996. ISBN 0 521 40307 3. Trends Biochem Sci. 1996;21:403.
Reppert SM, Weaver DR. Coordination of circadian timing in mammals. Nature. 2002;418:935–41.
Dunlap JC. Molecular bases for circadian clocks. Cell. 1999;96:271–90.
Rand DA, Shulgin BV, Salazar D, Millar AJ. Design principles underlying circadian clocks. J R Soc Interface. 2004;1:119–30.
Hall JE, Guyton AC. The circulation. In: Guyton and Hall physiology review. Philadelphia: Elsevier; 2011. p. 41–70.
Tasken K, Aandahl EM. Localized effects of cAMP mediated by distinct routes of protein kinase A. Physiol Rev. 2004;84:137–67.
Pilkis SJ, Granner DK. Molecular physiology of the regulation of hepatic gluconeogenesis and glycolysis. Annu Rev Physiol. 1992;54:885–909.
Novák B, Tyson JJ. Design principles of biochemical oscillators. Nat Rev Mol Cell Biol. 2008;9:981–91.
Tsai TY-C, Choi YS, Ma W, Pomerening JR, Tang C, Ferrell JE. Robust, tunable biological oscillations from interlinked positive and negative feedback loops. Science. 2008;321:126–9.
Ananthasubramaniam B, Herzel H. Positive feedback promotes oscillations in negative feedback loops. PLoS One. 2014;9:e104761.
Ferrell JE, Tsai TY-C, Yang Q. Modeling the cell cycle: why do certain circuits oscillate? Cell. 2011;144:874–85.
Savageau MA. Design principles for elementary gene circuits: elements, methods, and examples. Chaos. 2001;11:142.
Goldbeter A. A model for circadian oscillations in the Drosophila period protein (PER). Proc R Soc B Biol Sci. 1995;261:319–24.
Leloup J-C, Goldbeter A. A model for circadian rhythms in Drosophila incorporating the formation of a complex between the PER and TIM proteins. J Biol Rhythm. 1998;13:70–87.
Elowitz MB, Leibler S. A synthetic oscillatory network of transcriptional regulators. Nature. 2000;403:335–8.
Yanagihara K, Noma A, Irisawa H. Reconstruction of sino-atrial node pacemaker potential based on the voltage clamp experiments. Jpn J Physiol. 1980;30:841–57.
Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol. 1952;117:500–44.
Pomerening JR, Kim SY, Ferrell JE. Systems-level dissection of the cell-cycle oscillator: bypassing positive feedback produces damped oscillations. Cell. 2005;122:565–78.
Martiel JL, Goldbeter A. A model based on receptor desensitization for cyclic AMP signaling in Dictyostelium cells. Biophys J. 1987;52:807–28.
Sel'kov EE. Self-oscillations in glycolysis. 1. A simple kinetic model. Eur J Biochem. 1968;4:79–86.
Higgins J. A chemical mechanism for oscillation of glycolytic intermediates in yeast cells. Proc Natl Acad Sci. 1964;51:989–94.
Kim D, Rath O, Kolch W, Cho K-H. A hidden oncogenic positive feedback loop caused by crosstalk between Wnt and ERK pathways. Oncogene. 2007;26:4571–9.
Shin SY, Rath O, Zebisch A, Choo SM, Kolch W, Cho K-H. Functional roles of multiple feedback loops in extracellular signal-regulated kinase and Wnt signaling pathways that regulate epithelial-mesenchymal transition. Cancer Res. 2010;70:6715–24.
Shin S-Y, Yang HW, Kim J-R, Do Heo W, Cho K-H. A hidden incoherent switch regulates RCAN1 in the calcineurin–NFAT signaling network. J Cell Sci. 2011;124:82–90.
Kwon Y-K, Cho K-H. Coherent coupling of feedback loops: a design principle of cell signaling networks. Bioinformatics. 2008;24:1926–32.
Shin S-Y, Choo S-M, Kim D, Baek SJ, Wolkenhauer O, Cho K-H. Switching feedback mechanisms realize the dual role of MCIP in the regulation of calcineurin activity. FEBS Lett. 2006;580:5965–73.
Lee HS, Hwang CY, Shin SY, Kwon KS, Cho K-H. MLK3 is part of a feedback mechanism that regulates different cellular responses to reactive oxygen species. Sci Signal. 2014;7:ra52.
Rand DA. Mapping global sensitivity of cellular network dynamics: sensitivity heat maps and a global summation law. J R Soc Interface. 2008;5(Suppl 1):S59–69.
Otero-Muras I, Banga JR. Design principles of biological oscillators through optimization: forward and reverse analysis. PLoS One. 2016;11:e0166867.
Wilkins AK, Tidor B, White J, Barton PI. Sensitivity analysis for oscillating dynamical systems. SIAM J Sci Comput. 2009;31:2706–32.
Caicedo-Casso A, Kang H-W, Lim S, Hong CI. Robustness and period sensitivity analysis of minimal models for biochemical oscillators. Sci Rep. 2015;5:13161.
Caicedo-Casso A, Kang H-W, Lim S, Hong CI. Corrigendum: robustness and period sensitivity analysis of minimal models for biochemical oscillators. Sci Rep. 2016;6:18504.
Rinaldi S, Muratori S, Kuznetsov Y. Multiple attractors, catastrophes and chaos in seasonally perturbed predator-prey communities. Bull Math Biol. 1993;55:15–35.
Hastings JW, Sweeney BM. On the mechanism of temperature independence in a biological clock. Proc Natl Acad Sci. 1957;43:804–11.
Pittendrigh CS. On temperature independence in the clock system controlling emergence time in Drosophila. Proc Natl Acad Sci. 1954;40:1018–29.
Martins BM, Das AK, Antunes L, Locke JC. Frequency doubling in the cyanobacterial circadian clock. Mol Syst Biol. 2016;12:896.
Izhikevich EM, Desai NS, Walcott EC, Hoppensteadt FC. Bursts as a unit of neural information: selective communication via resonance. Trends Neurosci. 2003;26:161–7.
Dubitzky W, Wolkenhauer O, Cho K-H, Yokota H, editors. Latin hypercube sampling. In: Encyclopedia of systems biology. New York: Springer; 2013. p. 1105.
Chickarmane V, Kholodenko BN, Sauro HM. Oscillatory dynamics arising from competitive inhibition and multisite phosphorylation. J Theor Biol. 2007;244:68–76.
Liu P, Kevrekidis IG, Shvartsman SY. Substrate-dependent control of ERK phosphorylation can lead to oscillations. Biophys J. 2011;101:2572–81.
Markevich NI, Hoek JB, Kholodenko BN. Signaling switches and bistability arising from multisite phosphorylation in protein kinase cascades. J Cell Biol. 2004;164:353–9.
Qiao L, Nachbar RB, Kevrekidis IG, Shvartsman SY. Bistability and oscillations in the Huang-Ferrell model of MAPK signaling. PLoS Comput Biol. 2007;3:e184.
Shankaran H, Ippolito DL, Chrisler WB, Resat H, Bollinger N, Opresko LK, et al. Rapid and sustained nuclear–cytoplasmic ERK oscillations induced by epidermal growth factor. Mol Syst Biol. 2009;5:332.
Dhooge A, Govaerts W, Kuznetsov YA. MATCONT: a MATLAB package for numerical bifurcation analysis of ODEs. ACM Trans Math Softw. 2003;29:141–64.
Thomas R. Circular causality. IEE Proc Syst Biol. 2006;153:140–53.
August 2014, 8(3): 343-358. doi: 10.3934/amc.2014.8.343
A general construction for monoid-based knapsack protocols
Giacomo Micheli and Michele Schiavina
Institut für Mathematik, Winterthurerstrasse 190, CH-8057 Zürich, Switzerland
Received: November 2013; Revised: February 2014; Published: August 2014
We present a generalized version of the knapsack protocol proposed by D. Naccache and J. Stern at Eurocrypt 1997. Our new framework allows the construction of other knapsack protocols with similar security features. We outline a concrete example of a new protocol that uses extension fields of a finite field of small characteristic instead of the prime field $\mathbb{Z}/p\mathbb{Z}$, and that is more efficient in terms of computational cost for an asymptotically equal information rate and similar key size.
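For orientation, a toy sketch of the original Naccache-Stern knapsack over $\mathbb{Z}/p\mathbb{Z}$ follows. The parameters below are far too small to be secure, and all concrete values and helper names are our own illustrative choices:

```python
from math import gcd

# Toy Naccache-Stern knapsack over Z/pZ -- illustrative only.
p = 2**31 - 1                                  # a Mersenne prime
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]      # product 223092870 < p
s = 5
assert gcd(s, p - 1) == 1                      # secret exponent s
s_inv = pow(s, -1, p - 1)
public_key = [pow(q, s_inv, p) for q in primes]

def encrypt(bits):                             # one message bit per small prime
    c = 1
    for b, v in zip(bits, public_key):
        if b:
            c = (c * v) % p
    return c

def decrypt(c):
    u = pow(c, s, p)       # equals the product of q_i**m_i over the integers,
    return [int(u % q == 0) for q in primes]   # so divisibility recovers bits

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1]
assert decrypt(encrypt(msg)) == msg
```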
Keywords: polynomials over finite fields, Naccache-Stern protocol, monoids, public key encryption, knapsack protocols.
Mathematics Subject Classification: Primary: 94A60; Secondary: 11T7.
Citation: Giacomo Micheli, Michele Schiavina. A general construction for monoid-based knapsack protocols. Advances in Mathematics of Communications, 2014, 8 (3) : 343-358. doi: 10.3934/amc.2014.8.343
B. Chevallier-Mames, D. Naccache and J. Stern, Linear bandwidth Naccache-Stern encryption, in Security and Cryptography for Networks, (2008), 337. doi: 10.1007/978-3-540-85855-3_22.
W. Diffie and M. Hellman, New directions in cryptography, IEEE Trans. Inf. Theory, 22 (1976), 644.
T. ElGamal, A public-key cryptosystem and a signature scheme based on discrete logarithms, IEEE Trans. Inf. Theory, 31 (1985), 469. doi: 10.1109/TIT.1985.1057074.
P. Erdös, Ramanujan and I, Resonance, 3 (1998), 81.
M. E. Hellman, An overview of public key cryptography, IEEE Commun. Soc. M., 16 (1978), 24. doi: 10.1109/MCOM.1978.1089772.
J. Hoffstein, J. Pipher and J. H. Silverman, A ring based public key cryptosystem, in Algorithmic Number Theory (ANTS III), (1998), 267. doi: 10.1007/BFb0054868.
S. Kiuchi, Y. Murakami and M. Kasahara, High rate multiplicative knapsack cryptosystem (in Japanese), IEICE Tech. Report, ISEC98-26 (1998), 98.
S. Kiuchi, Y. Murakami and M. Kasahara, New multiplicative knapsack-type public key cryptosystems, IEICE Trans., E84-A (2001), 188.
G. Maze, C. Monico and J. Rosenthal, Public key cryptography based on semigroup actions, Adv. Math. Commun., 1 (2007), 489. doi: 10.3934/amc.2007.1.489.
R. J. McEliece, A public-key cryptosystem based on algebraic coding theory, DSN Progress Report, 114 (1978), 42.
M. Morii and M. Kasahara, New public key cryptosystem using discrete logarithms over GF(p) (in Japanese), IEICE Trans., J71-D (1988), 448.
D. Naccache and J. Stern, New public key cryptosystem, in Proceedings of Eurocrypt '97, (1997), 27. doi: 10.1007/3-540-69053-0_3.
T. Okamoto, K. Tanaka and S. Uchiyama, Quantum public key cryptosystem, in Advances in Cryptology - CRYPTO 2000, (2000), 147. doi: 10.1007/3-540-44598-6_9.
J. Patarin, Hidden field equations (HFE) and isomorphisms of polynomials (IP): two new families of asymmetric algorithms, in Advances in Cryptology - EUROCRYPT '96, (1996), 33.
R. Rivest, A. Shamir and L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Commun. ACM, 21 (1978), 120. doi: 10.1145/359340.359342.
M. Rosen, Number Theory in Function Fields, Springer, (2002). doi: 10.1007/978-1-4757-6046-0.
A. Salomaa, Public Key Cryptography, Springer-Verlag, (1990). doi: 10.1007/978-3-662-02627-4.
V. Shoup, New algorithms for finding irreducible polynomials over finite fields, Math. Comput., 54 (1990), 435. doi: 10.2307/2008704.
Exploring factors influencing pregnant women's attitudes, perceived subjective norms and perceived behavior control towards male involvement in maternal services utilization: baseline findings from a community-based interventional study in Rukwa, rural Tanzania
Fabiola V. Moshi ORCID: orcid.org/0000-0001-8829-27461,
Stephen M. Kibusi2 &
Flora Fabian3
Although male involvement enhances obstetric care-seeking behavior, the practice of male involvement in developing countries remains unacceptably low. Male involvement in maternal services utilization can be influenced by the attitudes, subjective norms, and perceived behavior control of female partners. Little is known about the factors influencing pregnant women's attitudes, perceived subjective norms, and perceived behavior control towards male involvement in maternal services utilization.
A baseline community-based cross-sectional study targeting pregnant women was performed from 1st June to 30th October 2017. A three-stage probability sampling technique was employed to obtain a sample of 546 pregnant women. A structured questionnaire grounded in the Theory of Planned Behavior was used. The questionnaire explored three main determinants of male involvement: attitudes towards male involvement, perceived subjective norms towards male involvement, and perceived behavior control towards male involvement.
After adjusting for confounders, the factors influencing a positive attitude towards male involvement were age at marriage [19 to 24 years (AOR = 1.568, 95% CI = 1.044–2.353); more than 24 years (AOR = 2.15, 95% CI = 1.150–1.159)]; education status [primary school (AOR = 1.713, 95% CI = 1.137–2.58)]; and economic status [earning more than one dollar per day (AOR = 1.547, 95% CI = 1.026–2.332)]. The only factor influencing perceived subjective norms was age at marriage [19 to 24 years (AOR = 1.447, 95% CI = 0.970–2.159); more than 24 years (AOR = 2.331, 95% CI = 1.261–4.308)]. The factors influencing perceived behavior control were age at marriage [more than 24 years (AOR = 2.331, 95% CI = 1.261–4.308)] and the intention to be accompanied by a male partner (AOR = 1.827, 95% CI = 1.171–2.849).
The study revealed that women who married at an older age were more likely to have positive attitudes, subjective norms, and perceived behavior control towards male involvement in maternal services utilization than those who married young. Pregnant women who had primary education and earned more than a dollar per day were more likely to have positive attitudes towards male involvement than poor and uneducated pregnant women. The study recommends an interventional study to evaluate the influence of attitudes, subjective norms, and perceived behavior control on male involvement in maternal services utilization.
Maternal mortality is a public health challenge worldwide. In 2015, an estimated 303,000 maternal deaths occurred globally [1]. Nearly all of these deaths occurred in low-resource countries [1]. In Tanzania, the estimated maternal mortality ratio was 556 per 100,000 live births [2], meaning that for every 1000 live births about 5 women died of pregnancy-related causes, amounting to roughly 8000 maternal deaths per year. Tanzania is therefore among the countries in Africa with the highest maternal mortality.
Low male involvement in maternal services utilization in low-resource countries has been cited as one of the factors contributing to their high maternal mortality [3]. Male involvement in maternal services utilization has been described as a practice of social and behavioral change requiring men to take more responsibility in maternal services utilization, with the focus of ensuring women's and children's health [4].
There are complex behavioral and cultural factors influencing male partners' involvement in the care of their expecting wives/partners in Tanzania [5]. The evidence indicates that efforts that embrace male partners and uphold gender-equitable relationships between men and women are more effective in producing behavior change than narrowly focused interventions [6].
The practice of male involvement in developing countries, including Tanzania, remains unacceptably low [7,8,9]. A previous study by Sokoya et al. [10] reported that although both men and women support male involvement during pregnancy and childbirth, surprisingly few men were involved in maternal services utilization. This low involvement could be rooted in cultural gender roles in which pregnancy care and childbirth are believed to be women's responsibility [3], while men's responsibility is to provide financial support [5].
Gender roles and responsibilities matter in actual male involvement in maternal services utilization in low-resource settings, including Tanzania [5]. It has been the norm in rural settings that pregnancy care, childbirth, and post-delivery care are solely the responsibility of women [5], with the male partner's responsibility being to provide financial support [5]. Low-resource settings are now struggling to shift maternal services delivery from addressing the pregnant woman alone to addressing the couple. If pregnant women understand and view positively the involvement of their male partners in maternal services utilization, the state of male involvement will improve dramatically.
Likewise, pregnant women's negative attitudes towards male involvement [8] are among the barriers to male involvement. These negative attitudes stem from three aspects: the perception that pregnancy and childbirth are the responsibility of women [5, 8]; the avoidance of negative stereotyping [8]; and the fear that male involvement may decrease men's superior power and leave them feeling as insecure as women [8].
Studies have also reported reproductive health programmes that are unaccommodating to women without partners as a contributing factor to low male involvement in maternal services utilization [9]. This means the effort to bring male partners into maternal health services utilization must go hand in hand with creating male-centered services that go beyond physical presence: although the center of care is the pregnant woman, a man should feel involved, for example by having his vital signs taken or receiving health education intended specifically for him. Tanzania is among the countries with low male involvement in maternal health services, especially in rural communities [10]. There is a direct relationship between male involvement and cultural beliefs, meaning that societal perceptions and beliefs about male involvement do affect it [11]. Where cultural beliefs disapprove of the involvement of male partners in maternal services utilization, male involvement remains low despite educational interventions and mobilization.
Male partners may be willing to learn their roles in maternal services utilization, but the existing perception that pregnancy care is solely the responsibility of women may act as a barrier to their involvement [12, 13]. This study therefore examined the attitudes, perceived subjective norms, and perceived behavior control towards male involvement in maternal services utilization among pregnant women.
According to the Theory of Planned Behavior, behavioral intention is influenced by three predictors: attitude, subjective norms, and perceived behavior control [14]. Attitude is shaped by individual beliefs and the evaluation of behavioral outcomes. Perceived subjective norms reflect the way a pregnant woman perceives whether her society approves or disapproves of male involvement in maternal services utilization: if she perceives that her society approves of her being accompanied by her male partner, she will act in favor of male involvement, and if she perceives disapproval, she will act accordingly. Perceived behavior control is influenced by control beliefs and perceived power.
Therefore, there was a need to determine pregnant women's attitudes, perceived subjective norms, and perceived behavior control towards male involvement in maternal services utilization. The study also went further to explore factors that are associated with attitudes, subjective norms, and perceived behavior control towards male involvement in maternal services utilization.
Study design and setting
This was a community-based cross-sectional study conducted in Rukwa Region from 1st June to 30th October 2017 among pregnant women from forty-five villages. According to the 2012 national census, Rukwa had a population of 1,004,539 people: 487,311 males and 517,228 females. The region has the lowest mean age at marriage (men marry at 23.3 years and women at 19.9 years) and a fertility rate of 7.3 [15].
Sampling method and sample size
Sampling technique
Rukwa region has four administrative districts. Two districts (Sumbawanga and Kalambo) were purposively selected from the four because of their high proportion of home births assisted by unskilled birth attendants [16]. A three-stage sampling procedure was used to obtain study participants. In the first stage, simple random sampling was used to obtain five wards from the 12 wards of Sumbawanga district and ten wards from the 17 wards of Kalambo district. In the second stage, all villages in the selected wards were listed separately for each district, and simple random sampling using the lottery method was used to select 15 villages from Sumbawanga rural district and thirty villages from Kalambo district. Systematic sampling was used in the third stage: households with a pregnant woman of 24 weeks gestation or less living with a male partner were systematically selected. The first household was randomly selected, and the female partner was assessed for signs and symptoms of pregnancy. A female partner who had missed her period for 2 months was asked to take a pregnancy test, and those who tested positive and consented to participate were enrolled in the study. If a selected household had no eligible participant, it was skipped and the researchers entered the next household in the predetermined direction.
Sample size calculation
The sample size was calculated using the following formula [17].
$$ \mathrm{n}=\frac{{\left\{{\mathrm{Z}}_{\upalpha}\sqrt{\left[{\uppi}_0\left(1-{\uppi}_0\right)\right]}+{\mathrm{Z}}_{\upbeta}\sqrt{\left[{\uppi}_1\left(1-{\uppi}_1\right)\right]}\right\}}^2}{{\left({\uppi}_1-{\uppi}_0\right)}^2} $$
n = required sample size.
Zα = standard normal deviate (1.96) at the 95% confidence level for this study.
Zβ = standard normal deviate (0.84) with the power of demonstrating a statistically significant difference before and after the intervention between the two groups at 90%.
π0 = proportion at pre-intervention (use of skilled delivery in Rukwa region, 30.1%) [16].
π1 = proportion after intervention (proportion of families which would access a skilled birth attendant, 51%) [16].
$$ \mathrm{n}=\frac{{\left\{1.96\sqrt{\left[0.301\left(1-0.301\right)\right]}+0.84\sqrt{\left[0.51\left(1-0.51\right)\right]}\right\}}^2}{{\left(0.6-0.51\right)}^2} $$
$$ \mathrm{n}=162\ \mathrm{couples}+10\%=180 $$
The required sample size in the intervention group was therefore 180 pregnant women.
With an intervention-to-control ratio of 1:2, the sample size in the control group was 360 pregnant women. The total sample size was therefore 546 pregnant women.
Data were collected using interviewer-administered questionnaires. The Theory of Planned Behavior questionnaire guide was used to inform the development of the questionnaire [8]. The questionnaire was translated into Swahili and pretested before actual administration. Four research assistants were recruited, trained, and participated in data collection. The tool had two parts: socio-demographic characteristics, and a Likert scale on which respondents could strongly agree, agree, be neutral, disagree, or strongly disagree. The Likert scale was subdivided into three subparts: i) attitudes towards male involvement; ii) perceived subjective norms towards male involvement; and iii) perceived behavior control towards male involvement in maternal services utilization.
Attitude towards male involvement had five Likert-scale statements: if my husband participates in earmarking a skilled birth attendant, he is doing a good and beneficial thing; if my husband accompanies me to antenatal clinics, he is doing a good and beneficial thing; if my husband tests for HIV with me during pregnancy, he is doing a good and beneficial thing; if my husband accompanies me during childbirth, he is doing a good and beneficial thing; and if my husband accompanies me for postnatal checkups, he is doing a good and beneficial thing. The Likert-scale statements measuring perceived subjective norms towards male involvement were: people who are important to me believe my husband should participate in earmarking a skilled birth attendant; people who are important to me believe my husband should escort me to antenatal clinics; people who are important to me believe my husband has to test for HIV with me during antenatal visits; people who are important to me believe my husband has to accompany me during childbirth; and people who are important to me believe my husband has to escort me to postnatal checkups. Perceived behavior control was measured using the following Likert-scale statements: for me, having my husband participate in earmarking a skilled birth attendant is trouble-free; for me, having my husband escort me to antenatal clinics is trouble-free; for me, having my husband test for HIV/AIDS with me during antenatal visits is trouble-free; for me, having my husband accompany me during labor and childbirth is trouble-free; and for me, having my husband escort me for a postnatal checkup is trouble-free.
The collected data were verified for integrity, then coded and entered into a computer using IBM SPSS version 23. Descriptive statistics were used to generate frequency distributions, and cross-tabulations were used to describe the characteristics of the study participants. Factor analysis was performed to measure attitude, subjective norms, and perceived behavior control. Normality was tested and the mean score established; a regression score above the mean was termed positive and one below the mean negative (Table 1). Logistic regression was used to determine the factors influencing attitude, perceived subjective norms, and perceived behavior control towards male involvement in maternal services utilization.
Table 1 Factor analysis
There were 25 responses from the five questions formulated for each predictor of intention based on the Theory of Planned Behavior. The responses were subjected to factor analysis; 15 statements for attitude and perceived subjective norms, and 16 for perceived behavior control, showed sampling adequacy to test the three predictors of intention.
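A rough sketch of this pipeline (factor scores, dichotomization at the mean, and adjusted odds ratios with 95% confidence intervals) follows. The paper used SPSS, so the Python libraries, the single-factor setting, and the file and column names are all our assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("survey.csv")               # hypothetical data file
attitude_items = [c for c in df.columns if c.startswith("att_")]

# Factor score for the attitude construct, dichotomized at the mean.
score = FactorAnalysis(n_components=1).fit_transform(df[attitude_items])
df["attitude_pos"] = (score.ravel() > score.mean()).astype(int)

# Adjusted odds ratios from logistic regression on candidate predictors
# (assumed to be numerically encoded columns).
X = sm.add_constant(df[["age_at_marriage", "education", "income"]])
fit = sm.Logit(df["attitude_pos"], X).fit()
aor = np.exp(fit.params)                     # adjusted odds ratios
ci = np.exp(fit.conf_int())                  # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```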
Validity and reliability
To ensure the validity of the tool, a pilot study was conducted to assess the accuracy of the data collection tools. Cronbach's alpha was computed to establish the reliability of the tool: alpha for attitude towards male involvement was 0.947, for perceived subjective norms 0.948, and for perceived behavior control 0.938.
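For reference, Cronbach's alpha can be computed directly from the item-score matrix; a minimal sketch with toy numbers (the variable names are ours):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Example: 5 attitude items from 4 respondents (toy numbers)
toy = np.array([[5, 4, 5, 5, 4],
                [2, 2, 1, 2, 2],
                [4, 5, 4, 4, 5],
                [3, 3, 3, 2, 3]])
print(round(cronbach_alpha(toy), 3))
```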
The study enrolled 546 pregnant women, a response rate of 100%. The mean age was 25.57 years (SD = 6.810). The majority of the pregnant women were married (390, 71.4%), in monogamous unions (469, 85.9%), living on less than one dollar per day (382, 70.0%), and receiving their basic obstetric care from dispensaries (452, 82.8%). Ninety-five percent of the respondents had attained primary-level education or less (Table 2).
Table 2 Socio-Demographic Characteristics of Respondents (n = 546)
Predictors of attitudes, subjective norms, and perceived behavior control towards male involvement in maternal services utilization
Predictors of attitude towards male involvement
The variables showing a significant relationship with attitudes towards male involvement in maternal services utilization were age at marriage (p < 0.001), education status (p < 0.001), ethnic group (p < 0.001), economic status (p < 0.05), and owning a mobile phone (p < 0.001) (Table 3).
Table 3 The relationship between pregnant women's characteristic and attitudes towards male involvement in maternal services utilization
After adjusting for confounders, the factors influencing attitude towards male involvement in maternal services utilization among pregnant women were age at marriage [19 to 24 years (AOR = 1.568, 95% CI = 1.044–2.353, p < 0.05); more than 24 years (AOR = 2.15, 95% CI = 1.150–1.159, p < 0.05)], education status [primary school (AOR = 1.713, 95% CI = 1.137–2.58, p = 0.01)], ethnic group [Mambwe (AOR = 2.743, 95% CI = 1.726–4.359, p < 0.001); others (AOR = 0.425, 95% CI = 0.235–0.768, p < 0.01)], and economic status [earning at least one dollar per day (AOR = 1.547, 95% CI = 1.026–2.332, p < 0.05)] (Table 4).
Table 4 Predictors of attitude towards male involvement among pregnant women and their male partners
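As a reminder of how these adjusted odds ratios arise (standard logistic-regression algebra, not a derivation given in the paper): for a fitted coefficient $\beta$ with standard error $SE$, $AOR = e^{\beta}$ and the 95% confidence interval is $\left(e^{\beta - 1.96\,SE},\, e^{\beta + 1.96\,SE}\right)$; for example, the reported AOR of 1.568 corresponds to $\beta = \ln 1.568 \approx 0.45$.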
Predictor of subjective norms towards male involvement in maternal services utilization
The variables which showed a significant relationship with subjective norms towards male involvement in maternal services utilization among pregnant women were age at marriage (p < 0.001), education status (p < 0.01), ethnic group (p < 0.001), owning a mobile phone (p = 0.001), and having the intention to be accompanied by a male partner (p < 0.001) (Table 5).
Table 5 The relationship between pregnant women's characteristic and subjective norms towards male involvement
After adjusting for the confounders, the predictors of subjective norms towards male involvement among pregnant women were age at marriage [19 to 24 years (AOR = 1.447, 95% CI 0.970–2.159, p < 0.05); more than 24 years (AOR = 2.331, 95% CI 1.261–4.308, p < 0.01)] and ethnic group [Mambwe (AOR = 2.278, 95% CI 1.444–3.596, p < 0.001)] (Table 6).
Table 6 Predictors of subjective norms towards male involvement among pregnant women and their male partners
Predictor of perceived behavior control towards male involvement in maternal services utilization
Variables which showed a significant relationship with perceived behavior control among pregnant women were age at marriage (p < 0.001), education status (p = 0.01), ethnic group (p = 0.001), owning a mobile phone (p = 0.001), and having the intention to be accompanied during childbirth (p < 0.001) (Table 7).
Table 7 The relationship between pregnant women's characteristic and perceived behavior control towards male involvement
After adjusting for confounders, the factors associated with confidence to involve their male partners in maternal services utilization were age at marriage [more than 24 years (AOR = 2.331, 95% CI 1.261–4.308, p < 0.01)], ethnic group [Mambwe (AOR = 2.278, 95% CI 1.444–3.596, p < 0.0001)], and having the intention to be accompanied by their male partners (AOR = 1.827, 95% CI 1.171–2.849, p < 0.01) (Table 8).
Table 8 Predictors of perceived behavior control among pregnant women and their male partners
Male involvement in maternal services utilization has been recognized as an effective strategy for improving birth outcomes [18]. Many studies have examined male involvement and its influencing factors, focusing on the men themselves [8, 9, 19]. Pregnant women's attitudes, subjective norms, and perceived behavior control towards male involvement in maternal health services are an important behavioral aspect which, if well addressed, has the potential to improve male involvement. A female partner may act as a barrier to bringing men into pregnancy care and childbirth; her attitude, perceived subjective norms, and perceived behavior control matter greatly in her intention to have her male partner with her in maternal services utilization [5].
The study found that the majority of pregnant women had negative attitudes, perceived subjective norms, and perceived behavior control towards male involvement in maternal services utilization. Age at marriage predicted all three domains of intention: attitude, perceived subjective norms, and perceived behavior control. Attitude towards male involvement in maternal services utilization was also influenced by the pregnant woman's level of education and her economic status. In addition to age at marriage, perceived behavior control was also influenced by the pregnant woman's intention to be accompanied by her male partner. These findings are discussed below.
The high proportion of pregnant women with negative attitudes towards the involvement of male partners in maternal health services utilization could be rooted in cultural beliefs, traditions, and customs [5, 19]. Traditions and customs in most African cultures assign the role of pregnancy care and childbirth to women (the pregnant woman, her mother, and her mother-in-law). In accordance with the Theory of Planned Behavior, the attitude towards a certain behavior can be influenced by the beliefs an individual holds about the behavior and the way the individual evaluates its outcome [13]. When pregnant women judge male partners' involvement in maternal health services utilization to contribute nothing to the desired outcome, their attitude disregards male involvement. Innovative interventions are highly recommended in this low-resource setting to sensitize pregnant women to the benefits of male involvement in maternal health services utilization.
Likewise, the majority of pregnant women had negative perceived subjective norms towards male involvement in maternal health services utilization. This means that the majority perceived that their community disapproved of accompaniment by their male partners in maternal health services. This perception also stems from community beliefs and traditional gender roles [5, 19]. It signals that merely insisting that pregnant women come with their male partners during maternal services utilization, without addressing their norms, may delay male involvement in our context; societal pressure may act as a barrier to male involvement in maternal services utilization. Innovative interventions are recommended to sensitize the community to the benefits of male involvement in maternal services utilization.
Similarly, the majority of pregnant women had negative perceived behavior control towards male involvement in the utilization of maternal health services; they perceived that they could not bring their male partners into maternal health services utilization. According to the Theory of Planned Behavior, perceived behavior control is influenced by control beliefs and perceived power [13]. Perceived behavior control could be affected by the low socio-economic status of the study community, where a male partner has to work to earn money for family sustenance.
The study found that the factors which influence pregnant women's attitude towards male involvement were age at marriage, education status, and economic status. Pregnant women who married at an older age were more likely to have a positive attitude towards male involvement in maternal health services than those who married at a younger age. A possible reason for this finding is that women who married at a younger age did not have the opportunity for exposure to formal education compared with those who married at an older age. Exposure to formal education can dilute a woman's cultural beliefs, which may influence power relations between men and women [20].
Pregnant women who had primary education were 1.7 times more likely to have a positive attitude towards male involvement than pregnant women with no formal education. This finding is in line with a previous study which reported a direct relationship between education and male involvement in maternal services utilization [20].
The study further noted that pregnant women who earned at least one dollar per day were 1.5 times more likely to have a positive attitude towards male involvement than pregnant women who earned less than one dollar per day. This could be because poorer women are more concerned about their husbands engaging in paid work to sustain the household than about their participating in pregnancy care. A similar finding was reported by a previous study, which found that family earnings influence male involvement [23].
Age at marriage also predicted the perceived subjective norms towards male involvement in maternal services utilization. Women who married at an older age were more likely to perceive societal approval of male involvement than women who married young. This could be because women who marry at a younger age are less likely to have been exposed to the cultural practices of other societies, having grown up within the same culture, whereas those who married at an older age may have been exposed to both education and travel to different places.
Age at marriage also influenced perceived behavior control towards male involvement in maternal services utilization: pregnant women who married at an older age perceived themselves as able to be accompanied by their male partners for maternal services. This could be because pregnant women who married at a younger age have a stronger cultural attachment than those who married later.
It was also found that pregnant women with the intention to be accompanied by their male partners were more likely to have positive perceived behavior control than those without such an intention.
This study used baseline data from an intervention study in which control and intervention groups were compared. The two samples were treated as one after comparison of the outcome variables (attitudes, subjective norms, and perceived behavior control) found no significant difference between the two groups. Intervention participants were matched with controls at a ratio of one to two. Even though random sampling was employed in both cases, our analysis may have suffered bias from differences in sampling probabilities between the two groups; one group may be overrepresented relative to the other, which may limit the generalizability of the findings. To minimize the effect of this limitation, participants were matched (by 5-year age groups and parity). The study also collected a robust set of background information (ethnicity, economic status, exposure to media, education level, health insurance coverage, religion) in the data collection tool, which was included in the analysis to adjust for confounders.
Both groups came from rural districts of the Rukwa region. Because rural Rukwa districts share similar cultural and socio-economic characteristics, our findings can be generalized within rural Rukwa and to other rural settings in Tanzania with similar characteristics.
The study indicated that women who married at an older age are more likely to have positive attitudes, subjective norms, and perceived behavior control towards male involvement in maternal services than younger pregnant women. Pregnant women with primary education who earned more than a dollar per day were more likely to have positive attitudes towards male involvement than their counterparts. The intention to attend maternal services with a male partner significantly and positively influenced perceived behavior control. The study recommends a community-based interventional study addressing community beliefs and traditional gender roles in maternal services utilization to improve pregnant women's attitudes, subjective norms, and perceived behavior control towards male involvement. Behavior-theory-integrated interventions addressing deep-seated predictors of male involvement and health-seeking behavior have not been well explored in the existing literature. To understand and address such factors, innovative high-impact interventions are needed that utilize theory to address the modifiable predictors of the intention to engage in a behavior (attitude, subjective norms, and perceived behavior control). The findings from such studies can be useful in shaping antenatal care interventions such as male involvement in maternal services utilization.
Data set (supplementary file 1) and the questionnaire (supplementary file 2) are uploaded as supplementary material.
AIDS:
Acquired Immunodeficiency Syndrome
HIV:
Human Immunodeficiency Virus
MoHCDGEC:
Ministry of Health, Community Development, Gender, Elderly and Children
NBS:
National Bureau of Statistics
STIs:
Sexually Transmitted Infections
TDHS-MIS:
Tanzania Demographic and Health Survey and Malaria Indicator Survey
Alkema L, Chou D, Hogan D, Zhang S, Moller AB, Gemmill A, Ma Fat D, Boerma T, Temmerman M, et al. Global, regional, and national levels and trends in maternal mortality between 1990 and 2015, with scenario-based projections to 2030: a systematic analysis by the UN Maternal Mortality Estimation Inter-Agency Group. Lancet. 2016;387(10017):462–74. Available from: http://www.sciencedirect.com/science/article/pii/S0140673615008387.
MoHCDGEC. Tanzania Demographic and Health Survey and Malaria Indicator Survey [Internet]. Dar es Salaam, Tanzania, and Rockville, Maryland, USA: Ministry of Health, Community Development, Gender, Elderly and Children (MoHCDGEC); 2015. Available from: https://www.dhsprogram.com/pubs/pdf/FR321/FR321.pdf.
Kakaire O, Kaye DK, Osinde MO. Male involvement in birth preparedness and complication readiness for emergency obstetric referrals in rural Uganda. Reprod Health [Internet]. 2011;8(1):12 Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3118172&tool=pmcentrez&rendertype=abstract.
Craymah JP, Oppong RK, Tuoyire DA. Male involvement in maternal health Care at Anomabo, central region, Ghana. Int J Reprod Med. 2017;2017:1–8.
Moshi F, Nyamhanga T. Understanding the preference for homebirth; an exploration of key barriers to facility delivery in rural Tanzania. Reprod Health [Internet]. 2017;14(1):132 Available from: http://reproductive-health-journal.biomedcentral.com/articles/10.1186/s12978-017-0397-z.
Barker G, Ricardo C, Nascimento M, Olukoya A, Santos C, et al. Questioning gender norms with men to improve health outcomes: Evidence of impact. 2017;1692(May).
August F, Pembe AB, Mpembeni R, Axemo P, Darj E. Men's knowledge of obstetric danger signs, birth preparedness and complication readiness in rural Tanzania. PLoS One. 2015;10(5):1–12.
Iliyasu Z, Abubakar I, Galadanci H, Aliyu M. Birth preparedness, complication readiness and fathers' participation in maternity care in a northern Nigerian community. Afr J Reprod Health. 2010;14(1):21–32.
Mosunmola RNS, Adekunbi RNF, Foluso RNO. Women's perception of husbands' support during pregnancy, labour and delivery. IOSR J Nurs Heal Sci. 2014;3(3):45–50.
Ganle JK, Dery I, Manu AA, Obeng B. 'If I go with him, I can't talk with other women': understanding women's resistance to, and acceptance of, men's involvement in maternal and child healthcare in northern Ghana. Soc Sci Med [Internet]. 2016;166:195–204 Available from: http://dx.doi.org/10.1016/j.socscimed.2016.08.030.
Nyondo-Mipando AL, Chimwaza AF, Muula AS. "He does not have to wait under a tree": perceptions of men, women and health care workers on male partner involvement in prevention of mother to child transmission of human immunodeficiency virus services in Malawi. BMC Health Serv Res. 2018;18(1):1–8.
Netemeyer R, Van Ryn M, Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process [Internet]. 1991;50(2):179–211 Available from: http://linkinghub.elsevier.com/retrieve/pii/074959789190020T.
National Bureau of Statistics. Fertility and Nuptiality report 2015 [Internet], vol. IV; 2015. Available from: http://www.nbs.go.tz/nbs/takwimu/census2012/Fertility and Nuptiality Monograph.pdf.
National Bureau of Statistics (NBS). Tanzania. 2010; Available from: https://www.dhsprogram.com/pubs/pdf/FR243/FR243[24June2011].pdf.
West CIT, Briggs NCT. Effectiveness of trained community volunteers in improving knowledge and management of childhood malaria in a rural area of Rivers State, Nigeria. 2015;18(5).
Ajzen I, Fishbein M. Constructing a theory of planned behavior questionnaire. Predict Chang Behav Reason Action Approach [Internet]. 2006;(January):1–7 Available from: http://people.umass.edu/%7B~%7Daizen/pdf/tpb.measurement.pdf.
Yargawa J, Leonardi-Bee J. Male involvement and maternal health outcomes: systematic review and meta-analysis. J Epidemiol Community Health [Internet]. 2015;69(6):604–12 Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4453485&tool=pmcentrez&rendertype=abstract.
Olum OC. Cultural values, health education and male involvement in antenatal care in Gulu District; 2016.
Thapa DK, Niehof A. Women's autonomy and husbands' involvement in maternal health care in Nepal. Soc Sci Med [Internet]. 2013;93:1–10 Available from: http://dx.doi.org/10.1016/j.socscimed.2013.06.003.
We gratefully acknowledge the University of Dodoma for financial support. We are thankful to the administration of the Rukwa Region for allowing us to conduct the study and the acknowledgments are extended to all study participants for their participation in this study.
The study was not funded.
Department of Nursing and Midwifery, College of Health Sciences of the University of Dodoma, P.O. Box 259, Dodoma, Tanzania
Fabiola V. Moshi
Department of Public Health, College of Health Sciences of the University of Dodoma, P.O. Box 259, Dodoma, Tanzania
Stephen M. Kibusi
Department of Biomedical Sciences, College of Health Sciences of the University of Dodoma, P.O. Box 259, Dodoma, Tanzania
Flora Fabian
FM led the conception, design, acquisition of data, analysis, interpretation of data, and drafting of the manuscript. SK and FF guided the conception, design, and acquisition of data, analysis, interpretation, and critically revising the manuscript for intellectual content and have given final approval for the version to be published. All authors read and approved the final manuscript.
Correspondence to Fabiola V. Moshi.
Ethical clearance to conduct this study was given by the Ethical Review Committee of the University of Dodoma in Dodoma, Tanzania, and a letter of permission was obtained from the Rukwa Regional Administration. Both written and verbal consent were obtained from study participants after they were given an explanation of the study objectives and procedures and were assured of their right to refuse to participate in the study at any time.
The authors declare that there is no competing interest.
Moshi, F.V., Kibusi, S.M. & Fabian, F. Exploring factors influencing pregnant Women's attitudes, perceived subjective norms and perceived behavior control towards male involvement in maternal services utilization: a baseline findings from a community based interventional study from Rukwa, rural Tanzania. BMC Pregnancy Childbirth 20, 634 (2020). https://doi.org/10.1186/s12884-020-03321-z
Subjective norms
Perceived behavior control
Male involvement
Notation and Terminology
This page describes the conventions that are used for the entries in the database.
Sets are denoted by upper-case roman letters, usually $A, B, C,\ldots, U, V, W$.
$\mathbb{N}=$ the set of natural numbers $=\{0,1,2,\ldots\}$,
$\mathbb{Z}=$ the set of integers $=\mathbb{N}\cup\{-n:n\in\mathbb{N}\}$,
$\mathbb{Q}=$ the set of rationals $=\{m/n:m,n\in\mathbb{Z}, n>0\}$,
$\mathbb{R}=$ the set of real numbers,
$\mathbb{C}=$ the set of complex numbers $=\{x+iy:x,y\in\mathbb{R}\}$.
$\mathcal P(A)=\{S:S\subseteq A\}$, the power set of $A$.
$A^n=\{\langle a_0,\ldots,a_{n-1}\rangle:a_0,\ldots,a_{n-1}\in A\}$, the set of all $n$-tuples of elements of $A$.
Elements of sets are denoted by lower-case roman letters, usually $a, b, c, d, e$.
Variables that range over elements are denoted by lower-case roman letters, usually $x, y, z, u, v, w, x_0, x_1, \ldots$.
Integer variables are usually denoted by $i,j,k,m,n$.
Variables that range over sets are denoted by upper-case roman letters, usually $X, Y, Z, X_0, X_1, \ldots$
Functions are denoted by lower-case roman letters, usually $f, g, h$.
A (first-order) operation on a set $A$ is a function from $A^n$ to $A$, where $n\ge 0$ is the arity of the operation. If $n=0$ then the operation is called a constant.
A (first-order) relation on a set $A$ is a subset of $A^n$, where $n>0$ is the arity of the relation.
A second-order operation on a set $A$ is a function from $\mathcal P(A)^n$ to $A$.
A second-order relation on a set $A$ is a subset of $\mathcal P(A)^n$.
A mathematical structure is a tuple of the form $\mathbf{A}=\langle A,\ldots\rangle$ where $A$ is a set and $\ldots$ specifies a list of (possibly higher-order) operations and relations on $A$.
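For example (an illustrative instance added here, not part of the original conventions list): a group can be presented as the structure $\mathbf{A}=\langle A,\cdot,{}^{-1},e\rangle$, where $\cdot$ is a binary operation (arity 2), ${}^{-1}$ is a unary operation (arity 1), and $e$ is a constant (arity 0), while a partially ordered set $\langle A,\le\rangle$ carries a single binary relation $\le\ \subseteq A^2$.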
2021 Vol.4(3)
2021, 4(3): 0-0.
COVER[J]. China Geology.
Editorial Committee of China Geology
Editorial Committee of China Geology[J]. China Geology, 2021, 4(3): 1-2.
Characteristics of groundwater in Northeast Qinghai-Tibet Plateau and its response to climate change and human activities: A case study of Delingha, Qaidam Basin
Wei Zhao, Yan-zhu Lin, Peng-peng Zhou, Guang-cai Wang, Xue-ya Dang, Xiao-fan Gu
2021, 4(3): 377-388. doi: 10.31035/cg2021053
Delingha is located on the northeastern margin of the Qaidam Basin. The Bayin River alluvial-proluvial fan is the main aquifer of Delingha, in which groundwater generally flows from north to south. The hydrochemical results showed that two different hydrochemical evolution paths formed along the southeast and southwest directions, respectively: Cl-Na type groundwater formed in front of Gahai Lake, and SO4·HCO3-Na·Ca type groundwater formed in front of Keluke Lake. The results of deuterium (D) and 18O revealed that the groundwater mainly originated from the continuous accumulation of precipitation during geological history under cold and humid climatic conditions. In addition, 14C results indicated that the groundwater age is more than 1140 years, implying relatively poor renewal capability of the regional groundwater. Moreover, our numerical modeling results showed that the regional groundwater level will continue to rise under warm and humid climatic conditions.
Wei Zhao, Yan-zhu Lin, Peng-peng Zhou, Guang-cai Wang, Xue-ya Dang and Xiao-fan Gu. Characteristics of groundwater in Northeast Qinghai-Tibet Plateau and its response to climate change and human activities: A case study of Delingha, Qaidam Basin[J]. China Geology, 2021, 4(3): 377-388. doi: 10.31035/cg2021053.
Response of glacier area variation to climate change in the Kaidu-Kongque river basin, Southern Tianshan Mountains during the last 20 years
Lu-chen Wang, Kun Yu, Liang Chang, Jun Zhang, Tao Tang, Li-he Yin, Xiao-fan Gu, Jia-qiu Dong, Ying Li, Jun Jiang, Bing-chao Yang, Qian Wang
Glaciers are crucial water resources for arid inland rivers in Northwest China. In recent decades, glaciers have largely been shrinking under the climate-warming scenario, thereby exerting tremendous influences on regional water resources. Understanding watershed-scale glacier changes under changing climatic conditions is essential for the sustainable utilization of regional water resources and for preventing and mitigating glacier-related disasters. This study maps the current (2020) distribution of glacier boundaries across the Kaidu-Kongque river basin, on the south slope of the Tianshan Mountains, and monitors the spatial evolution of glaciers over five time periods from 2000 to 2020 through a thresholded band-ratio approach, using 25 Landsat images at 30 m resolution. In addition, this study attempts to understand the role of climate characteristics in the variable response of glacier area. The results show that the total area of glaciers was 398.21 km2 in 2020. The glaciers retreated by about 1.17 km2/a (0.26%/a) from 2000 to 2020. They shrank at a significantly rapid rate between 2000 and 2005, at a slow rate from 2005 to 2015, and at an accelerated rate during 2015–2020. The meteorological data show slight increasing trends of mean annual temperature (0.02°C/a) and annual precipitation (2.07 mm/a). The correlation analysis demonstrates that temperature has a more significant correlation with glacier recession than precipitation. There is a temporal hysteresis in the response of glacier change to climate change. The increasing trend of summer temperature proves to be the driving force behind glacier recession in the Kaidu-Kongque basin during the recent 20 years.
Lu-chen Wang, Kun Yu, Liang Chang, Jun Zhang, Tao Tang, Li-he Yin, Xiao-fan Gu, Jia-qiu Dong, Ying Li, Jun Jiang, Bing-chao Yang and Qian Wang. Response of glacier area variation to climate change in the Kaidu-Kongque river basin, Southern Tianshan Mountains during the last 20 years[J]. China Geology, 2021, 4(3): 389-401. doi: 10.31035/cg2021055.
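A minimal sketch of the thresholded band-ratio idea mentioned in the abstract above (illustrative only: the red/SWIR band choice, the threshold of 2.0, and the array names are assumptions, not the paper's calibrated values):

# Thresholded band-ratio glacier mask from Landsat-like reflectance arrays.
# Glacier ice is bright in the red band and dark in the shortwave infrared,
# so a high red/SWIR ratio flags glacier pixels.
import numpy as np

def glacier_mask(red: np.ndarray, swir1: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    ratio = red / np.clip(swir1, 1e-6, None)  # guard against division by zero
    return ratio > threshold

def glacier_area_km2(mask: np.ndarray, pixel_size_m: float = 30.0) -> float:
    # Landsat pixels are 30 m x 30 m, i.e. 900 m^2 each.
    return float(mask.sum()) * pixel_size_m ** 2 / 1e6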
Relationship of underground water level and climate in Northwest China's inland basins under the global climate change: Taking the Golmud River Catchment as an example
Jia-wei Wang, Jin-ting Huang, Tuo Fang, Ge Song, Fang-qiang Sun
To identify the response of groundwater level variation to global climate change in Northwest China's inland basins, the Golmud River Catchment was chosen as a case study. Time series analysis and correlation analysis were adopted to investigate the variation of groundwater level influenced by global climate change from 1977 to 2017. Results show that the temperature in the Golmud River Catchment rose 0.57°C every 10 years. It is highly positively correlated with global temperature, with a correlation coefficient of 0.87. The frequency and intensity of extreme precipitation both increased. Generally, groundwater levels rose from 1977 to 2017 in all phreatic and confined aquifers, and the fluctuation became more violent. Most importantly, extreme precipitation caused groundwater levels to rise sharply, which induced urban waterlogging. However, no direct evidence shows that normal precipitation triggered groundwater level rise: the correlation coefficients between precipitation data from the Golmud meteorological station located in the Gobi Desert and groundwater level data from five observation wells are 0.13, 0.02, −0.11, 0.04, and −0.03, respectively. This phenomenon could be explained by the main recharge source of groundwater being river leakage in the alluvial-pluvial Gobi plain, owing to the high total head of the river water and the good hydraulic conductivity of the vadose zone. Data analysis shows that glacier melting intensified as local temperature increased; as a result, runoff caused groundwater levels to rise from 1977 to 2017. The correlation coefficients between the data of two groundwater observation wells and the runoff of the Golmud River are 0.80 and 0.68. The research results will contribute to handling the negative effects of climate change on groundwater in Northwestern China.
Jia-wei Wang, Jin-ting Huang, Tuo Fang, Ge Song and Fang-qiang Sun. Relationship of underground water level and climate in Northwest China's inland basins under the global climate change: Taking the Golmud River Catchment as an example[J]. China Geology, 2021, 4(3): 402-409. doi: 10.31035/cg2021064.
Responses of phreatophyte transpiration to falling water table in hyper-arid and arid regions, Northwest China
Li-he Yin, Dan-dan Xu, Wu-hui Jia, Xin-xin Zhang, Jun Zhang
Quantitative assessment of the impact of groundwater depletion on phreatophytes in (hyper-)arid regions is key to sustainable groundwater management. However, a parsimonious model for predicting the response of phreatophytes to a decrease of the water table has been lacking. A variably saturated flow model, HYDRUS-1D, was used to numerically assess the influences of depth to the water table (DWT) and mean annual precipitation (MAP) on the transpiration of groundwater-dependent vegetation in (hyper-)arid regions of northwest China. An exponential relationship is found for the normalized transpiration (the ratio of transpiration at a certain DWT to transpiration at 1 m depth, Ta*) with increasing DWT, while a positive linear relationship is identified between Ta* and annual precipitation. Sensitivity analysis shows that the model is insensitive to parameters such as saturated soil hydraulic conductivity and the water stress parameters, as indicated by insignificant variation (less than 20% in most cases) under ±50% changes in these parameters. Based on these two relationships, a universal model has been developed to predict the response of phreatophyte transpiration to groundwater drawdown in (hyper-)arid regions using MAP only. The Ta* estimated from the model is reasonable compared with published measured values.
Li-he Yin, Dan-dan Xu, Wu-hui Jia, Xin-xin Zhang and Jun Zhang. Responses of phreatophyte transpiration to falling water table in hyper-arid and arid regions, Northwest China[J]. China Geology, 2021, 4(3): 410-420. doi: 10.31035/cg2021052.
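The two relationships reported in the abstract above can be summarized schematically as $T_a^{*} = a\,e^{-b\,\mathrm{DWT}}$ for increasing depth to the water table and $T_a^{*} = c_0 + c_1\,\mathrm{MAP}$ for annual precipitation (illustrative functional forms consistent with the abstract; the coefficients $a$, $b$, $c_0$, $c_1$ are fitted constants not given here), where $T_a^{*}$ is transpiration at a given DWT normalized by transpiration at 1 m depth.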
Groundwater characteristics and climate and ecological evolution in the Badain Jaran Desert in the southwest Mongolian Plateau
Zhe Wang, Li-juan Wang, Jian-mei Shen, Zhen-long Nie, Ling-qun Meng, Le Cao, Shi-bo Wei, Xiang-feng Zeng
The Badain Jaran Desert is the third largest desert in China, covering an area of 50000 km2. It lies in Northwest China, where the arid and rainless natural environment greatly affects the climate, environment, and human living conditions. Based on the results of 1∶250000 regional hydrogeological surveys and previous research, this study systematically investigates the circulation characteristics and resource properties of the groundwater, as well as the evolution of the climate and ecological environment since the Quaternary in the Badain Jaran Desert, by means of geophysical exploration, hydrogeological drilling, hydrogeochemistry, and isotopic tracing. The results are as follows. (1) The groundwater in the Badain Jaran Desert is mainly recharged through the infiltration of local precipitation and has poor renewability. The groundwater recharge in the desert was calculated to be 1.8684×108 m3/a using the water balance method. (2) The Badain Jaran Desert has experienced four humid stages since the Quaternary, namely MIS 13-15, MIS 5, MIS 3, and the Early‒Middle Holocene, but the climate in the desert has shown a trend towards aridity overall. The average annual temperature in the Badain Jaran Desert has increased significantly in the past 50 years, by about 2.5°C, with a higher rate in the south than in the north. Meanwhile, precipitation has shown high spatial variability, and the climate has shown a warming-drying trend in the past 50 years. (3) The lakes in the hinterland of the Badain Jaran Desert continuously shrank during 1973‒2015. However, the vegetation communities maintained a highly natural distribution during 2000‒2016, and vegetation cover increased overall. Accordingly, the Badain Jaran Desert did not show any notable expansion in that period. This study deepens the understanding of groundwater circulation and of the climate and ecological evolution in the Badain Jaran Desert. It will provide a scientific basis for the rational exploitation of groundwater resources and for ecological protection and restoration in the Badain Jaran Desert.
Zhe Wang, Li-juan Wang, Jian-mei Shen, Zhen-long Nie, Ling-qun Meng, Le Cao, Shi-bo Wei and Xiang-feng Zeng. Groundwater characteristics and climate and ecological evolution in the Badain Jaran Desert in the southwest Mongolian Plateau[J]. China Geology, 2021, 4(3): 421-432. doi: 10.31035/cg2021056.
Hydrodynamic characteristics of a typical karst spring system based on time series analysis in northern China
Yi Guo, Feng Wang, Da-jun Qin, Zhan-feng Zhao, Fu-ping Gan, Bai-kun Yan, Juan Bai, Haji Muhammed
In order to study the hydrodynamic characteristics of the karst aquifers in northern China, time series analyses (correlation and spectral analysis, together with hydrograph recession analysis) were applied to Baotu Spring and Heihu Spring in the Jinan karst spring system, a typical karst spring system in northern China. Results show that the auto-correlation coefficient of the spring water level reaches the value of 0.2 after 123 days and 117 days for Baotu Spring and Heihu Spring, respectively. The regulation time obtained from the simple spectral density function over the same period is 187 days for Baotu Spring and 175 days for Heihu Spring. For individual hydrological years, the auto-correlation coefficient of the spring water level reaches 0.2 in 34–82 days, and the regulation time ranges between 40 and 59 days. The delay time between precipitation and spring water level obtained from the cross-correlation function is around 56 days for the period 2012–2019, and varies between 30 and 79 days for individual hydrological years. In addition, the spectral bands in the cross-amplitude functions and gain functions are small, at 0.02, and the values of the coherence functions are small. All these behaviors illustrate that the Jinan karst spring system has a strong memory effect, large storage capacity, and a noticeable regulation effect, and that time series analysis is a useful tool for studying the hydrodynamic characteristics of karst spring systems in northern China.
Yi Guo, Feng Wang, Da-jun Qin, Zhan-feng Zhao, Fu-ping Gan, Bai-kun Yan, Juan Bai and Haji Muhammed. Hydrodynamic characteristics of a typical karst spring system based on time series analysis in northern China[J]. China Geology, 2021, 4(3): 433-445. doi: 10.31035/cg2021049.
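A minimal sketch of the auto-correlation diagnostic used in the abstract above (the 0.2 cutoff follows the abstract; the implementation and data are illustrative assumptions):

# Memory effect of a karst spring: the lag (in days, for a daily series) at
# which the autocorrelation of the spring water level first drops below 0.2.
import numpy as np

def memory_effect(series: np.ndarray, cutoff: float = 0.2) -> int:
    x = series - series.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1
    acf = acf / acf[0]                                  # normalize so acf[0] == 1
    below = np.where(acf < cutoff)[0]
    return int(below[0]) if below.size else len(x)

# For Baotu Spring's daily water-level record, this diagnostic would return ~123.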
Mechanisms of salt rejection at the ice-liquid interface during the freezing of pore fluids in the seasonal frozen soil area
Huan Huang, Chang-fu Chen, Xiao-jie Mo, Ding-ding Wu, Yan-ming Liu, Ming-zhu Liu, Hong-han Chen
Seasonal frozen soil accounts for about 53.50% of the land area of China. Frozen soil is a complex multiphase system in which ice, water, soil, and air coexist. The distribution and migration of salts in frozen soil during soil freezing are notably different from those in unfrozen soil areas. However, little is known about the process and mechanisms of salt migration in frozen soil. This study explores the mechanisms of salt migration at the ice-liquid interface during the freezing of pore fluids through batch experiments. The results are as follows. The solute concentrations of the liquid and solid phases at the ice-liquid interface ($C_L^*$, $C_S^*$) gradually increased at the initial stage of freezing and remained approximately constant at the middle stage. As the ice-liquid interface advanced toward the system boundary, diffusion in the liquid phase was blocked but the ice phase continued rejecting salts; as a result, $C_L^*$ and $C_S^*$ rapidly increased at the final stage of freezing. The distribution characteristics of solutes in the ice and liquid phases before $C_L^*$ and $C_S^*$ became steady were mainly affected by the freezing temperature, the initial concentrations, and the particle-size distribution of the media (quartz sand and kaolin). In detail, the lower the freezing temperature and the better the particle-size distribution of the media, the higher the solute proportion in the ice phase at the initial stage of freezing. Meanwhile, increasing concentration first promoted but then inhibited the increase of solutes in the ice phase. These results provide insights of scientific significance for tackling climate change, for the environmental protection of groundwater and soil, and for the protection of infrastructure such as roads.
Huan Huang, Chang-fu Chen, Xiao-jie Mo, Ding-ding Wu, Yan-ming Liu, Ming-zhu Liu and Hong-han Chen. Mechanisms of salt rejection at the ice-liquid interface during the freezing of pore fluids in the seasonal frozen soil area[J]. China Geology, 2021, 4(3): 446-454. doi: 10.31035/cg2021059.
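One standard way to quantify the salt rejection described in the abstract above (a conventional definition from solidification theory, not a formula stated in the abstract) is the effective segregation coefficient $k_{\mathrm{eff}} = C_S^{*}/C_L^{*}$, the ratio of the solute concentration incorporated into the ice to that remaining in the liquid at the interface; $k_{\mathrm{eff}} < 1$ means the growing ice rejects salt into the residual pore fluid.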
Changes of groundwater flow field of Luanhe River Delta under the human activities and its impact on the ecological environment in the past 30 years
She-ming Chen, Fu-tian Liu, Zhuo Zhang, Qian Zhang, Wei Wang
The Luanhe River Delta is located in the center of the Circum-Bohai Sea Economic Zone. It enjoys rapid economic and social development while suffering relative water scarcity. The overexploitation of groundwater in the Luanhe River Delta in recent years has caused a continuous drop in the groundwater level and serious environmental and geological problems. This study systematically analyzes the evolution of the population, economy, and groundwater exploitation in the Luanhe River Delta and summarizes the change patterns of the groundwater flow regime in different aquifers according to previous water resource assessment data as well as the latest groundwater survey results. Through comparison of major source/sink terms and groundwater resources, the study reveals the impacts of human activities on the groundwater resources and ecological environment in the study area over the past 30 years, from 1990 to 2020. The results are as follows. Over the past 30 years, the average annual drop rate at the centers of the depression cones is 0.4 m for shallow groundwater and 1.64 m for deep groundwater in the Luanhe River Delta. The depression cones of shallow and deep groundwater in the study area cover an area of 545.32 km² and 548.79 km², respectively, each accounting for more than 10% of the total area of the Luanhe River Delta. Overexploitation of groundwater has further aggravated land subsidence; as a result, two large-scale subsidence centers have formed, with a maximum subsidence rate of up to 120 mm/a. The drop of the groundwater level has induced ecological problems in the Luanhe River Delta area, such as zero flow and water quality deterioration of rivers and the continuous shrinkage of natural wetlands and water bodies. The proportion of natural wetland area to total wetland area has decreased from 99% to 8%, and the water area from 1776 km² to 263 km². These results will provide data for groundwater overexploitation control, land subsidence prevention, and ecological restoration in plains, and will serve water resources management and national land space planning.
She-ming Chen, Fu-tian Liu, Zhuo Zhang, Qian Zhang and Wei Wang. Changes of groundwater flow field of Luanhe River Delta under the human activities and its impact on the ecological environment in the past 30 years[J]. China Geology, 2021, 4(3): 455-462. doi: 10.31035/cg2021060.
Gene abundances of AOA, AOB, and anammox controlled by groundwater chemistry of the Pearl River Delta, China
Kun Liu, Xin Luo, Jiu Jimmy Jiao, Ji-dong Gu, Ramon Aravena
Ammonia-oxidizing archaea (AOA), ammonia-oxidizing bacteria (AOB), and anaerobic ammonia-oxidation (anammox) bacteria are very important contributors to nitrogen cycling in natural environments. The functional gene abundances of these microbes are believed to be closely relevant to N-cycling in groundwater systems, especially in the Pearl River Delta (PRD) groundwater with its uniquely high intrinsic ammonia concentrations. In this research, 20 sediment samples from two sites in the PRD were collected for porewater chemistry analysis and quantification of N-cycling-related genes, including the archaeal and bacterial amoA genes and the anammox 16S ribosomal Ribonucleic Acid (rRNA) gene. Quantitative Polymerase Chain Reaction (qPCR) results showed that the gene abundances of AOA, AOB, and anammox bacteria ranged from 3.13×105 to 3.21×107, 1.83×104 to 2.74×106, and 9.27×104 to 8.96×106 copies/g in the sediment of the groundwater system, respectively. Anammox bacteria and AOA dominated in aquitards and aquifers, respectively; meanwhile, the aquitard-aquifer interfaces were shown to be ammonium-oxidizing hotspots in terms of gene numbers. Gene abundances of nitrifiers were analyzed against the geochemistry profiles. Correlations between gene numbers and environmental variables indicated that the gene abundances were impacted by hydrogeological conditions, and that microbially derived ammonium loss is dominated by AOA in the northwest PRD and by anammox bacteria in the southeast PRD.
Kun Liu, Xin Luo, Jiu Jimmy Jiao, Ji-dong Gu and Ramon Aravena. Gene abundances of AOA, AOB, and anammox controlled by groundwater chemistry of the Pearl River Delta, China[J]. China Geology, 2021, 4(3): 463-475. doi: 10.31035/cg2021054.
Hydrogeochemical characteristics of groundwater and pore-water and the paleoenvironmental evolution in the past 3.10 Ma in the Xiong'an New Area, North China
Kai Zhao, Jing-xian Qi, Yi Chen, Bai-heng Ma, Li Yi, Hua-ming Guo, Xin-zhou Wang, Lin-ying Wang, Hai-tao Li
The groundwater level has been continuously decreasing due to climate change and long-term overexploitation in the Xiong'an New Area, North China, which has caused enhanced mixing of groundwater between different aquifers and significant changes in regional groundwater chemistry. In this study, groundwater and sediment pore-water in drilling cores obtained from a 600 m borehole were investigated to evaluate hydrogeochemical processes in shallow and deep aquifers and the paleo-environmental evolution over the past ca. 3.10 Ma. Results showed no obvious overall change in chemical composition along the direction of groundwater runoff, but different hydrochemical processes occurred in shallow and deep groundwater in the vertical direction. Shallow groundwater (< 150 m) in the Xiong'an New Area was characterized by high salinity (TDS > 1000 mg/L) and high concentrations of Mn and Fe, while deep groundwater had better water quality with lower salinity. The high TDS values mostly occurred in aquifers at depths of < 70 m and > 500 m below the land surface. Water isotopes showed that aquifer pore-water mostly originated from meteoric water under the influence of evaporation, whereas aquitard pore-water is paleo-meteoric water. In addition, the evolution of the paleoclimate since 3.10 Ma BP was reconstructed, and four climate periods were determined from the δ18O profiles of pore-water and sporopollen records from sediments at different depths. It can be inferred that the Quaternary Pleistocene (0.78‒2.58 Ma BP) was dominated by the cold and dry climate of the glacial period, with three interglacial intervals of warm and humid climate. Moreover, this study demonstrates the potential of pore-water for hydrogeochemical study and further supports the finding that pore-water can retain the signature of paleo-sedimentary water.
Kai Zhao, Jing-xian Qi, Yi Chen, Bai-heng Ma, Li Yi, Hua-ming Guo, Xin-zhou Wang, Lin-ying Wang and Hai-tao Li. Hydrogeochemical characteristics of groundwater and pore-water and the paleoenvironmental evolution in the past 3.10 Ma in the Xiong'an New Area, North China[J]. China Geology, 2021, 4(3): 476-486. doi: 10.31035/cg2021058.
Current situation and human health risk assessment of fluoride enrichment in groundwater in the Loess Plateau: A case study of Dali County, Shaanxi Province, China
Rui-ping Liu, Hua Zhu, Fei Liu, Ying Dong, Refaey M El-Wardany
This study aims to investigate the mechanisms and health risks of fluoride enrichment in groundwater in the Loess Plateau, China. Taking Dali County, Shaanxi Province, China as an example, the study obtains the following results through field investigation and the analyses of water, soil, and crop samples. (1) The groundwater can be divided into two major types, namely Quaternary pore-fissure water and karst water. The karst area and sandy area have high-quality groundwater and serve as the target areas for optional water supply. The groundwater in the study area is slightly alkaline and highly saline. High-fluoride groundwater is mainly distributed in the loess and river alluvial plains in the depression area of the Guanzhong Basin and in the discharge areas of the groundwater, with the highest fluoride concentration exceeding seven times the national standard. (2) Fluoride in groundwater originates from natural sources and human activities. The natural source is the fluoride-bearing minerals in rocks and soil, and fluoride from this source is mainly controlled by natural factors such as climate, geologic setting, pH, the specific hydrochemical environment, ion exchange, and mineral saturation. The anthropogenic sources can be further divided primarily into industrial and agricultural sources. (3) The health risks of fluoride contamination are very high in the Loess Plateau, especially for children compared with adults. Meanwhile, the risks of fluoride exposure through food intake are higher than those through drinking water intake. The authors suggest selecting target areas to improve the water supply and ensure the safety of drinking water in the study area. Besides, it is necessary to plant crops with low fluoride content or cash crops and to conduct groundwater treatment to reduce the fluoride concentration in drinking water. These results will provide a theoretical basis for safe water supply in the faulted basin areas of the Loess Plateau.
Rui-ping Liu, Hua Zhu, Fei Liu, Ying Dong and Refaey M El-Wardany. Current situation and human health risk assessment of fluoride enrichment in groundwater in the Loess Plateau: A case study of Dali County, Shaanxi Province, China[J]. China Geology, 2021, 4(3): 487-497. doi: 10.31035/cg2021051.
Determining the groundwater basin and surface watershed boundary of Dalinuoer Lake in the middle of Inner Mongolian Plateau, China and its impacts on the ecological environment
Wen-peng Li, Long-feng Wang, Yi-long Zhang, Li-jie Wu, Long-mei Zeng, Zhong-sheng Tuo
The surface watershed and the groundwater basin each have a fixed recharge scale; they are not only the basic units for hydrologic cycle research but also control the formation and evolution of water resources and the corresponding eco-geological environment pattern. Accurately identifying the boundaries of the surface watershed and groundwater basin is the basis for properly understanding the hydrologic cycle and conducting watershed-scale water balance analysis in areas of complicated geologic structure, especially when the two boundaries are inconsistent. In this study, Dalinuoer Lake, located in the middle of the Inner Mongolian Plateau, which has a complicated geologic structure, was selected as a representative case. Based on a multidisciplinary comprehensive analysis of topography, tectonics, hydrogeology, groundwater dynamics, and stable isotopes, the results suggest the following: (1) The surface watershed ridge and the groundwater basin divide of Dalinuoer Lake are inconsistent. The surface watershed is divided into two separate groundwater systems, with almost no groundwater exchange, by the SW-NE Haoluku Anticlinorium Fault, which has an obvious water-blocking effect. The surface drainage area of Dalinuoer Lake is 6139 km2. The northern region A is the Dalinuoer Lake groundwater system, with an area of 4838 km2, and the southern region B is the Xilamulun Riverhead groundwater system, with an area of 1301 km2. (2) The groundwater in the south of region A and the spring-fed river are important recharge sources for Dalinuoer Lake, with a greater recharge effect than the northern Gonggeer River system. (3) It is speculated that the trend of the Haoluku Anticlinorium Fault is the boundary between the westerlies and the East Asian summer monsoon (EASM) climate systems, which refines previous understanding of this boundary line. At present, the Dalinuoer Lake watershed has gone through a prominent warming-drying period, which has led to reduced precipitation, rising temperature, and increased water use by human activities. The hydrological cycle and the lake eco-environment at the watershed scale are therefore still bound to change, which may pose a potential risk of deterioration to the suitability of fish habitat. The results can provide basic support for better understanding the evolution of the water balance and the causes of lake area shrinkage, as well as for implementing ecological protection and restoration of the Dalinuoer Lake watershed.
Wen-peng Li, Long-feng Wang, Yi-long Zhang, Li-jie Wu, Long-mei Zeng and Zhong-sheng Tuo. Determining the groundwater basin and surface watershed boundary of Dalinuoer Lake in the middle of Inner Mongolian Plateau, China and its impacts on the ecological environment[J]. China Geology, 2021, 4(3): 498-508. doi: 10.31035/cg2021066.
Distribution, characteristics and influencing factors of fresh groundwater resources in the Loess Plateau, China
Hai-xue Li, Shuang-bao Han, Xi Wu, Sai Wang, Wei-po Liu, Tao Ma, Meng-nan Zhang, Yu-tao Wei, Fu-qiang Yuan, Lei Yuan, Fu-cheng Li, Bin Wu, Yu-shan Wang, Min-min Zhao, Han-wen Yang, Shi-bo Wei
The fresh groundwater in the Loess Plateau serves as a major source of water for the production and livelihood of local residents and is greatly significant for regional economic and social development and ecological protection. This paper analyzes the hydrogeological conditions and groundwater characteristics of the Loess Plateau, expatiates on the types and distribution characteristics of the fresh groundwater in the plateau, and, as a priority, analyzes the influencing factors and mechanisms in the formation of the fresh groundwater. Based on this, it summarizes the impacts of human activities and climate change on the regional fresh groundwater. The groundwater in the Loess Plateau features uneven temporal-spatial distribution, with the distribution space of the fresh groundwater closely related to precipitation. The groundwater shows a distinct zoning pattern of hydrochemical types: overall, it is fresh water in shallow parts and salt water in deep parts, while fresh water of exploitation value is distributed only over a small range. The storage space and migration pathways of fresh groundwater in the loess area feature dual voids, vertical multilayering, variable structure, poor renewability, complex recharge processes, and distinct spatial differences. In general, the total dissolved solids (TDS) of the same type of groundwater tends to increase gradually from recharge areas to discharge areas. Conditions favorable for the formation of fresh groundwater in loess tablelands include the low content of soluble salts in strata, weak evaporation, and special hydrodynamic conditions. Owing to climate change and human activities, the quantity of regional fresh water resources tends to decrease overall, and the groundwater dynamic field and the recharge-discharge relationships between groundwater and surface water have changed in local areas. Human activities have a small impact on the quality but slightly affect the quantity of the groundwater in loess.
Hai-xue Li, Shuang-bao Han, Xi Wu, Sai Wang, Wei-po Liu, Tao Ma, Meng-nan Zhang, Yu-tao Wei, Fu-qiang Yuan, Lei Yuan, Fu-cheng Li, Bin Wu, Yu-shan Wang, Min-min Zhao, Han-wen Yang and Shi-bo Wei. Distribution, characteristics and influencing factors of fresh groundwater resources in the Loess Plateau, China[J]. China Geology, 2021, 4(3): 509-526. doi: 10.31035/cg2021057.
Effects of groundwater level on vegetation in the arid area of western China
Ge Song, Jin-ting Huang, Bo-han Ning, Jia-wei Wang, Lei Zeng
At present, investigation of the relationship between groundwater level change and vegetation mostly focuses on specific watersheds, i.e., it is limited to the river catchment scale. Understanding the effect of groundwater level change on vegetation at the basin or larger scale is urgently needed. To fill this gap, two typical arid areas in western China (the Tarim Basin and the Qaidam Basin) were chosen as the research areas. Vegetation status was evaluated via the normalized difference vegetation index (NDVI) from 2000 to 2016, sourced from the MODN1F dataset. The data used to reflect climate change were downloaded from the CMDSC (http://data.cma.cn). Groundwater level data were collected from monitoring wells. Then, the relationship between vegetation and climate change was established with univariate linear regression and correlation analysis approaches. Results show that, generally, the NDVI value in the study area decreased before 2004 and then increased over the research period. Severe degradation was observed in the center of the basin: the area with an NDVI value > 0.5 decreased from 12% to 6% between 2000 and 2004. From 2004 to 2014, the vegetation in the study area gradually recovered. The overall coverage of the Qaidam Basin was low, and the NDVI around the East Taigener salt lake degraded significantly, from 0.596 in 2014 to 0.005 in 2016. The fluctuation of the groundwater level was the main reason for the change in surface vegetation coverage during the vegetation degradation in the basin; the average annual precipitation in the study area is low and is not enough to have a significant impact on vegetation growth. Annual average precipitation showed an increasing trend during the vegetation restoration in the basin, which alleviated the water shortage for vegetation growth in the region. Meanwhile, the dependence of surface vegetation on groundwater obviously weakened, with a correlation coefficient of −0.248. The research results are of some significance for eco-environment protection in the arid areas of western China.
Ge Song, Jin-ting Huang, Bo-han Ning, Jia-wei Wang and Lei Zeng. Effects of groundwater level on vegetation in the arid area of western China[J]. China Geology, 2021, 4(3): 527-535. doi: 10.31035/cg2021062.
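A minimal sketch of the NDVI computation and trend estimate described in the abstract above (illustrative only; the array names and synthetic data are assumptions):

# NDVI from near-infrared and red reflectance, plus a simple linear trend
# (slope per year) of annual mean NDVI, as in a univariate regression analysis.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndvi_trend(years: np.ndarray, annual_mean_ndvi: np.ndarray) -> float:
    slope, _intercept = np.polyfit(years, annual_mean_ndvi, deg=1)
    return slope  # NDVI units per year; negative values indicate degradation

years = np.arange(2000, 2017)
series = 0.30 - 0.002 * (years - 2000)  # synthetic declining NDVI series
print(ndvi_trend(years, series))        # ~ -0.002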
China's water resources in 2020
Xi-jie Chen, Long-feng Wang, Li-qiong Jia, Ting Jia
Xi-jie Chen, Long-feng Wang, Li-qiong Jia and Ting Jia. China's water resources in 2020[J]. China Geology, 2021, 4(3): 536-538. doi: 10.31035/cg2021063.
Expanded GDoF-optimality Regime of Treating Interference as Noise in the $M\times 2$ X-Channel
Soheil Gherekhloo, Anas Chaaban, Aydin Sezgin
Computer, Electrical and Mathematical Science and Engineering
Treating interference as noise (TIN) as the most appropriate approach to dealing with interference, and the conditions for its optimality, have attracted the interest of researchers recently. However, our knowledge of necessary and sufficient conditions for TIN is restricted to a few setups with a limited number of users. In this paper, we study the optimality of TIN in terms of the generalized degrees of freedom (GDoF) for a fundamental network, namely, the M×2 X-channel. To this end, the achievable GDoF of TIN with power allocation at the transmitters is studied. It turns out that the transmit power allocation maximizing the achievable GDoF is given by on-off signaling as long as the receivers use TIN. This leads to two variants of TIN, namely, P2P-TIN and 2-IC-TIN. While in the first variant the M×2 X-channel is reduced to a point-to-point (P2P) channel, in the second variant the setup is reduced to a two-user interference channel in which the receivers use TIN. The optimality of these two variants is studied separately. To this end, novel genie-aided upper bounds on the capacity of the X-channel are established. The conditions for the optimality of P2P-TIN can be summarized as follows: P2P-TIN is GDoF-optimal if there exists a dominant multiple-access channel or a dominant broadcast channel embedded in the X-channel. Furthermore, the necessary and sufficient conditions for the GDoF-optimality of 2-IC-TIN are presented. Interestingly, it turns out that operating the M×2 X-channel in the 2-IC-TIN mode might still be GDoF-optimal even when the conditions given by Geng et al. are violated. However, 2-IC-TIN is sub-optimal if there exists a single interferer which causes sufficiently strong interference at both receivers. A comparison of the results with the state of the art shows that the GDoF-optimality regime of TIN is expanded significantly.
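For readers unfamiliar with the metric (a standard definition, not quoted from this paper): if the link strengths of a network are parameterized as $\mathrm{SNR}_{ij} = \rho^{\alpha_{ij}}$ for a nominal SNR $\rho$, the generalized degrees of freedom of a network with sum capacity $C(\rho)$ are defined as $d(\boldsymbol{\alpha}) = \lim_{\rho\to\infty} C(\rho)/\log\rho$, i.e., the capacity pre-log at high SNR as a function of the exponent profile $\boldsymbol{\alpha}$.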
IEEE Transactions on Information Theory
https://doi.org/10.1109/TIT.2016.2628376
Published - Nov 14 2016
Gherekhloo, S., Chaaban, A., & Sezgin, A. (2016). Expanded GDoF-optimality Regime of Treating Interference as Noise in the $M\times 2$ X-Channel. IEEE Transactions on Information Theory, 63(1), 355-376. https://doi.org/10.1109/TIT.2016.2628376
Trigonometric Graphs
Symmetrical and periodic nature of trig functions
Intro to sin(x), cos(x) and tan(x)
Key features of sine and cosine curves
Amplitude of sine and cosine
Period changes for sine and cosine
Phase shifts for sine and cosine
Transformations of sine and cosine curves and equations
Domain and range of sine and cosine curves
Graphing sine curves
Sine Waves and Music (Investigation)
Graphing cosine curves
Finding equations of sine and cosine curves
Graphical solution of trigonometric equations involving sine and cosine
Applications of sine and cosine functions
Graph sums of sine and cosine (rad)
Key features of tangent curves
Dilation of tangent curves
Period changes for tangents
Phase shifts for tangents
Transformations of tangent curves and equations
Domain and range of tangent curves
Graphing tangent curves
Find the equation of a tangent curve
Intro to sec(x), cosec(x) and cot(x)
Key features of cot, sec and cosec curves
Transformations of cot, sec and cosec curves and equations
Domain and range of cot, sec and cosec curves
Graphing cot, sec and cosec curves
Find the equation of a cot, sec and cosec curve
Solve harmonic motion problems
We say an object has a symmetry property if an aspect of it remains essentially the same after the object has been transformed in some systematic way.
From the unit circle definitions of the sine and cosine functions, we see that the function values repeat at intervals of $2\pi$. We say that these functions have a period of $2\pi$, meaning the function value at some number $x$ is always the same as the value at $x+2\pi$. That is, $\sin\left(x+2\pi\right)=\sin x$ and $\cos\left(x+2\pi\right)=\cos x$.
We say the sine and cosine functions are symmetrical under a translation by $2\pi$.
There are other important symmetries possessed by these two functions. Again, by looking at the unit circle diagram, we observe that
$\sin\left(-x\right)=-\sin x$ and
$\cos\left(-x\right)=\cos x$
Any function that has the property $f\left(-x\right)=-f\left(x\right)$ is called an odd function. Thus, sine is an odd function. When it is represented by means of a graph, one can see that the picture will look the same if the graph is rotated about the origin by $180^\circ$. This property is characteristic of odd functions.
Any function that has the property $f\left(-x\right)=f\left(x\right)$ is called an even function. Thus, the cosine function is an even function. The graph of any even function is the same as its reflection about the vertical axis.
We can check that the tangent function is an odd function. We make use of the unit circle definition of the tangent function:
$\tan\left(-x\right)=\frac{\sin\left(-x\right)}{\cos\left(-x\right)}=\frac{-\sin x}{\cos x}=-\frac{\sin x}{\cos x}=-\tan x$
The tangent function also has translational symmetry. It has a period of $\pi$. We can verify from the unit circle diagram or from the graphs of sine and cosine that $\sin\left(x+\pi\right)=-\sin x$ and $\cos\left(x+\pi\right)=-\cos x$. This means that $\tan\left(x+\pi\right)=\frac{\sin\left(x+\pi\right)}{\cos\left(x+\pi\right)}=\frac{-\sin x}{-\cos x}=\tan x$ for all values of $x$, and this is the required condition.
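These symmetry identities are easy to confirm numerically. The following sketch is not part of the original lesson; it assumes NumPy is available, and it checks the period and odd/even properties at a grid of sample points.

```python
import numpy as np

# Sample points chosen so that neither x nor x + pi lands on a pole of tan
x = np.linspace(-1.5, 1.5, 301)

# Periodicity: sin and cos repeat every 2*pi, tan repeats every pi
assert np.allclose(np.sin(x + 2 * np.pi), np.sin(x))
assert np.allclose(np.cos(x + 2 * np.pi), np.cos(x))
assert np.allclose(np.tan(x + np.pi), np.tan(x))

# Odd/even symmetry: sin and tan are odd, cos is even
assert np.allclose(np.sin(-x), -np.sin(x))
assert np.allclose(np.cos(-x), np.cos(x))
assert np.allclose(np.tan(-x), -np.tan(x))

print("All symmetry identities hold at the sampled points.")
```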
Examine the graph of $y=\sin x$.
How long is one cycle of the graph?
State the $x$ values for which $\sin x=0$, from $x=0$ to $x=2\pi$ inclusive.
State the first $x$ value for which $\sin x=0.5$.
Using the symmetry of the graph, for what other value of $x$ shown on the graph does $\sin x=0.5$?
Using the symmetry of the graph, for what values of $x$ does $\sin x=-0.5$?
Examine the graph of $y=\tan x+2$.
State the $x$ values for which $\tan x+2=2$, from $x=-2\pi$ to $x=2\pi$ inclusive.
State the first positive $x$ value for which $\tan x+2=3$.
Using the period of the graph, for what other values of $x$ between $x=-2\pi$ and $x=2\pi$ does $\tan x+2=3$?
Write all answers on the same line separated by commas.
For what values of $x$ between $x=-2\pi$ and $x=2\pi$ does $\tan x+2=1$?
Examine the graph of $y=\cos x$.
State the exact value of $\cos\frac{\pi}{6}$.
Use the graph to determine all other values of $x$ between $x=-\pi$ and $x=\pi$ for which $\cos x=\pm\frac{\sqrt{3}}{2}$.
Display and interpret the graphs of functions with the graphs of their inverse and/or reciprocal functions
beta distribution example
Probability density function. The beta distribution is a continuous probability distribution parametrized by two positive shape parameters, $\alpha$ and $\beta$, which appear as exponents of the random variable $x$ and control the shape of the distribution. (The shape parameters are sometimes denoted q and r instead of $\alpha$ and $\beta$.) A beta distribution is used to model things that have a limited range, like 0 to 1: for example, it might be used to find how likely it is that your preferred candidate for mayor will receive 70% of the vote, or to model the percentage of votes a particular politician would get in an upcoming election. It also describes the probability of success in an experiment having only two outcomes, like success and failure, which is why it is best thought of as a probability distribution of probabilities; a great article on understanding the beta distribution works through this with an example from baseball, where the national batting average is 0.27. The probability density function for the four-parameter beta distribution, with lower and upper bounds $a$ and $b$, is

$$f(x;\alpha,\beta,a,b)=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\cdot\frac{(x-a)^{\alpha-1}(b-x)^{\beta-1}}{(b-a)^{\alpha+\beta-1}},\qquad a\le x\le b,$$

where $\Gamma$ is the gamma function. Given that there are four parameters to be determined, it is termed the four-parameter beta distribution; a four-parameter or general beta distribution can always be transformed into the two-parameter or standard beta distribution on $(0,1)$. One can check that this density integrates to one, so it is a legitimate probability density function. The beta distribution is also known as a Pearson Type I distribution, and the beta cdf is the same as the incomplete beta function. (A typical figure shows the standard cumulative beta distribution function with $\alpha = 4$ and $\beta = 5$. In typical usage, "beta distribution of the first kind" is another name for the basic beta distribution, while the "beta distribution of the second kind" is the beta prime distribution.)

The Bayesian intuition is easiest with a coin. Think of $\alpha$ and $\beta$ as imaginary coin flips made before you actually flip the coin: $\alpha - 1$ is the number of heads you get, and $\beta - 1$ is the number of tails. A fair coin has $\alpha = \beta$, and the magnitude describes how confident you are about your belief: if the coin is fair, then it is most likely that the coin will land heads half of the time, so $p = 50\%$ is the most likely value for $p$. But wait: it is also possible to have an unfair coin that behaves accidentally like a fair coin, so a less confident guy would probably assign $\alpha = \beta = 3$. Sometimes during experiments we don't want what we already know to bias the way we interpret data. If we know nothing about the coin, we say anything can happen: it is equally likely to be a fair coin, a two-headed coin, a two-tailed coin, or any mixture of alloy that has one side heavier than the other. Now consider the case where the coin is biased 20% towards heads and we start with such an uninformative prior: as flips accumulate, the posterior instantly emerges and centers around $p = 20\%$. More generally, if there exists a prior distribution for an event whose outcome lies within an interval ($a < X < b$ or $0 < X < 1$), then based on the upcoming event outcomes the prior may change; the change in the prior probability distribution shows up as a change in the values of the shape parameters $\alpha$ and $\beta$. Thus, if the likelihood probability function is a binomial distribution, the beta distribution is called the conjugate prior of the binomial distribution.
Note that in the four-parameter form the interval bounds are $a$ and $b$, and the shape parameters $q$ and $r$ correspond to $\alpha$ and $\beta$. When all four parameters (the inner and outer bounds of the interval as well as $\alpha$ and $\beta$) are unknown, the distribution is known as the general, or four-parameter, beta distribution; when only $\alpha$ and $\beta$ are unknown and the interval runs from 0 to 1, it is known as the standard, or two-parameter, beta distribution. The mean of the beta distribution is $\frac{\alpha}{\alpha + \beta}$. The beta distribution is often used in Bayesian modeling: because it serves as a prior distribution, it can act as a conjugate prior to the likelihood probability distribution function. In the baseball example, as the player swings his bat we update $\alpha$ and $\beta$ along the way. To see the updating at work, let's start with an uninformative prior, the uniform distribution, which is a special case of the beta parametrized as Beta($\alpha$=1, $\beta$=1), and suppose the coin is indeed fair, with $\alpha$ and $\beta$ thought of, as before, as imaginary coin flips.
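The conjugate-prior update described above is a one-liner in practice. The sketch below is an illustration added here, not from the original post; it assumes SciPy is available, starts from the uniform Beta(1, 1) prior, and updates it with hypothetical coin flips.

```python
from scipy.stats import beta

# Uninformative prior: Beta(alpha=1, beta=1), i.e. the uniform distribution
a, b = 1.0, 1.0

# Hypothetical data: 20 flips of a coin biased 20% towards heads
heads, tails = 4, 16

# Conjugate update for a binomial likelihood: add heads to alpha, tails to beta
a_post, b_post = a + heads, b + tails

posterior = beta(a_post, b_post)
print(f"posterior mean = {posterior.mean():.3f}")    # ~0.227, centering near p = 0.2
print(f"95% credible interval = {posterior.interval(0.95)}")
```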
Works by M. Pavšič
On the Interpretation of the Relativistic Quantum Mechanics with Invariant Evolution Parameter. Matej Pavšič - 1991 - Foundations of Physics 21 (9):1005-1019.
The relativistic quantum mechanics with Lorentz-invariant evolution parameter and indefinite mass is a very elegant theory. But it cannot be derived by quantizing the usual classical relativity in which there is the mass-shell constraint. In this paper the classical theory is modified so that it remains Lorentz invariant, but the constraint disappears; mass is no longer fixed—it is an arbitrary constant of motion. The quantization of this unconstrained theory gives the relativistic quantum mechanics in which wave functions are localized and normalized in spacetime. Though many authors have published good works in support of such a localization in time, the latter has generally been considered problematic. Here I show that wave packets restricted to a finite region of spacetime are not a nuisance, but just the contrary. They have a physical interpretation in the fact that an observer perceives a world line event by event, as his experience of "now" proceeds in spacetime. Quantum mechanically this means that at a certain value of the evolution parameter τ the event will most probably occur within the spacetime region occupied by the wave packet; at a later value of τ the position—and hence the time coordinate t—of the wave packet is changed. This is closely related to the interpretation of quantum mechanics in general.
Quantum Mechanics, Misc in Philosophy of Physical Science
Clifford-Algebra Based Polydimensional Relativity and Relativistic Dynamics. Matej Pavšič - 2001 - Foundations of Physics 31 (8):1185-1209.
Starting from the geometric calculus based on Clifford algebra, the idea that physical quantities are Clifford aggregates ("polyvectors") is explored. A generalized point particle action ("polyvector action") is proposed. It is shown that the polyvector action, because of the presence of a scalar (more precisely a pseudoscalar) variable, can be reduced to the well known, unconstrained, Stueckelberg action which involves an invariant evolution parameter. It is pointed out that, starting from a different direction, DeWitt and Rovelli postulated the existence of a clock variable attached to particles which serves as a reference system for identification of spacetime points. The action they postulated is equivalent to the polyvector action. Relativistic dynamics (with an invariant evolution parameter) is thus shown to be based on even stronger theoretical and conceptual foundations than usually believed.
Space and Time in Philosophy of Physical Science
The Embedding Model of Induced Gravity with Bosonic Sources. Matej Pavšič - 1994 - Foundations of Physics 24 (11):1495-1518.
We consider a theory in which spacetime is a 4-dimensional manifold $V_4$ embedded in an N-dimensional space $V_N$. The dynamics is given by a first-order action which is a straightforward generalization of the well-known Nambu-Goto string action. Instead of the latter action we then consider an equivalent action, a generalization of the Howe-Tucker action, which is a functional of the (extrinsic) embedding variables $\eta^a(x)$ and of the (intrinsic) induced metric $g_{\mu\nu}(x)$ on $V_4$. In the quantized theory we can define an effective action by means of the Feynman path integral in which we functionally integrate over the embedding variables. What remains is functionally dependent solely on the induced metric. It is well known that the effective action so obtained contains the Ricci scalar R and its higher orders. But due to our special choice of a quantity, the so-called "matter" density ω(η) in $V_N$ entering the original first-order action, it turns out that the effective action contains also the source term. The latter is in general that of a p-dimensional membrane (p-brane). In particular we consider the case of bosonic point particles. Finally we discuss and clarify certain interpretational aspects of quantum mechanics from the viewpoint of our embedding model.
Philosophy of Physics, Miscellaneous in Philosophy of Physical Science
On the Resolution of Time Problem in Quantum Gravity Induced From Unconstrained Membranes. Matej Pavšič - 1996 - Foundations of Physics 26 (2):159-195.
The relativistic theory of unconstrained p-dimensional membranes (p-branes) is further developed and then applied to the embedding model of induced gravity. Space-time is considered as a 4-dimensional unconstrained membrane evolving in an N-dimensional embedding space. The parameter of evolution or the evolution time τ is a distinct concept from the coordinate time $t=x^0$. Quantization of the theory is also discussed. A covariant functional Schrödinger equation has a solution for the wave functional such that it is sharply localized in a certain subspace P of space-time, and much less sharply localized (though still localized) outside P. With the passage of evolution the region P moves forward in space-time. Such a solution we interpret as incorporating two seemingly contradictory observations: (i) experiments clearly indicate that space-time is a continuum in which events are existing; (ii) not the whole 4-dimensional space-time, but only a 3-dimensional section which moves forward in time, is accessible to our immediate experience. The notorious problem of time is thus resolved in our approach to quantum gravity. Finally we include sources into our unconstrained embedding model.
Quantum Gravity in Philosophy of Physical Science
Clifford Space as a Generalization of Spacetime: Prospects for QFT of Point Particles and Strings. [REVIEW] Matej Pavšič - 2005 - Foundations of Physics 35 (9):1617-1642.
The idea that spacetime has to be replaced by Clifford space (C-space) is explored. Quantum field theory (QFT) and string theory are generalized to C-space. It is shown how one can solve the cosmological constant problem and formulate string theory without central terms in the Virasoro algebra by exploiting the peculiar pseudo-Euclidean signature of C-space and the Jackiw definition of the vacuum state. As an introduction into the subject, a toy model of the harmonic oscillator in pseudo-Euclidean space is studied.
Parametrized Field Theory. Matej Pavšič - 1998 - Foundations of Physics 28 (9):1453-1464.
A theory is presented in which a field depends not only on spacetime coordinates $x^\mu$, but also on a Lorentz-invariant parameter τ. Such a theory is conceptually and technically simple and manifestly covariant at every step. The generator of evolution and the generator of spacetime translations and Lorentz transformations are obtained in a straightforward way. In the quantized theory the Heisenberg equation of motion is written in a covariant form and is equivalent to the field equation. The equal τ commutator between the field and its canonically conjugate momentum is just proportional to the spacetime δ function. Finally comparison with the conventional field theory is done, and it is found that the expectation value of the momentum operator in the on shell states is the same.
Formulation of a Relativistic Theory Without Constraints. Matej Pavšič - 1998 - Foundations of Physics 28 (9):1443-1451.
A relativistic, i.e., Lorentz covariant theory without constraints is formulated. This is possible if we allow the dynamical variables to depend on an invariant parameter τ. Thus we obtain a dynamical theory in spacetime, called relativistic dynamics. First the case of a point particle, and then of extended objects such as membranes of arbitrary dimensions, are considered.
On a Unified Theory of Generalized Branes Coupled to Gauge Fields, Including the Gravitational and Kalb–Ramond Fields. M. Pavšič - 2007 - Foundations of Physics 37 (8):1197-1242.
We investigate a theory in which fundamental objects are branes described in terms of higher grade coordinates $X^{\mu_1 \ldots \mu_n}$ encoding both the motion of a brane as a whole, and its volume evolution. We thus formulate a dynamics which generalizes the dynamics of the usual branes. Geometrically, coordinates $X^{\mu_1 \ldots \mu_n}$ and associated coordinate frame fields $\{\gamma_{\mu_1 \ldots \mu_n}\}$ extend the notion of geometry from spacetime to that of an enlarged space, called Clifford space or C-space. If we start from four-dimensional spacetime, then the dimension of C-space is 16. The fact that C-space has more than four dimensions suggests that it could serve as a realization of the Kaluza-Klein idea. The "extra dimensions" are not just the ordinary extra dimensions; they are related to the volume degrees of freedom, therefore they are physical, and need not be compactified. Gauge fields are due to the metric of Clifford space. It turns out that amongst the latter gauge fields there also exist higher grade, antisymmetric fields of the Kalb–Ramond type, and their non-Abelian generalization. All those fields are naturally coupled to the generalized branes, whose dynamics is given by a generalized Howe–Tucker action in curved C-space.
Gauge Theories in Philosophy of Physical Science
Quantum Gravity Induced From Unconstrained Membranes. Matej Pavšič - 1998 - Foundations of Physics 28 (9):1465-1477.
The theory of unconstrained membranes of arbitrary dimension is presented. Their relativistic dynamics is described by an action which is a generalization of the Stueckelberg point-particle action. In the quantum version of the theory, the evolution of a membrane's state is governed by the relativistic Schrödinger equation. Particular stationary solutions correspond to the conventional, constrained membranes. Contrary to the usual practice, our spacetime is identified, not with the embedding space (which brings the problem of compactification), but with a membrane of dimension 4 or higher. A 4-membrane is thus assumed to represent spacetime. The Einstein-Hilbert action emerges as an effective action after functionally integrating out the membrane's embedding functions.
Rigid Particle and its Spin Revisited. Matej Pavšič - 2007 - Foundations of Physics 37 (1):40-79.
The arguments by Pandres that the double valued spherical harmonics provide a basis for the irreducible spinor representation of the three dimensional rotation group are further developed and justified. The usual arguments against the admissibility of such functions, concerning hermiticity, orthogonality, behaviour under rotations, etc., are all shown to be related to the unsuitable choice of functions representing the states with opposite projections of angular momentum. By a correct choice of functions and definition of inner product those difficulties do not occur. And yet the orbital angular momentum in the ordinary configuration space can have integer eigenvalues only, for a reason which has roots in the nature of quantum mechanics in such space. The situation is different in the velocity space of the rigid particle, whose action contains a term with the extrinsic curvature.
When a bomb explodes, does its momentum remain the same?
If a bomb is dropped from an aircraft towards an object on the ground and explodes before it hits the object, i.e. it explodes in the middle of its path, does its momentum remain the same?
I know that the law of conservation of momentum can be applied when no forces other than the mutual action-reaction forces act between and among the objects. Since many kinds of chemical reactions are involved in this case, can we apply the law of conservation of momentum here? If not, what would happen: would the momentum increase or decrease? I'm not sure which would occur. Please tell me what may happen and the reasons behind it.
At first I thought the momentum would decrease, because the mass of the particles created by the explosion of the bomb is close to zero, and the product of a large velocity with a very small mass gives another value that is also near zero. Later I started to think my idea was not correct, because the product can be large if the velocity of a very small particle is high enough.
newtonian-mechanics momentum conservation-laws explosions
Nazmul Hassan
Forget the airplane, take it out into deep space. Explode the bomb in the absence of air or of any nearby object. Conservation of momentum describes a system. It's the total momentum of the system that is conserved. So, if the system consists of nothing but the bomb... – Solomon Slow May 6 '16 at 18:43
UPDATE : The explosion itself conserves linear momentum, regardless of how small the fragments are. If we ignore gravity and air resistance and all other external forces, there is no change in total momentum. This is because the internal forces all occur in equal and opposite pairs (Newton's 3rd Law).
If we take the external forces into account, then momentum is not conserved. Even during the brief explosion, the bomb and its fragments are being accelerated downwards by gravity. Air resistance will also affect different fragments unequally because it depends on the size and speed of the fragments, which are very unlikely to be equal.
However, if we extend "the system" to include the Earth and its atmosphere as well as the bomb and its fragments, then we can again say that momentum is conserved - before, during and after the explosion. All of the forces which we are taking account of (gravity, air resistance, buoyancy, and the bomb blast pushing fragments apart) are now internal forces, so Newton's 3rd Law again applies.
ORIGINAL ANSWER :
Neglecting air resistance, and until any of the fragments of the bomb reach the ground, the centre of mass of the bomb follows the same trajectory as it would if the bomb did not explode - i.e. part of a parabola.
Conservation of linear momentum does not apply here because there is an outside force (gravity) which changes the magnitude and direction of the total momentum. However, all of the forces in the explosion are internal forces (action/ reaction) which do not alter the motion of the centre of mass and do not affect total momentum.
The explosion does not change the total mass of the bomb.
sammy gerbil
... not by much, anyway. – safkan May 6 '16 at 21:12
Since the gravitational force acts here and its magnitude is not negligible, the total momentum is not conserved? Am I right? If it changes, since an outside force (the gravitational force) acts here, does the momentum increase? – Nazmul Hassan May 7 '16 at 1:44
@NazmulHassan : That's right, total momentum of the bomb/fragments is not conserved. Strictly speaking, momentum is a vector so it can change direction as well as magnitude. If (as here) there is a change in direction it does not make sense to ask if the change in momentum is an increase or decrease. As with any projectile, if it moves up (against gravity force) speed decreases, if it moves down speed increases. – sammy gerbil May 7 '16 at 10:13
It will stay the same, if we neglect the variation due to gravity (every external force is going to change the momentum).
If we assume a uniform distribution of the shrapnels' mass (same size for all shrapnels), the shrapnels going in the direction the bomb was originally going will have, on average, higher velocity.
With a great simplification, we can say that, on explosion, the bomb splits into two identical parts of mass $M/2$. Assuming no gravity, the motion will be 1-dimensional. Since total momentum is conserved, we will have:
$$M V = \frac M 2 v_1 + \frac M 2 v_2$$
That is to say,
$$V = \frac{v_1 + v_2}{2}$$
Notice that kinetic energy won't be conserved, because we have to take into account some kind of chemical energy that triggers the explosion, which will be partly converted into heat, sound and radiation.
Anyway, remember that in the absence of external forces total momentum is always conserved.
Edit (clarification):
Yes, gravity is going to increase the total momentum of the system: $\frac{ d\vec q}{dt} = m \vec g$. The momentum of the whole system increases constantly in time (the bomb -before the explosion- and the shrapnels -after the explosion- accelerate with constant acceleration g towards the ground).
Anyway, your question if I got it correctly was more if the process triggering the explosion (chemical reaction etc.) would change the momentum. That's not the case: only an external force will change the total momentum of any system.
So the momentum of the system as a whole is increasing constantly due to gravity; but, the total momentum immediately before the explosion is exactly the same a the total momentum immediately after
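A quick numerical check of this before/after claim (a sketch added here, not part of the original answer): model the explosion as a series of equal-and-opposite internal impulses between fragments in one dimension, ignore gravity as the answer does for the instant of explosion, and compare total momentum before and after.

```python
import random

# Bomb of mass M moving with velocity V splits into n fragments of mass M/n.
M, V, n = 12.0, 50.0, 6
m = M / n
velocities = [V] * n  # just before the explosion, every piece moves with the bomb

# Internal forces come in equal-and-opposite pairs (Newton's 3rd law):
# apply a random impulse +J to one fragment and -J to another.
random.seed(0)
for _ in range(100):
    i, j = random.sample(range(n), 2)
    J = random.uniform(-40.0, 40.0)
    velocities[i] += J / m
    velocities[j] -= J / m

p_before = M * V
p_after = sum(m * v for v in velocities)
print(p_before, p_after)  # identical up to float rounding: 600.0, 600.0...
```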
valerio
I edited my answer to clarify the role of gravity and why I said it can be neglected. The point is, as I wrote in the last lines, that even if the momentum of the system as a whole is increasing constantly due to gravity, the total momentum immediately before the explosion is exactly the same as the total momentum immediately after. – valerio May 7 '16 at 10:14
That the bomb breaks apart due to the explosive forces, which are internal to the system, has nothing to do with the trajectory of the center of mass. As user Sammy Gerbil points out correctly, the initial trajectory of the bomb was parabolic, and even if the bomb exploded into a million fragments of different masses, the center of mass would continue on its path the same way as if nothing had happened. The explosion doesn't change anything for the COM. The only external force is the gravitational force (we are neglecting any drag forces due to air and the wind direction) and its direction is straight down, so the vertical component of the COM's velocity is increasing. The component of linear momentum along the y-axis continues to increase throughout the parabolic trajectory of the COM, while the horizontal velocity of the COM, and hence the horizontal component of the linear momentum of the system, remains unchanged because there is no net external force acting in that direction. The mass of the bomb remains unchanged (any mass converted to energy is negligible here).
Momentum is not conserved in this situation because an external force, gravity, is acting on your system, the bomb.
Remember that analyzing each fragment of the bomb individually would require you to take into account all the forces that act on it, including the explosive forces, whose magnitudes are practically impossible to know. Also, for individual fragments, the vertical and horizontal linear momenta change in a way different from the COM's, but when you vector-add them over all the individual pieces, you once again find the same vertical and horizontal values as the COM's as a function of time, as if it were a single particle moving in a parabolic path.
Entangled_Particle
Next, if these theorized safe and effective pills don't just get you through a test or the day's daily brain task but also make you smarter, whatever smarter means, then what? Where's the boundary between genius and madness? If Einstein had taken such drugs, would he have created a better theory of gravity? Or would he have become delusional, chasing quantum ghosts with no practical application, or worse yet, string theory. (Please use "string theory" in your subject line for easy sorting of hate mail.)
(In particular, I don't think it's because there's a sudden new surge of drugs. FDA drug approval has been decreasing over the past few decades, so this is unlikely a priori. More specifically, many of the major or hot drugs go back a long time. Bacopa goes back millennia, melatonin I don't even know, piracetam was the '60s, modafinil was '70s or '80s, ALCAR was '80s AFAIK, Noopept & coluracetam were '90s, and so on.)
A similar pill from HQ Inc. (Palmetto, Fla.) called the CorTemp Ingestible Core Body Temperature Sensor transmits real-time body temperature. Firefighters, football players, soldiers and astronauts use it to ensure that they do not overheat in high temperatures. HQ Inc. is working on a consumer version, to be available in 2018, that would wirelessly communicate to a smartphone app.
Analyzing the results is a little tricky because I was simultaneously running the first magnesium citrate self-experiment, which turned out to cause a quite complex result which looks like a gradually-accumulating overdose negating an initial benefit for net harm, and also toying with LLLT, which turned out to have a strong correlation with benefits. So for the potential small Noopept effect to not be swamped, I need to include those in the analysis. I designed the experiment to try to find the best dose level, so I want to look at an average Noopept effect but also the estimated effect at each dose size in case some are negative (especially in the case of 5-pills/60mg); I included the pilot experiment data as 10mg doses since they were also blind & randomized. Finally, missingness affects analysis: because not every variable is recorded for each date (what was the value of the variable for the blind randomized magnesium citrate before and after I finished that experiment? what value do you assign the Magtein variable before I bought it and after I used it all up?), just running a linear regression may not work exactly as one expects as various days get omitted because part of the data was missing.
Running low on gum (even using it weekly or less, it still runs out), I decided to try patches. Reading through various discussions, I couldn't find any clear verdict on what patch brands might be safer (in terms of nicotine evaporation through a cut or edge) than others, so I went with the cheapest Habitrol I could find as a first try of patches (Nicotine Transdermal System Patch, Stop Smoking Aid, 21 mg, Step 1, 14 patches) in May 2013. I am curious to what extent nicotine might improve a long time period like several hours or a whole day, compared to the shorter-acting nicotine gum which feels like it helps for an hour at most and then tapers off (which is very useful in its own right for kicking me into starting something I have been procrastinating on). I have not decided whether to try another self-experiment.
As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo.
Another class of substances with the potential to enhance cognition in normal healthy individuals is the class of prescription stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). These include methylphenidate (MPH), best known as Ritalin or Concerta, and amphetamine (AMP), most widely prescribed as mixed AMP salts consisting primarily of dextroamphetamine (d-AMP), known by the trade name Adderall. These medications have become familiar to the general public because of the growing rates of diagnosis of ADHD children and adults (Froehlich et al., 2007; Sankaranarayanan, Puumala, & Kratochvil, 2006) and the recognition that these medications are effective for treating ADHD (MTA Cooperative Group, 1999; Swanson et al., 2008).
It is a known fact that cognitive decline is often linked to aging. It may not be as visible as skin aging, but the brain does in fact age. Often, cognitive decline is not noticeable because it could be as mild as forgetting names of people. However, research has shown that even in healthy adults, cognitive decline can start as early as in the late twenties or early thirties.
"A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department in an interview with the New York Times.
Though coffee gives instant alertness, the effect lasts only for a short while. People who drink coffee every day may develop caffeine tolerance; this is the reason why it is still important to control your daily intake. It is advisable that an individual not consume more than 300 mg of caffeine a day. Caffeine, the world's favorite nootropic, has few side effects, but if consumed in abnormal excess it can result in nausea, restlessness, nervousness, and hyperactivity. This is the reason why people who need increased sharpness take L-theanine, or some other nootropic, along with caffeine. Today, you can find various smart drugs that contain caffeine. OptiMind, one of the best and most sought-after nootropics in the U.S. containing caffeine, is considered a top brain supplement for adults and kids when compared to other focus drugs on the market today.
Expect to experience an increase in focus and a drastic reduction in reaction time [11][12][13][14][15][16]. You'll have an easier time quickly switching between different mental tasks, and will experience an increase in general cognitive ability [17][18]. Queal Flow also improves cognition and motivation, by means of reducing anxiety and stress [19][20][21][22][23]. If you're using Flow regularly for a longer period of time, it's also very likely to improve your mental health in the long term (reducing cognitive decline), and might even improve your memory [24][25].
A big part is that we are finally starting to apply complex systems science to psycho-neuro-pharmacology and a nootropic approach. The neural system is awesomely complex and old-fashioned reductionist science has a really hard time with complexity. Big companies spends hundreds of millions of dollars trying to separate the effects of just a single molecule from placebo – and nootropics invariably show up as "stacks" of many different ingredients (ours, Qualia , currently has 42 separate synergistic nootropics ingredients from alpha GPC to bacopa monnieri and L-theanine). That kind of complex, multi pathway input requires a different methodology to understand well that goes beyond simply what's put in capsules.
So is there a future in smart drugs? Some scientists are more optimistic than others. Gary Lynch, a professor in the School of Medicine at the University of California, Irvine argues that recent advances in neuroscience have opened the way for the smart design of drugs, configured for specific biological targets in the brain. "Memory enhancement is not very far off," he says, although the prospects for other kinds of mental enhancement are "very difficult to know… To me, there's an inevitability to the thing, but a timeline is difficult."
None of that has kept entrepreneurs and their customers from experimenting and buying into the business of magic pills, however. In 2015 alone, the nootropics business raked in over $1 billion dollars, and web sites like the nootropics subreddit, the Bluelight forums, and Bulletproof Exec are popular and packed with people looking for easy ways to boost their mental performance. Still, this bizarre, Philip K. Dick-esque world of smart drugs is a tough pill to swallow. To dive into the topic and explain, I spoke to Kamal Patel, Director of evidence-based medical database Examine.com, and even tried a few commercially-available nootropics myself.
Power times prior times benefit minus cost of experimentation: $(0.20 \times 0.30 \times 540) - 41 = -9$. So the VoI is negative: because my default is that fish oil works and I am taking it, weak information that it doesn't work isn't enough. If the power calculation were giving us 40% reliable information, then the chance of learning I should drop fish oil is improved enough to make the experiment worthwhile (going from 20% to 40% switches the value from -$9 to +$23.8).
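Restated as a tiny function (a sketch added here, not code from the original text), the calculation makes it easy to see how sensitive the decision is to power:

```python
def voi(power, prior, benefit, cost):
    """Expected value of running the experiment:
    P(detecting a true effect) * P(effect is real) * value of acting on it, minus cost."""
    return power * prior * benefit - cost

print(voi(0.20, 0.30, 540, 41))  # -8.6, i.e. roughly the -$9 in the text
print(voi(0.40, 0.30, 540, 41))  # +23.8: better power flips the decision
```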
"In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!"
When it comes to coping with exam stress or meeting that looming deadline, the prospect of a "smart drug" that could help you focus, learn and think faster is very seductive. At least this is what current trends on university campuses suggest. Just as you might drink a cup of coffee to help you stay alert, an increasing number of students and academics are turning to prescription drugs to boost academic performance.
Hall, Irwin, Bowman, Frankenberger, & Jewett (2005) surveyed large public university undergraduates (N = 379): 13.7% reported lifetime use; 27% used during finals week; 12% used when partying; 15.4% used before tests; and 14% believed stimulants have a positive effect on academic achievement in the long run. Ratings: purchased stimulants from other students, M = 2.06 (SD = 1.19); were given stimulants by other students, M = 2.81 (SD = 1.40).
Nootropics are becoming increasingly popular as a tool for improving memory, information recall, and focus. Though research has not yet determined the mechanism for how nootropics work, it is clear that they provide significant cognitive benefits. Additionally, through a variety of hypothesized biological mechanisms, these compounds are thought to have the potential to improve vision.
I almost resigned myself to buying patches to cut (and let the nicotine evaporate) and hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day.
Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive.
Furthermore, there is no certain way to know whether you'll have an adverse reaction to a particular substance, even if it's natural. This risk is heightened when stacking multiple substances because substances can have synergistic effects, meaning one substance can heighten the effects of another. However, using nootropic stacks that are known to have been frequently used can reduce the chances of any negative side effects.
The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime.
The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs and unclassified research chemicals, some of which are being trialled in older people with fading cognition.
If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings.
Since my experiment had a number of flaws (non-blind, varying doses at varying times of day), I wound up doing a second better experiment using blind standardized smaller doses in the morning. The negative effect was much smaller, but there was still no mood/productivity benefit. Having used up my first batch of potassium citrate in these 2 experiments, I will not be ordering again since it clearly doesn't work for me.
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
Despite some positive findings, many studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition, but users believed that their performance was enhanced when compared to placebo.
That doesn't necessarily mean all smart drugs – now and in the future – will be harmless, however. The brain is complicated. In trying to upgrade it, you risk upsetting its intricate balance. "It's not just about more, it's about having to be exquisitely and exactly right. And that's very hard to do," says Arnstein. "What's good for one system may be bad for another system," adds Trevor Robbins, Professor of Cognitive Neuroscience at the University of Cambridge. "It's clear from the experimental literature that you can affect memory with pharmacological agents, but the problem is keeping them safe."
Finally, it's not clear that caffeine results in performance gains after long-term use; homeostasis/tolerance is a concern for all stimulants, but especially for caffeine. It is plausible that all caffeine consumption does for the long-term chronic user is restore performance to baseline. (Imagine someone waking up and drinking coffee, and their performance improves - well, so would the performance of a non-addict who is also slowly waking up!) See for example, James & Rogers 2005, Sigmon et al 2009, and Rogers et al 2010. A cross-section of thousands of participants in the Cambridge brain-training study found caffeine intake showed negligible effect sizes for mean and component scores (participants were not told to use caffeine, but the training was recreational & difficult, so one expects some difference).
But there would also be significant downsides. Amphetamines are structurally similar to crystal meth – a potent, highly addictive recreational drug which has ruined countless lives and can be fatal. Both Adderall and Ritalin are known to be addictive, and there are already numerous reports of workers who struggled to give them up. There are also side effects, such as nervousness, anxiety, insomnia, stomach pains, and even hair loss, among others.
The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: $\frac{1.2 - 0.93}{0.076} = 3.55$.)
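That back-of-the-envelope calculation is easy to check; the snippet below is a small R sketch reproducing the naive d from the reported group means and the experimental-group SD (a pooled SD of the two reported SDs gives essentially the same value):

```r
# Naive Cohen's d from the reported Kiecolt-Glaser et al 2011 summary stats
(1.2 - 0.93) / 0.076
#> [1] 3.552632

# Pooled-SD version, for comparison
(1.2 - 0.93) / sqrt((0.075^2 + 0.076^2) / 2)
#> [1] 3.576
```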
Imagine a pill you can take to speed up your thought processes, boost your memory, and make you more productive. If it sounds like the ultimate life hack, you're not alone. There are pills that promise that out there, but whether they work is complicated. Here are the most popular cognitive enhancers available, and what science actually says about them.
Competitors of importance in the smart pills market have been recorded and analyzed in MRFR's report. These market players include RF Co., Ltd., CapsoVision, Inc., JINSHAN Science & Technology, BDD Limited, MEDTRONIC, Check-Cap, PENTAX Medical, INTROMEDIC, Olympus Corporation, FUJIFILM Holdings Corporation, MEDISAFE, and Proteus Digital Health, Inc.
Eugeroics (armodafinil and modafinil) – are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep deprived or non-sleep deprived individuals.[28]
August 2017, 14(4): 843-880. doi: 10.3934/mbe.2017046
A two-patch prey-predator model with predator dispersal driven by the predation strength
Yun Kang 1,*, Sourav Kumar Sasmal 2, and Komi Messan 3
Sciences and Mathematics Faculty, College of Integrative Sciences and Arts, Arizona State University, Mesa, AZ 85212, USA
Agricultural and Ecological Research Unit, Indian Statistical Institute, 203, B. T. Road, Kolkata 700108, India
Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Mesa, AZ 85212, USA
* Corresponding author: Yun Kang
Received: August 30, 2016; Accepted: December 25, 2016; Published: February 2017
Fund Project: The first author is partially supported by NSF-DMS(1313312); NSF-IOS/DMS (1558127) and The James S. McDonnell Foundation 21st Century Science Initiative in Studying Complex Systems Scholar Award (UHC Scholar Award 220020472)
Foraging movements of predators play an important role in the population dynamics of prey-predator systems and have been considered as mechanisms that contribute to the spatial self-organization of prey and predators. In nature, there are many examples of prey-predator interactions where the prey is immobile while the predator disperses between patches non-randomly in response to factors such as stimuli following the encounter of a prey. In this work, we formulate a Rosenzweig-MacArthur two-patch prey-predator model with mobility only in the predator and the assumption that predators move towards patches with more concentrated prey-predator interactions. We provide a complete local and global analysis of our model. Our analytical results, combined with bifurcation diagrams, suggest that: (1) dispersal may stabilize or destabilize the coupled system; (2) dispersal may generate multiple interior equilibria that lead to rich bistable dynamics, or may destroy interior equilibria, leading to the extinction of the predator in one or both patches; (3) under certain conditions, large dispersal can promote the permanence of the system. In addition, we compare the dynamics of our model to those of the classic two-patch model to better understand how different dispersal strategies may have different impacts on the dynamics and spatial patterns.
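For orientation, the display below sketches one plausible form of such a two-patch Rosenzweig-MacArthur system; the exact predation-strength-driven dispersal terms of Model (4) are defined in the full text, so the coupling shown here (predator movement weighted by the local per-capita predation rate) is an illustrative assumption rather than the authors' precise formulation:

$$
\begin{aligned}
\frac{dx_i}{dt} &= r\,x_i\left(1-\frac{x_i}{K_i}\right)-\frac{a_i x_i y_i}{1+x_i},\\
\frac{dy_i}{dt} &= \frac{a_i x_i y_i}{1+x_i}-d_i y_i+\rho_i\left(\frac{a_i x_i}{1+x_i}\,y_j-\frac{a_j x_j}{1+x_j}\,y_i\right),
\qquad i,j\in\{1,2\},\; i\neq j,
\end{aligned}
$$

where the prey $x_i$ is immobile and the predators $y_i$ disperse at rates $\rho_i$ toward the patch with the stronger prey-predator interaction.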
Keywords: Rosenzweig-MacArthur prey-predator model, self-organization effects, dispersal, persistence, non-random foraging movements.
Mathematics Subject Classification: Primary: 37G35, 34C23; Secondary: 92D25, 92D40.
Citation: Yun Kang, Sourav Kumar Sasmal, Komi Messan. A two-patch prey-predator model with predator dispersal driven by the predation strength. Mathematical Biosciences & Engineering, 2017, 14 (4) : 843-880. doi: 10.3934/mbe.2017046
Figure 1. One- and two-dimensional bifurcation diagrams of Model (4) where $r=1.5$, $d_1=0.2$, $d_2=0.1$, $K_1=5$, $K_2=3$, $a_1=0.25$, and $a_2=0.15$. The left figure (1a) describes how the number of interior equilibria changes for different dispersal values $\rho_i, i=1,2$: black regions have three interior equilibria; red regions have two interior equilibria; blue regions have a unique interior equilibrium; yellow regions have no interior equilibrium and the predator in Patch 2 dies out; white regions have no interior equilibrium and both predators die out. The right figure (1b) describes the number of interior equilibria and their stability when $\rho_2=0.025$ and $\rho_1$ changes from 0 to 0.5, where the $y$-axis is the population size of the predator at Patch 1: blue represents the sink; green represents the saddle; and red represents the source
Figure 4. One-dimensional bifurcation diagrams of Model (4) where $r=1.5$, $d_1=0.2$, $d_2=0.1$, $K_1=5$, $K_2=3$ and $a_1=0.25$. The left figure (4a) describes the number of interior equilibria and their stability when $\rho_1=0.5$ and $\rho_2$ changes from 0 to 0.05. The right figure (4b) describes the number of interior equilibria and their stability when $\rho_1=0.6$ and $\rho_2$ changes from 0 to 1.8. In both figures, blue represents the sink; green represents the saddle; and red represents the source
Figure 2. One- and two-dimensional bifurcation diagrams of Model (4) where $r=1.5$, $d_1=0.2$, $d_2=0.1$, $K_1=5$, $K_2=3$, $a_1=0.25$ and $a_2=0.25$. The left figure (2a) describes how the number of interior equilibria changes for different dispersal values $\rho_i, i=1,2$: black regions have three interior equilibria; red regions have two interior equilibria; blue regions have a unique interior equilibrium; yellow regions have no interior equilibrium and the predator in Patch 2 dies out; white regions have no interior equilibrium and both predators die out. The right figure (2b) describes the number of interior equilibria and their stability when $\rho_1=1$ and $\rho_2$ changes from 0 to 2.5, where the $y$-axis is the population size of the predator at Patch 1: blue represents the sink; green represents the saddle; and red represents the source
Figure 3. One- and two-dimensional bifurcation diagrams of Model (4) where $r=1.5$, $d_1=0.2$, $d_2=0.1$, $K_1=5$, $K_2=3$, $a_1=0.35$ and $a_2=0.25$. The left figure (3a) describes how the number of interior equilibria changes for different dispersal values $\rho_i, i=1,2$: black regions have three interior equilibria; red regions have two interior equilibria; blue regions have a unique interior equilibrium; yellow regions have no interior equilibrium and the predator in Patch 2 dies out; white regions have no interior equilibrium and both predators die out. The right figure (3b) describes the number of interior equilibria and their stability when $\rho_1=1$ and $\rho_2$ changes from 0 to 7, where the $y$-axis is the population size of the predator at Patch 1: blue represents the sink; green represents the saddle; and red represents the source
Table 1. The comparison of boundary equilibria between Model (4) and Model (12). LAS refers to local asymptotic stability, and GAS refers to global asymptotic stability
Scenarios Model (4) whose dispersal is driven by the strength of prey-predator interactions Classical Model (12) whose dispersal is driven by the density of predators
$E_{K_10K_20}$ LAS and GAS if $\mu_i>K_i$ for both $i=1,2$. Dispersal has no effect on its stability. GAS if $\mu_i>K_i$ for both $i=1,2$; while LAS if $d_1+d_2+\rho_1+\rho_2>\frac{a_1K_1}{1+K_1}+\frac{a_2K_2}{1+K_2}$ and $\left[ d_1-\frac{a_1K_1}{1+K_1}\right]\left[1-\frac{a_2K_2}{(d_2+\rho_2)(1+K_2)}\right]+\frac{\rho_1}{d_2+\rho_2}\left[ d_2-\frac{a_2K_2}{1+K_2}\right]>0$. Large dispersal may be able to stabilize the equilibrium.
$E_{i2}^b$ ($y_i=0$) LAS if $\frac{K_i-1}{2}<\mu_i<K_i$ and one of the conditions sa, sb, sc, sd in Theorem 3.2 holds. Large dispersal has the potential to either stabilize or destabilize the equilibrium. Does not exist
$E_i^b$ ($x_i=0$) Does not exist LAS if $\frac{K_i-1}{2}<\widehat{\mu}_i<K_i$ and $r_j<a_j\hat{\nu}_j^i$. GAS if $\frac{K_i-1}{2}<\widehat{\mu}_i<K_i$ and $\frac{r_j(K_j+1)^2}{4a_jK_j}<\widehat{\nu}_i^j$. Large dispersal of predator in Patch $i$ will either destroy or destabilize the equilibrium, while large dispersal of predator in Patch $j$ may stabilize the equilibrium.
Table 2. The comparison of prey persistence and extinction between Model (4) and Model (12)
Persistence of prey Always persist; dispersal of predator has no effect. One or both prey persist if condition 4 in Theorem 4.1 holds. Small dispersal of predator in Patch $i$ and large dispersal of predator in Patch $j$ can help the persistence of prey in Patch $i$.
Extinction of prey Never extinct. $x_i$ goes extinct if $\frac{K_j-1}{2}<\widehat{\mu}_j<K_j$ and $\frac{r_i(K_i+1)^2}{4a_iK_i}<\widehat{\nu}_i^j$. Large dispersal of predator in Patch $i$ can promote the extinction of prey in Patch $i$.
Table 3. The comparison of predator persistence and extinction between Model (4) and Model (12)
Persistence of predator Predator at Patch $j$ is persistent if the conditions in Theorem 3.6 hold. Small dispersal of predator in Patch $j$ can help the persistence of predator in that patch. Dispersal is able to promote the persistence of predator when predator goes extinct in the single patch model. Predators in both patches have the same persistence conditions. They persist if $0<{\mu}_i<K_i$ for $i=1,2$. Dispersal seems to have no effect on the persistence of predator.
Extinction of predator Simulations suggest (see the yellow regions of Figure (1a) and Figure (3a)) that large dispersal of predator in Patch $i$ may lead to its own extinction. Predators in both patches have the same extinction conditions. They go extinct if ${\mu}_i>K_i$ or $\mu_i<0$ for $i=1,2$.
deGPS is a powerful tool for detecting differential expression in RNA-sequencing studies
Chen Chu 1,2,3, Zhaoben Fang 1, Xing Hua 1,2, Yaning Yang 1, Enguo Chen 4, Allen W. Cowley Jr. 2, Mingyu Liang 2, Pengyuan Liu 2,4,5 & Yan Lu 3,5
The advent of the NGS technologies has permitted profiling of whole-genome transcriptomes (i.e., RNA-Seq) at unprecedented speed and very low cost. RNA-Seq provides a far more precise measurement of transcript levels and their isoforms compared to other methods such as microarrays. A fundamental goal of RNA-Seq is to better identify expression changes between different biological or disease conditions. However, existing methods for detecting differential expression from RNA-Seq count data have not been comprehensively evaluated in large-scale RNA-Seq datasets. Many of them suffer from inflation of type I error and failure in controlling false discovery rate, especially in the presence of abnormally high sequence read counts in RNA-Seq experiments.
To address these challenges, we propose a powerful and robust tool, termed deGPS, for detecting differential expression in RNA-Seq data. This framework contains new normalization methods based on the generalized Poisson distribution for modeling sequence count data, followed by permutation-based differential expression tests. We systematically evaluated our new tool in simulated datasets from several large-scale TCGA RNA-Seq projects, unbiased benchmark data from the compcodeR package, and real RNA-Seq data from the developmental transcriptome of Drosophila. deGPS can precisely control type I error and false discovery rate for the detection of differential expression and is robust in the presence of abnormally high sequence read counts in RNA-Seq experiments.
Software implementing our deGPS was released within an R package with parallel computations (https://github.com/LL-LAB-MCW/deGPS). deGPS is a powerful and robust tool for data normalization and detecting differential expression in RNA-Seq experiments. Beyond RNA-Seq, deGPS has the potential to significantly enhance future data analysis efforts from many other high-throughput platforms such as ChIP-Seq, MBD-Seq and RIP-Seq.
Next-generation sequencing (NGS) technologies parallelize the sequencing processes and produce millions of short-read sequences concurrently. The advent of the NGS technologies has permitted profiling of whole-genome transcriptomes by RNA-Seq, at unprecedented speed and very low cost. RNA-Seq provides a far more precise measurement of transcript levels and their isoforms compared to other methods such as microarrays [1].
In RNA-Seq experiments, millions of short sequence reads are aligned to a reference genome and the number of reads that fall into a particular genomic region is recorded, as read count data. These regions of interest are annotated as microRNA (miRNA), small interfering RNAs (siRNA), long noncoding RNAs (lncRNA), or messenger RNA (mRNA) in the context of RNA-Seq experiment, here all referred to as transcripts. The read count is linearly related to the abundance of target transcripts [2]. A major objective of RNA-Seq is to better identify count-based expression changes between different biological or disease conditions. A major challenge in differential expression analysis in RNA-Seq data is the unexpectedly large variability of sequence count data among transcripts. The observed count data are integers ranging theoretically from zero to infinite. Furthermore, read counts observed at a particular transcript location are limited by the depth of sequencing coverage and are dependent on the relative abundance of other transcripts. This differs from microarray experiments, where probe intensities for measuring transcript expression are independent of each other [3].
These unique features contained in RNA-Seq data have motivated the development of a number of statistical methods for data normalization and differential expression (DE) detection. Typical approaches use the Poisson or negative binomial (NB) distribution to model count-based expression data. The Poisson distribution is commonly used to model counting processes. It has a single parameter, which is uniquely determined by its mean. An important property of the Poisson distribution is that the mean equals its variance. However, read counts show a large variability in RNA-Seq experiments, and their variance is often much larger than their mean [4]. This is called the overdispersion problem. When overdispersion exists, the resulting Poisson-based tests will lead to biased and misleading conclusions.
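As a quick illustration of the point, the sketch below (plain R, purely illustrative) contrasts a Poisson sample, where the variance tracks the mean, with an overdispersed NB sample of the same mean:

```r
set.seed(1)
x <- rpois(10000, lambda = 10)            # Poisson: variance ~ mean
c(mean(x), var(x))                        # ~ (10, 10)
y <- rnbinom(10000, mu = 10, size = 2)    # NB: variance = mu + mu^2/size
c(mean(y), var(y))                        # ~ (10, 60)
```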
To address the overdispersion problem, several statistical methods including DESeq [5] and edgeR [6] have been developed to model count data with NB distribution. The NB model adds an extra term to the variance of Poisson model to account for overdispersion. There are some technical differences between DESeq and edgeR for estimating the variance parameter of NB distribution. For instance, edgeR assumes that mean and variance are related and thus allows for estimating a common dispersion parameter throughout the whole experiment, followed by estimating trended and tagwise dispersions. DESeq allows for a flexible, mean-dependent location estimation of the dispersion.
Another alternative to the Poisson distribution is the generalized Poisson (GP) distribution [7]. The GP distribution introduces an extra parameter to the usual Poisson distribution. This extra parameter induces a loss of homogeneity in the stochastic counting processes modeled by the distribution. Both the NB and GP distributions can address the overdispersion problem and fix the bias resulting from using standard Poisson models. With the same first two moments, the GP distribution has a heavier tail than the NB distribution, while the NB distribution has larger mass at zero [7]. It is commonly observed that RNA-Seq data carry excessive zeroes or small read counts and are censored due to potential mapping errors. The GP distribution appears to fit such sequence count data better than the NB distribution on small values.
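For concreteness, a minimal R implementation of the GP probability mass function in Consul's parameterization (which matches the moments quoted here; `dgpois` is a hypothetical helper name, not a function exported by deGPS) might look as follows:

```r
# Consul's generalized Poisson: P(X = k) = theta*(theta + k*lambda)^(k-1) *
#   exp(-theta - k*lambda)/k!, with theta > 0 and 0 <= lambda < 1.
# Mean = theta/(1 - lambda); variance = theta/(1 - lambda)^3, so lambda > 0
# gives variance > mean (overdispersion); lambda = 0 recovers Poisson(theta).
dgpois <- function(k, theta, lambda) {
  theta * (theta + k * lambda)^(k - 1) * exp(-theta - k * lambda) / factorial(k)
}

k <- 0:20
round(dgpois(k, theta = 5.0, lambda = 0.0), 4)  # ordinary Poisson(5)
round(dgpois(k, theta = 2.5, lambda = 0.5), 4)  # same mean 5, variance 20
```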
There are also some other methods developed for finding DE in RNA-Seq studies, e.g., NBPSeq [8], TSPM [9], baySeq [10], EBSeq [11], NOISeq [12], SAMseq [13], ShrinkSeq [14] and PoissonSeq [15]. Many of them were comprehensively reviewed and evaluated for their performance for finding count-based DE in several recent studies [3,16].
Here, we propose a powerful normalization method based on the GP distribution for modeling sequence count data, followed by regular permutation-based DE tests of GP-normalized data. Through comprehensive simulations, our method shows improved results for DE detection, in terms of false discovery rate (FDR), sensitivity and specificity, in RNA-Seq experiments.
Overview of deGPS
To identify biologically important changes in RNA expression, we propose a more accurate and sensitive two-step method for analyzing sequence count data from RNA-Seq experiments (Fig. 1). Here, we implement our method in an R statistical package, termed "deGPS" (https://github.com/LL-LAB-MCW). To speed up permutation tests, deGPS also provides efficient parallel computation using multi-core processors. In Step 1, two different methods based on the GP distribution, namely GP-Quantile and GP-Theta, were developed for normalizing sequence count data. These two GP-based methods differ in parameter estimation and data transformation. Generally, GP distributions fit sequence count data better than NB distributions on transcripts over a wide range of relative abundance in RNA-Seq experiments (Fig. 2). Other commonly used normalization methods, including global, quantile [17], locally weighted least squares (Lowess) [18], and the trimmed mean of M-values method (TMM) [19], developed for high-throughput data such as microarrays, can also be adopted in deGPS. The latter normalization methods are based on either linear scaling or sample quantiles instead of modeling sequence count data. Normalization in Step 1 removes potential technical artifacts arising from unintended noise, while maintaining the true differences between biological samples.
Overview of deGPS for analyzing sequence count data in RNA-Seq
Modeling sequence read counts from RNA-Seq with the NB and GP distributions. a Read counts fitted by the NB and GP distributions and b QQ plots
After data normalization, DE detections are performed in Step 2. We employ the empirical distribution of T-statistics to determine the p-values of DE tests. To obtain empirical distributions, we first randomly shuffle the samples between groups, then calculate T-statistics in the permuted samples, and finally merge the T-statistics from all transcripts, without any averaging, into one pooled empirical distribution. The number of transcripts analyzed in a typical RNA-Seq experiment is often large, ranging from hundreds to tens of thousands. Using this sampling strategy, reliable empirical distributions can be obtained at small sample sizes. The permutation-based DE test in Step 2 is robust and powerful when the sample size is small.
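A minimal sketch of this pooling strategy is given below, assuming a normalized matrix with transcripts in rows and samples in columns; `perm_de_test` is an illustrative stand-in, not the exact deGPS internals, which additionally parallelize the permutations and handle degenerate transcripts:

```r
perm_de_test <- function(norm_mat, group, B = 100) {
  tstat <- function(g) {
    apply(norm_mat, 1, function(x) t.test(x[g == 1], x[g == 2])$statistic)
  }
  t_obs <- tstat(group)
  # Pool T-statistics from all transcripts and all B permutations into one
  # empirical null distribution, as described above.
  t_null <- unlist(lapply(seq_len(B), function(b) tstat(sample(group))))
  p <- vapply(t_obs, function(t0) mean(abs(t_null) >= abs(t0)), numeric(1))
  data.frame(t = t_obs, p = p, padj = p.adjust(p, method = "BH"))
}
```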
Simulation strategies
To evaluate the performance of deGPS, we conducted comprehensive simulations under a range of scenarios comparable to recent RNA-Seq studies. The advantages and disadvantages of each tool are difficult to elicit for a particular small data set. Therefore, we first simulated sequence count data from two large-scale RNA-Seq studies from The Cancer Genome Atlas (TCGA), including 491 miR-Seq libraries (Additional file 1) and 100 mRNA-Seq libraries in human lung tumor tissues (Additional file 2).
To estimate type I error under the null hypothesis, we randomly sampled the same number of subjects from our downloaded RNA-Seq datasets into two groups, each with 5 subjects. Type I error is defined as the proportion of transcripts with nominal p-values less than 0.05 from statistical tests under the null hypothesis. To estimate FDR and true positive rate (TPR) (i.e., statistical power) under alternative hypotheses, we first randomly generated two groups of samples and randomly chose a subset of transcripts. Subsequently, we made two types of changes in the selected transcripts to create DE between the two groups. In the "shift" transformation, we added varied quantities (with variations as one fifth of the added values) of read counts to the selected transcripts in either group. In the "scaling" and "shift" transformation, we multiplied the read counts of selected transcripts by varied quantities (with variations as one fifth of the multiplied values) after applying the "shift" transformation (Additional file 3). In our deGPS method, nominal p-values were adjusted by the Benjamini-Hochberg procedure [20]. FDR is defined as the proportion of transcripts identified by a statistical test with a significance level of 0.05 (i.e., adjusted p-values < 0.05) that are indeed false discoveries (i.e., non-DE transcripts); TPR is defined as the proportion of DE transcripts identified by a statistical test with a significance level of 0.05. Each simulation was replicated 1,000 times.
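A sketch of how such DE could be spiked into a count matrix is shown below; `spike_de` and its default magnitudes are illustrative choices, with only the noise convention (SD equal to one fifth of the added or multiplied value) taken from the description above:

```r
spike_de <- function(counts, de_idx, grp_cols, shift = 20, scale = 1.5) {
  m <- counts
  n <- length(de_idx) * length(grp_cols)
  # "shift": add noisy counts to the selected transcripts in one group
  add <- matrix(rnorm(n, mean = shift, sd = shift / 5), nrow = length(de_idx))
  m[de_idx, grp_cols] <- m[de_idx, grp_cols] + round(pmax(add, 0))
  # "scaling & shift": additionally multiply by a noisy factor
  fac <- matrix(rnorm(n, mean = scale, sd = scale / 5), nrow = length(de_idx))
  m[de_idx, grp_cols] <- round(m[de_idx, grp_cols] * pmax(fac, 0))
  m
}
```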
In the real data-driven simulations, sequence count data were normalized by the GP-Theta or GP-Quantile methods before applying our permutation-based DE tests. For the purpose of comparison, we also included in the simulation four other normalization methods (namely, Global, Lowess, Quantile, and TMM) that are not based on the GP distribution but are commonly used for high-throughput data such as those from microarrays [21,22]. Our DE tests were then applied to the normalized data generated by all of these methods (Additional file 4). We also chose four additional tools, edgeR (v3.6.7), DESeq (v1.16.0), DESeq2 (v1.4.5), and SAMseq (v2.0), which are currently among the top performers for differential analysis of sequence count data [16]. Prior to DE tests, edgeR performs TMM, relative log expression (RLE) or upper-quartile normalization within its own R package [19]. DESeq and its variant (DESeq2) use a similar RLE approach for data normalization by creating a virtual library that every sample is compared against [5]. Similarly, nominal p-values output from these R packages were adjusted by the Benjamini-Hochberg (BH) procedure for evaluating FDR and TPR [20]. Note that edgeR has multiple user-defined parameter settings, while both DESeq and DESeq2 were applied with their default settings. We present the results from the most commonly used TMM normalization with glmLRT (named edgeR1) and glmQLF tests (named edgeR2), which generally have better performance than the other settings (Additional file 3). SAMseq was implemented with its default parameter setting.
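As a reference point, the two edgeR pipelines compared here would, under the standard edgeR workflow, look roughly like the sketch below (function names follow the edgeR user guide; the exact arguments used in the paper's runs are an assumption):

```r
library(edgeR)
run_edger <- function(counts, group) {
  group <- factor(group)
  y <- DGEList(counts = counts, group = group)
  y <- calcNormFactors(y, method = "TMM")   # TMM normalization
  design <- model.matrix(~ group)
  y <- estimateDisp(y, design)
  fit1 <- glmFit(y, design)                 # edgeR1: likelihood ratio test
  lrt <- glmLRT(fit1, coef = 2)
  fit2 <- glmQLFit(y, design)               # edgeR2: quasi-likelihood F-test
  qlf <- glmQLFTest(fit2, coef = 2)
  list(edgeR1 = topTags(lrt, n = Inf), edgeR2 = topTags(qlf, n = Inf))
}
```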
In addition to the above data-driven simulation strategy, we also used compcodeR for benchmarking of DE analysis methods [23]. The compcodeR package provides functionality for simulating realistic RNA-Seq count datasets and an interface for implementing several commonly used statistical methods such as DESeq and edgeR for DE analysis. We set the proportion of upregulated transcripts to 50 %, set the sample size to 5, 8, and 10 subjects per group, and introduced 0, 0.5, 1.0 and 2.0 % probabilities of random outliers to model abnormally high counts in RNA-Seq studies. All other parameters were left at their defaults. compcodeR-based simulations were replicated 100 times in each scenario. Type I error, FDR, TPR and AUC were evaluated and compared by compcodeR's own functions. It is worth noting that compcodeR simulates sequence count data from NB distributions, which potentially favors DESeq and edgeR.
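One replicate under these settings might be generated roughly as follows (argument names and example sizes follow the compcodeR documentation as best recalled here, so treat the call as a hedged sketch rather than the paper's exact script):

```r
library(compcodeR)
sim <- generateSyntheticData(dataset = "sim_5spc", n.vars = 12500,
                             samples.per.cond = 5, n.diffexp = 1250,
                             fraction.upregulated = 0.5,     # 50 % upregulated
                             random.outlier.high.prob = 0.01, # 1 % outliers
                             output.file = "sim_5spc_repl1.rds")
```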
To evaluate different FDR adjustment methods, we introduced the R package fdrtool [24] to further compare the BH method [20] to area-based FDR (QVAL) and density-based FDR (LFDR) in compcodeR-based simulations. We took the permutation T-statistics, instead of p-values, from deGPS as the input of fdrtool and extracted the QVAL and LFDR from the output. Since sample sizes in RNA-Seq experiments are typically small, the estimated variances and their associated T-statistics used in permutation tests are probably highly variable. We thus compared the ordinary T-statistic to the regularized T-statistic in permutation tests for DE detection. The regularized T-statistic was implemented in the R package st [25].
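In code, this comparison amounts to something like the following sketch (reusing the `t_obs` vector from the earlier permutation sketch; fdrtool's interface is used as documented, but the exact call made in the paper is an assumption):

```r
library(fdrtool)
# Feed the observed T-statistics (not p-values) into fdrtool and read off the
# area-based (qval) and density-based (lfdr) FDR estimates.
res <- fdrtool(t_obs, statistic = "normal", plot = FALSE)
qval <- res$qval
lfdr <- res$lfdr
```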
Type I errors and false positive rates
We first evaluated the type I error and FDR of different methods in datasets simulated from two large-scale RNA-Seq studies, including 491 miRNA and 100 mRNA TCGA samples (Fig. 3). FDR is used for quantifying the rate of false discoveries when multiple hypothesis testing is concerned, especially in RNA-Seq experiments. Among these methods, only three (GP-Theta, TMM and DESeq) can precisely control both type I error and FDR in both miRNA and mRNA datasets. SAMseq has correct type I error and FDR in the miR-Seq dataset, but inflates type I error and FDR in the mRNA-Seq dataset. DESeq is the most conservative among these methods in terms of type I error and FDR. Its variant DESeq2 is less conservative but leads to higher FDR than expected. edgeR appears to be unable to control both type I error and FDR in all scenarios (Fig. 3 and Additional file 5).
Type I error and false discovery rate. Data were simulated from large-scale TCGA lung cancer sequencing studies, a miRNA and b mRNA. Two different types of data transformation, "shift" and "scaling & shift", were applied. Boxplots summarize the type I error and false discovery rate of different statistical methods for DE detection under a wide range of simulations. Methods in red font are those that do not have correct type I error and/or false discovery rate
Six of these methods (GP-Theta, GP-Quantile, Global, Lowess, Quantile, and TMM) use different strategies of data normalization, but use the same DE tests as deGPS. They yield very different type I error and FDR. Only GP-Theta and TMM are able to control both type I error and FDR at the desired level; whereas the other four methods have inflated type I error and/or FDR. These results suggest data normalization has substantial impacts on the performance of DE tests in terms of type I error and FDR.
True positive rates
Next, we evaluated the TPR (i.e., statistical power) of different methods in these RNA-Seq datasets (Fig. 4 and Additional file 6). The methods show different TPR among different RNA-Seq datasets. GP-Theta consistently produces the highest TPR among the methods that also have correct type I error and FDR in both miRNA and mRNA datasets. SAMseq has roughly similar TPR to deGPS, regardless of its type I error and FDR behavior. DESeq2 has improved TPR, but at the cost of inflated type I error and FDR, when compared with its original version DESeq. Generally, edgeR has high TPR but also exhibits high FDR.
True positive rate. a miRNA and b mRNA. True positive rate (TPR) can be interpreted as statistical power
We also observed that data normalization dramatically influences the statistical power of DE tests. Although the same DE tests were applied after data normalization, the six normalization methods result in varied TPR. Besides GP-Theta, TMM performs better than the other four methods.
Sensitivity and specificity
We compared deGPS with other methods in terms of sensitivity and specificity in these two RNA-Seq studies. We calculated the receiver operating characteristic (ROC) curve and area under the curve (AUC) of different methods to measure their sensitivity and specificity (Fig. 5 and Additional file 7). For clearer presentation, the AUC with false positive rate (FPR) less than 0.05 was calculated. In general, SAMseq, GP-Theta, DESeq2 and TMM are the top four performers for DE analysis of sequence count data according to the AUC metric. Among the methods that have correct type I error and FDR, GP-Theta performs the best as it has the largest AUC. DESeq2 often has higher AUC than its original version DESeq. Generally, DESeq and its variant DESeq2 perform better than edgeR in mRNA datasets in terms of AUC, whereas their performances are comparable in miRNA datasets. The normalization methods other than GP-Theta and TMM usually result in lower AUC.
Sensitivity and specificity. a miRNA and b mRNA. The AUC with false positive rate less than 0.05 was calculated. Boxplots summarize AUC values from a wide range of simulation settings. TPR, true positive rate; FPR, false positive rate
Benchmark data
We further compared deGPS with SAMseq, DESeq and edgeR using compcodeR, an R package for benchmarking DE analysis methods, in particular methods developed for analyzing RNA-Seq data [23]. In the analysis, deGPS with GP-Theta normalization, SAMseq, DESeq and DESeq2, and edgeR1 and edgeR2 were evaluated in benchmark data (Fig. 6 and Additional file 8).
Benchmark data from compcodeR. Type I error rate, FDR, TPR and AUC are evaluated under 0, 0.5, 1 and 2 % of outliers in RNA-Seq data. Sample size is 5 subjects per group
In compcodeR-based simulations, both deGPS and SAMseq consistently control both type I error and FDR and are robust against the occurrence of random outliers in RNA-Seq experiments, whereas DESeq2 and edgeR1 are not able to control type I error and/or FDR in most scenarios. DESeq is still conservative in terms of type I error, but its ability to control FDR varies among different levels of random outliers and sample sizes. edgeR2 generally performs much better than edgeR1 in terms of FDR control in compcodeR-based simulations. edgeR1 is based on a generalized linear model in which a regular likelihood ratio test (LRT) is performed, whereas edgeR2 replaces the Chi-square approximation to the LRT statistic with a quasi-likelihood F-test [26].
In terms of TPR and AUC, SAMseq performs slightly better than deGPS, but the difference between these two methods becomes small when increasing sample sizes from 5 to 8 subjects per group (Fig. 6 and Additional file 8). edgeR2 performs similarly to deGPS in RNA-Seq data without random outliers or with a very low proportion of outliers (i.e., <0.5 %). However, deGPS outperforms edgeR2 when random outliers increase up to 1 % in RNA-Seq data. Interestingly, deGPS achieves similar TPR under different levels of random outliers, suggesting it is a robust approach for DE analysis in the presence of abnormally high sequence read counts in particular transcripts in RNA-Seq experiments. It should also be noted that both DESeq and edgeR model sequence count data with the NB distribution, whereas deGPS is based on the GP distribution. Therefore, compcodeR benchmark analysis, which simulates sequence count data from NB distributions, may favor DESeq and edgeR and thus overestimate their performance as compared with deGPS in real RNA-Seq data.
We also evaluated the effects of different FDR adjustment methods on the performance of deGPS. The median FDR of QVAL or LFDR is a little smaller than that of the BH method, although the latter can precisely control both FDR and type I error. QVAL and LFDR do not always outperform the BH method across repeated simulations in each scenario, as they have a much bigger interquartile range in the boxplots (Additional file 9). Why the performances of these three FDR adjustment methods differ from case to case may be worth further investigation. Finally, we compared the ordinary T-statistic with the regularized T-statistic in permutation tests for DE detection. The simulation results showed that, based on deGPS-transformed data, the ordinary T-statistic has slightly higher TPR and is generally comparable with the regularized T-statistic in terms of type I error and AUC (Additional file 9).
Real data analysis of the developmental transcriptome of Drosophila
In addition to simulated datasets, we also analyzed the developmental transcriptome of Drosophila melanogaster (Fig. 7 and Additional file 10) [27]. We compared six different methods (i.e., deGPS, SAMseq, DESeq, DESeq2, edgeR1 and edgeR2) to identify genes that were differentially expressed between four developmental stages of Drosophila: early embryo (0 to 12 hours), late embryo (13 to 24 hours), larval and adult stages. Each stage contains 6 RNA-Seq samples. The RNA-Seq read count data in 14,869 genes from these 24 samples were downloaded from http://bowtie-bio.sourceforge.net/recount [28]. Prior to the analysis, we filtered out genes without any read counts in all samples from any two compared groups. Similar to the above simulations, the BH procedure was used to control FDR [20], and all genes found to be DE at an FDR threshold of 0.05 were considered significantly DE. As expected, there were a large number of developmentally regulated DE genes in early embryo development, compared with adult Drosophila. Generally, edgeR1 and DESeq2 identified more DE genes than the other methods, perhaps due to their failure in controlling FDR, as observed in our simulations. DESeq is the most conservative and identified the smallest number of DE genes among these methods. The edgeR methods show extremely high concordance; all of the DE genes identified by edgeR2 were also identified by edgeR1. Similar observations hold for the DESeq methods; about 99 % of the DE genes identified by DESeq were identified by DESeq2. Approximately 70, 70 and 87 % of the DE genes found by deGPS overlap with those of SAMseq, edgeR1 and DESeq2, respectively.
Analysis of the developmental transcriptome of Drosophila melanogaster. Four developmental stages (early embryo, late embryo, larval and adult) were analyzed (Graveley, et al., 2011). The numbers of genes differentially expressed between two adjacent stages are presented at an FDR threshold of 0.05. The "overlap proportion" is calculated by dividing the number of overlapping genes by the number of DE genes in each column
Next, we evaluated the ability of the above methods to control type I error and false positive numbers. We randomly assigned equal numbers of subjects (without replacement) from the same developmental stages into two groups of 5 subjects each. Each group contained an equal number of subjects from the same developmental stages and thus had similar gene expression profiles. Therefore, we expected no genes to be truly DE when comparing these two synthetic groups. Nevertheless, among 100 simulations, these methods identified DE genes ranging from 16 to 277 false positives per genome scan. deGPS found the lowest number of false positives, whereas edgeR1 found the highest. edgeR1 inflates type I error, whereas the other four methods can control type I error at the desired level (Additional file 11).
Discussion
In this study, we developed a novel tool, deGPS, for data normalization and DE detection in RNA-Seq studies. deGPS shows improved results for analyzing count-based expression data in most cases through comprehensive simulations. Among the 11 methods evaluated in our simulations, it is the only one that can precisely control type I error and FDR in all scenarios while maintaining high statistical power for DE detection. The good performance of deGPS results from two significant methodological improvements. First, the newly proposed normalization methods model sequence count data using the GP distribution. Data normalization has a substantial impact on the performance of statistical methods for DE analysis of sequence count data. Among the six normalization methods evaluated in our study, GP-Theta achieved the highest power and AUC while controlling type I error and FDR in both the real data-driven simulations and the compcodeR-based benchmark data. One possible reason why GP-Theta outperforms the other normalization methods is that it gives a definite estimate of how much the sample mean should be shrunk to alleviate the impact of overdispersion. Second, the regular permutation-based DE tests in deGPS are robust and powerful. Though the data may be skewed, simulations have shown that it is appropriate to pool T-statistics from all transcripts to form one whole empirical distribution. Using this strategy, reliable empirical distributions can be obtained at small sample sizes where many statistical models are prone to inflated type I error and/or FDR. Appropriate use of FDR adjustment methods and regularized T-statistics in permutations may further improve the performance of deGPS. This requires further investigation in future studies.
We compared deGPS with edgeR, DESeq and SAMseq, which are currently among the top performers for DE analysis of sequence count data [16]. There are methodological distinctions between deGPS and edgeR/DESeq: deGPS assumes a GP distribution for the data of a single library across all genes, whereas edgeR and DESeq assume an NB distribution for the data of a single gene across non-differentially expressed libraries. Our simulations showed that DESeq is relatively conservative in terms of type I error and is prone to inflated FDR when outliers are introduced into RNA-Seq data. Its variant DESeq2 is less conservative and has increased power, but at the cost of poor FDR control. edgeR1 was unable to control type I error and FDR in either real data-driven simulations or compcodeR-based benchmark data. The edgeR1 method uses LRT statistics approximated by a chi-square distribution, whereas edgeR2 replaces the chi-square approximation to the LRT statistic with a quasi-likelihood F-test [26]. As a result, edgeR2 has better FDR control than edgeR1 in most cases. SAMseq is a nonparametric method for DE detection; it performs reasonably well in compcodeR-based simulations, but inflates type I error and FDR in real data-driven simulations from mRNA datasets.
It is not uncommon for some extremely abundant transcripts (e.g., pseudogenes, ribosomal RNAs, mitochondrial RNAs, contaminant mRNAs and unannotated RNAs) to be present in RNA-Seq data, as seen, for example, in the above Drosophila RNA-Seq data (Additional file 12) [27]. These abnormally high read counts (i.e., outliers) will lead to increased numbers of falsely declared DE genes if standard normalization is applied. For example, DESeq, DESeq2 and edgeR1 inflate FDR and lose TPR (i.e., power) as the proportion of outliers increases up to 0.5 % of the RNA-Seq data. Although edgeR2 maintains correct FDR, its TPR decreases dramatically as outliers increase. Interestingly, deGPS consistently controls both type I error and FDR and maintains similar TPR under different levels of random outliers, suggesting that it is a robust approach for DE analysis in the presence of abnormally high sequence read counts in RNA-Seq samples. In the GP-Theta method, the normalization factor is estimated as the sample mean multiplied by \( \left(1-\widehat{\lambda}\right) \), where \( \widehat{\lambda} \) is an overdispersion parameter accounting for unexpectedly high variability in sequence count data. We observed large variability of \( 1/\left(1-\widehat{\lambda}\right) \) across RNA-Seq samples in the analysis of two large-scale TCGA datasets (Additional file 13), underscoring the need for a shrinkage normalization strategy in such overdispersed count data. This shrinkage strategy helps maintain the statistical power and robustness of DE detection.
deGPS has several limitations. First, permutation traversing all group relabelings becomes computationally time-consuming as the sample size increases, although a maximum number of permutations can be specified to avoid this problem. To partially alleviate the computational burden, deGPS provides efficient parallel computation on multi-core processors to speed up permutation tests. With parallel computation, the runtime of deGPS for RNA-Seq experiments with fewer than 10 subjects per group is comparable to that of edgeR and DESeq, which are currently among the fastest and most commonly used R packages for DE analysis of RNA-Seq data (Additional file 14). For example, deGPS takes about 3 min to analyze the Drosophila developmental transcriptome on a Dell PowerEdge R620 with dual-socket 8-core Intel Xeon E5-2660 processors at 2.20 GHz. Although sample size affects the runtime of deGPS, it is worth noting that, compared with other methods, the permutation-based DE detection implemented in deGPS is robust across different sample sizes. Second, deGPS cannot handle complex experimental designs; only two-group differential tests are currently supported. However, our GP-Theta normalization method can potentially be adopted in complex RNA-Seq designs or combined with statistics other than the t statistic. Third, it may be inappropriate to compare two groups in which the library sizes of all samples in one group are consistently several times larger than in the other. Under such rare circumstances, the shrinkage of the sample mean is heavy because the read counts are severely overdispersed; as a result, the normalization factors may not increase as fast as the library sizes do. The within-group variation may then be too small to absorb the large library-size differences, so the empirical distribution of t statistics may be biased. In that case, TMM normalization is suggested when applying the deGPS package. Fourth, for mRNA data our method is currently applicable to gene-level read counts, while its application to position-level read counts requires further investigation.
In summary, we developed a powerful and robust tool for differential analysis of count-based expression data from RNA-Seq. We implemented our methods in an R package, deGPS, with parallel computation. deGPS performs better than existing methods in most cases and is robust to data outliers in RNA-Seq experiments. Beyond RNA-Seq, deGPS has the potential to significantly enhance future data analysis for many other high-throughput platforms such as ChIP-Seq, MBD-Seq and RIP-Seq [29].
GP distribution
Sequence count data, X, observed in a RNA-Seq experiment can be modeled with a GP distribution with parameters θ and λ:
$$ \Pr \left(X=x\right)=\begin{cases}\dfrac{\theta {\left(\theta + x\lambda \right)}^{x-1}\,{e}^{-\theta - x\lambda}}{x!}, & x=0,1,2,\dots \\[4pt] 0, & \text{for } x>q \text{ if } \lambda <0\end{cases} $$
where θ > 0, \( \max \left(-1,-\frac{\theta}{q}\right)\le \lambda \le 1 \), and q (≥ 4) is the largest positive integer for which θ + qλ > 0 when λ < 0. The mean of X is θ(1 − λ)−1 and the variance of X is θ(1 − λ)−3. When λ = 0, the GP reduces to a Poisson. The parameter θ is the mean of the underlying Poisson process, and λ is the average rate of effort that the subjects make to deviate from that process: a positive λ indicates an effort to accelerate the natural process, while a negative λ denotes an effort to retard it [30]. In the context of RNA-Seq, θ represents the average number of reads mapped to transcripts in a sample; it is correlated with the depth of sequence coverage and the total reads mapped to the reference genome in that sample. λ represents bias introduced during sample preparation and sequencing [31]. Note that the fitted λs are always far from zero, which indicates that sequence count data are highly overdispersed in RNA-Seq experiments. It is worth noting that deGPS models gene- or transcript-level sequence counts within the same sample; this is distinct from GPseq, which instead models position-level counts [31].
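deGPS itself is an R package; as an illustration only, the GP pmf above can be written in a few lines of Python to check the stated mean and the effect of λ (the parameter values below are arbitrary examples):

```python
import numpy as np
from scipy.special import gammaln

def gp_logpmf(x, theta, lam):
    """log Pr(X=x) = log[ theta*(theta + x*lam)**(x-1) * exp(-theta - x*lam) / x! ];
    lam = 0 recovers the ordinary Poisson."""
    x = np.asarray(x, dtype=float)
    return (np.log(theta) + (x - 1) * np.log(theta + x * lam)
            - theta - x * lam - gammaln(x + 1))

theta, lam = 5.0, 0.4                    # arbitrary example values
x = np.arange(0, 60)
p = np.exp(gp_logpmf(x, theta, lam))
print(p.sum())                           # ~1 over a wide enough support
print((x * p).sum(), theta / (1 - lam))  # empirical mean vs theta*(1-lam)**-1
```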
The maximum likelihood estimate (MLE) of λ in the GP model (1) can be obtained by solving the following equation:
$$ {\sum}_{i=1}^{n}\frac{{X}_{i}\left({X}_{i}-1\right)}{\overline{X}+\left({X}_{i}-\overline{X}\right)\lambda}-n\overline{X}=0 $$
where \( \overline{X}={\sum}_{i=1}^{n}{X}_{i}/n \) is the sample mean of reads mapped to transcripts. The MLE of θ is then \( \overline{X}\left(1-\widehat{\lambda}\right) \).
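A minimal numerical sketch of this estimator (in Python, assuming overdispersed counts so that the score function changes sign on (0, 1)):

```python
import numpy as np
from scipy.optimize import brentq

def gp_mle(x):
    """Solve sum_i X_i(X_i - 1)/(Xbar + (X_i - Xbar)*lam) - n*Xbar = 0 for
    lam, then return (theta_hat, lam_hat) with theta_hat = Xbar*(1 - lam_hat)."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()

    def score(lam):
        return np.sum(x * (x - 1) / (xbar + (x - xbar) * lam)) - n * xbar

    lam_hat = brentq(score, 1e-6, 1 - 1e-6)   # overdispersed data: root in (0, 1)
    return xbar * (1 - lam_hat), lam_hat

# Sanity check on simulated overdispersed counts
rng = np.random.default_rng(1)
theta_hat, lam_hat = gp_mle(rng.negative_binomial(5, 0.3, size=5000))
print(theta_hat, lam_hat)
```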
Normalization methods
We propose two new normalization methods for sequence count data based on the above GP distribution: GP-Quantile and GP-Theta. The GP-Quantile method fits a GP distribution to every sample in the data and maps every read count to the corresponding probability, P(X < x), of the fitted GP. Although this normalizes the read counts of every sample to values between 0 and 1, some information in the data may be lost during the transformation.
In the GP-Theta method, read counts from each sample are divided by the parameter θ of the fitted GP distribution. The MLE of θ is \( \overline{\mathrm{X}}\left(1-\widehat{\uplambda}\right) \), where \( \widehat{\uplambda} \) is the MLE of the over-dispersion parameter λ and \( \overline{\mathrm{X}} \) is the sample mean of sequence reads mapped to transcripts. This MLE \( \widehat{\uptheta} \) can be treated as a shrunken value of \( \overline{\mathrm{X}} \). A major purpose of the GP-Theta method is to remove sample bias due to the depth of sequence coverage in RNA-Seq experiments. Similar ideas were previously used for the normalization of RNA-Seq data, such as the trimmed mean method (TMM) [19].
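The two normalizations can then be sketched as below (a Python illustration building on `gp_mle` and `gp_logpmf` from the sketches above; the deGPS package implements these in R):

```python
import numpy as np

def gp_theta_normalize(count_matrix):
    """GP-Theta: divide each sample (column) by its fitted theta_hat,
    i.e., by the shrunken sample mean Xbar*(1 - lam_hat)."""
    norm = np.empty_like(count_matrix, dtype=float)
    for j in range(count_matrix.shape[1]):
        theta_hat, _ = gp_mle(count_matrix[:, j])
        norm[:, j] = count_matrix[:, j] / theta_hat
    return norm

def gp_quantile_normalize(count_matrix):
    """GP-Quantile: map each read count x to P(X < x) under the sample's
    fitted GP, so every sample lies in [0, 1]."""
    norm = np.empty_like(count_matrix, dtype=float)
    for j in range(count_matrix.shape[1]):
        col = count_matrix[:, j].astype(int)
        theta_hat, lam_hat = gp_mle(col)
        grid = np.arange(0, col.max() + 1)
        cdf = np.cumsum(np.exp(gp_logpmf(grid, theta_hat, lam_hat)))
        # P(X < x) = P(X <= x-1); define P(X < 0) = 0
        norm[:, j] = np.where(col > 0, cdf[np.maximum(col - 1, 0)], 0.0)
    return norm
```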
Differential expression tests
After data normalization, our DE test uses the empirical distribution of a T-test statistic. To eliminate potential technical noise arising from RNA-Seq experiments, T-test statistics are calculated after normalization:
$$ \mathrm{T.stat}\left(X^{\prime},Y^{\prime}\right)=\frac{\mathrm{Mean}\left(X^{\prime}\right)-\mathrm{Mean}\left(Y^{\prime}\right)}{\sqrt{\mathrm{Var}\left(X^{\prime}\right)/{N}_{x^{\prime}}+\mathrm{Var}\left(Y^{\prime}\right)/{N}_{y^{\prime}}}} $$
where "Var" is the variance function and "Mean" is the mean value of read count of a transcript in the sample. X' and Y' are GP-transformed read counts from two groups of samples; Nx' and Ny' are sample sizes of the two groups.
We propose to use the empirical distribution of T statistics to determine the p-values of DE tests. We generate empirical distributions by randomly shuffling the samples into two groups and calculating T-test statistics for each transcript in the permuted samples. Because transcripts are abundant, our permutation strategy can produce reliable empirical distributions even with small sample sizes (e.g., two samples per group), which are still common in RNA-Seq experiments. The p-values are then calculated from the empirical distribution of T statistics. However, the pooled T statistics are a mixture of a "null" group corresponding to non-DE genes and an "alternative" group corresponding to DE genes; we therefore also include fdrtool [24] to adjust p-values in our R package.
The estimated variances, and hence the T statistics used in permutation tests, are likely to be highly variable because of the typically small sample sizes in RNA-Seq experiments. Therefore, in addition to the ordinary T statistics above, regularized T statistics, implemented in the R package st, are also included in deGPS.
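Putting the pieces together, the permutation step can be sketched as follows (a Python illustration using `t_stat` above; the exhaustive enumeration is only practical for small groups, and transcripts with zero variance would need separate handling):

```python
import numpy as np
from itertools import combinations

def permutation_pvalues(x_norm, y_norm):
    """Pool T statistics from all transcripts over all group relabelings
    into one empirical null, then score the observed statistics against it
    (two-sided)."""
    data = np.hstack([x_norm, y_norm])
    nx = x_norm.shape[1]
    n = data.shape[1]
    null = []
    for g1 in combinations(range(n), nx):          # traverse all relabelings
        g2 = [j for j in range(n) if j not in g1]
        null.append(t_stat(data[:, list(g1)], data[:, g2]))
    null = np.sort(np.abs(np.concatenate(null)))
    obs = np.abs(t_stat(x_norm, y_norm))
    # two-sided empirical p-value: share of pooled |T| at least as extreme
    ge = null.size - np.searchsorted(null, obs, side="left")
    return (1 + ge) / (1 + null.size)
```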
The real data analysis of the developmental transcriptome of Drosophila can be found in our released R package deGPS (https://github.com/LL-LAB-MCW). compcodeR-based simulations can be repeated with the R code available in Additional file 15.
The data sets supporting the results of this article are included within the article and its additional files.
AUC:
Area under curve
ChIP-Seq:
Chromatin immunoprecipitation sequencing
DE:
Differential expression
FDR:
False discovery rate
FPR:
False positive rate
GP:
Generalized Poisson
lncRNA:
Long noncoding RNAs
Lowess:
Locally weighted least squares
LRT:
Likelihood ratio test
MBD-Seq:
Methyl-CpG binding domain protein-enriched genome sequencing
miRNA:
MicroRNA
mRNA:
Messenger RNA
NB:
Negative binomial
NGS:
Next-generation sequencing
RIP-seq:
RNA-immunoprecipitation sequencing
RNA-Seq:
RNA sequencing
ROC:
Receiver operating characteristic curve
siRNA:
Small interfering RNAs
TCGA:
The Cancer Genome Atlas
TMM:
Trimmed mean method
TPR:
True positive rate
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009;10(1):57–63.
Mortazavi A, Williams BA, McCue K, Schaeffer L, Wold B. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat Methods. 2008;5(7):621–8.
Rapaport F, Khanin R, Liang Y, Pirun M, Krek A, Zumbo P, et al. Comprehensive evaluation of differential gene expression analysis methods for RNA-seq data. Genome Biol. 2013;14(9):R95.
Nagalakshmi U, Wang Z, Waern K, Shou C, Raha D, Gerstein M, et al. The transcriptional landscape of the yeast genome defined by RNA sequencing. Science. 2008;320(5881):1344–9.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11(10):R106.
Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010;26(1):139–40.
Joe H, Zhu R. Generalized Poisson distribution: the property of mixture of Poisson and comparison with negative binomial distribution. Biom J. 2005;47(2):219–29.
Di Y, Schafer DW, Cumbie JS, Chang JH. The NBP Negative Binomial Model for Assessing Differential Gene Expression from RNA-Seq. Stat Appl Genet Mol Biol. 2011;10(1):1–28.
Auer PL, Doerge RW. A Two-Stage Poisson Model for Testing RNA-Seq Data. Stat Appl Genet Mol Biol. 2011;10(1):1–26.
Hardcastle TJ, Kelly KA. baySeq: empirical Bayesian methods for identifying differential expression in sequence count data. BMC Bioinformatics. 2010;11:422.
Leng N, Dawson J, Thomson J, Ruotti V, Rissman A, Smits B, et al. EBSeq: an empirical bayes hierarchical model for inference in RNA-seq experiments. University of Wisconsin: Tech. Rep. 226, Department of Biostatistics and Medical Informatics; 2012.
Tarazona S, Garcia-Alcalde F, Dopazo J, Ferrer A, Conesa A. Differential expression in RNA-seq: a matter of depth. Genome Res. 2011;21:2213–23.
Li J, Tibshirani R. Finding consistent patterns: a nonparametric approach for identifying differential expression in RNA-seq data. Stat Methods Med Res. 2011;22(5):519–36.
Van de Wiel M, Leday G, Pardo L, Rue H, Van der Vaart A, Van Wieringen W. Bayesian analysis of RNA sequencing data by estimating multiple shrinkage priors. Biostatistics. 2012;14:113–28.
Li J, Witten DM, Johnstone IM, Tibshirani R. Normalization, testing, and false discovery rate estimation for RNA-sequencing data. Biostatistics. 2012;13(3):523–38.
Soneson C, Delorenzi M. A comparison of methods for differential expression analysis of RNA-seq data. BMC Bioinformatics. 2013;14(1):91.
Affymetrix. Statistical Algorithms Description Document; 2002. http://media.affymetrix.com/support/technical/whitepapers/sadd_whitepaper.pdf
Yang YH, Dudoit S, Luu P, Lin DM, Peng V, Ngai J, et al. Normalization for cDNA microarray data: a robust composite method addressing single and multiple slide systematic variation. Nucleic Acids Res. 2002;30(4), e15.
Robinson MD, Oshlack A. A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 2010;11(3):R25.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Methodol. 1995;57(1):289–300.
Bolstad BM, Irizarry RA, Astrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003;19(2):185–93.
Zien AAT, Zimmer R, Lengauer T. Centralization: a new method for the normalization of gene expression data. Bioinformatics. 2001;17 Suppl 1:S323–331.
Soneson C. compcodeR-an R package for benchmarking differential expression methods for RNA-seq data. Bioinformatics. 2014;30(17):2517–8.
Strimmer K. A unified approach to false discovery rate estimation. BMC Bioinformatics. 2008;9:303.
Zuber V, Strimmer K. Gene ranking and biomarker discovery under correlation. Bioinformatics. 2009;25(20):2700–7.
Lund SP, Nettleton D, McCarthy DJ, Smyth GK. Detecting differential expression in RNA-sequence data using quasi-likelihood with shrunken dispersion estimates. Stat Appl Genet Mol Biol. 2012;11(5)
Graveley BR, Brooks AN, Carlson JW, Duff MO, Landolin JM, Yang L, et al. The developmental transcriptome of Drosophila melanogaster. Nature. 2011;471(7339):473–9.
Frazee AC, Langmead B, Leek JT. ReCount: a multi-experiment resource of analysis-ready RNA-seq gene count datasets. BMC Bioinformatics. 2011;12:449.
Metzker ML. Sequencing technologies - the next generation. Nat Rev Genet. 2010;11(1):31–46.
Consul PC. Generalized Poisson Distributions: Properties and Applications. New York: Marcel Dekker Incorporated; 1989.
Srivastava S, Chen L. A two-parameter generalized Poisson model to improve the analysis of RNA-seq data. Nucleic Acids Res. 2010;38(17), e170.
This work was supported in part by start-up funds from the Advancing a Healthier Wisconsin Fund (FP00001701 and FP00001703), a Louisiana Hope Research Grant provided by Free to Breathe, the Women Health Research Program, the National Natural Science Foundation of China (No. 81372514, 81472420 and 31401125), and the Fundamental Research Funds for the Central Universities of China. We thank Haris G. Vikis for reading and commenting on the manuscript and Liping Li for her help in generating read counts for the study.
Department of Statistics and Finance, University of Science and Technology of China, Hefei, Anhui, 230026, China
Chen Chu, Zhaoben Fang, Xing Hua & Yaning Yang
Department of Physiology, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
Chen Chu, Xing Hua, Allen W. Cowley Jr., Mingyu Liang & Pengyuan Liu
Department of Gynecologic Oncology, The Affiliated Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310029, China
Chen Chu & Yan Lu
Division of Respiratory Medicine, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310058, China
Enguo Chen & Pengyuan Liu
Institute for Translational Medicine, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310029, China
Pengyuan Liu & Yan Lu
Chen Chu
Zhaoben Fang
Xing Hua
Yaning Yang
Enguo Chen
Allen W. Cowley Jr.
Mingyu Liang
Pengyuan Liu
Yan Lu
Correspondence to Pengyuan Liu or Yan Lu.
PL and YL designed research; CC performed research; CC, ZF, XH, YY, EC, AWC, ML, PL and YL analyzed data; CC, YL and PL wrote the paper. All authors read and approved the final manuscript.
Additional file 1:
-TCGA samples used in microRNA-Seq simulations.
Additional file 2:
-TCGA samples used in mRNA-Seq simulations.
Additional file 3: Document S1.
-Simulation settings.
Additional file 4:
-A flowchart for method comparisons in simulations.
Additional file 5:
-Type I error and false discovery rate of edgeR with different parameter settings. Different parameter settings in edgeR were defined in Document S1. Methods in red font are those that inflate type I error and/or false discovery rate.
Additional file 6:
-True positive rate of edgeR with different parameter settings. Methods in red font are those that inflate type I error and/or false discovery rate.
Additional file 7:
-AUC of edgeR with different parameter settings. Methods in red font are those that inflate type I error and/or false discovery rate.
Additional file 8:
-Simulation results from compcodeR. Sample size was set to 8 subjects per group. Note that edgeR2 does not control type I error correctly as sample size increases, although its FDR is at the desired level.
Additional file 9:
-Effects of different FDR adjustment methods and T statistics on the performance of deGPS. Sample sizes were set to (A) 5 and (B) 8 subjects per group. The default FDR adjustment of deGPS is the BH method; two additional FDR adjustments, QVAL and LFDR, were evaluated. The default statistic of deGPS is the ordinary T statistic in permutations; the regularized T statistic (st) was also evaluated.
Additional file 10: Figure S7.
-Genes differentially expressed between any two non-adjacent developmental stages of Drosophila melanogaster
Additional file 11:
-Average false positive number and type I error when comparing two groups randomly and equally sampled from different developmental stages of Drosophila melanogaster.
Additional file 12:
-Outliers in Drosophila data. Blue points represent the logarithm of the difference between quantiles and five times the sample mean; red points represent the logarithm of the difference between quantiles and ten times the sample mean. compcodeR generates outliers by multiplying read counts by 5-10. The figure shows that the top 2 % of read counts are larger than 5 times the sample mean and the top 1 % are larger than 10 times the sample mean. Though the sample mean may not perfectly represent the read counts randomly generated by compcodeR, we can conclude that up to 1-2 % random outliers are not rare in real data.
Additional file 13: Figure S10.
-Overdispersion of sequence count data in RNA-Seq. (A) Histogram of 1/(1-λ ̂) (in logarithm scale), and (B) Sample variance is far away from its mean. Sequence count data were fitted with GP distribution for each sample from TCGA. 1/(1-λ ̂) measures the extent of the departure of the data from Poisson distribution.
Additional file 14: Table S3.
-Running times of different R packages for analyzing the development transcriptome of Drosophila.
Additional file 15:
R codes-Repeat compcodeR-based simulations.
Chu, C., Fang, Z., Hua, X. et al. deGPS is a powerful tool for detecting differential expression in RNA-sequencing studies. BMC Genomics 16, 455 (2015). https://doi.org/10.1186/s12864-015-1676-0
January 2016, 36(1): 323-344. doi: 10.3934/dcds.2016.36.323
Intermediate $\beta$-shifts of finite type
Bing Li 1, Tuomas Sahlsten 2 and Tony Samuel 3
Department of Mathematics, South China University of Technology, Guangzhou, 510641, China
Einstein Institute of Mathematics, The Hebrew University of Jerusalem, Givat Ram, Jerusalem 91904, Israel
Fachbereich 3 Mathematik, Universität Bremen, 28359 Bremen, Germany
Received: March 2014; Revised: March 2015; Published: June 2015
An aim of this article is to highlight dynamical differences between the greedy, and hence the lazy, $\beta$-shift (transformation) and an intermediate $\beta$-shift (transformation), for a fixed $\beta \in (1, 2)$. Specifically, a classification in terms of the kneading invariants of the linear maps $T_{\beta,\alpha} \colon x \mapsto \beta x + \alpha \bmod 1$ for which the corresponding intermediate $\beta$-shift is of finite type is given. This characterisation is then employed to construct a class of pairs $(\beta,\alpha)$ such that the intermediate $\beta$-shift associated with $T_{\beta, \alpha}$ is a subshift of finite type. It is also proved that these maps $T_{\beta,\alpha}$ are not transitive. This is in contrast to the situation for the corresponding greedy and lazy $\beta$-shifts and $\beta$-transformations, for which both of the two properties do not hold.
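As a rough illustration of the objects studied here, the following Python sketch iterates $T_{\beta,\alpha}$ and records the left and right itineraries of the critical point, i.e., floating-point approximations of the kneading invariants (the parameter values are arbitrary examples with $\beta \in (1,2)$, and exact arithmetic would be needed for a genuine classification):

```python
def kneading_sequences(beta, alpha, n=30):
    """Itineraries (0/1 codings) of the critical point p = (1 - alpha)/beta
    under T(x) = beta*x + alpha mod 1, approached from the left and right."""
    T = lambda x: (beta * x + alpha) % 1.0
    p = (1.0 - alpha) / beta            # discontinuity of the symbolic coding

    def itinerary(x):
        seq = []
        for _ in range(n):
            seq.append(0 if x < p else 1)
            x = T(x)
        return seq

    eps = 1e-12                         # floating-point stand-in for one-sided limits
    return itinerary(p - eps), itinerary(p + eps)

print(kneading_sequences(beta=(1 + 5 ** 0.5) / 2, alpha=0.2))
```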
Keywords: subshifts of finite type, $\beta$-transformations, transitivity.
Mathematics Subject Classification: Primary: 37B10; Secondary: 11A67, 11R0.
Citation: Bing Li, Tuomas Sahlsten, Tony Samuel. Intermediate $\beta$-shifts of finite type. Discrete & Continuous Dynamical Systems - A, 2016, 36 (1) : 323-344. doi: 10.3934/dcds.2016.36.323
Hoon Choi ORCID: orcid.org/0000-0002-9115-9636 1,2
This study aimed to determine the transfer factor (TF) of methidathion for cucumber harvesters in greenhouses using the dermal exposure rates (DERs) and dislodgeable foliar residues (DFRs) measured simultaneously in my previous works. The DERs recalculated using the reference body surface area for the Korean adult males were 31.5–1281.1 μg/h, and the DFR values were 12.1–222.5 ng/cm2 over 7 d after application. A strong correlation between the DERs and DFRs was observed, with a regression coefficient of 0.9982. The TF for cucumber harvesters in greenhouses was determined to be 6020.4 cm2/h, which was five times higher than that proposed by the US Environmental Protection Agency (EPA). Additionally, based on TF value of methidathion, the reentry intervals (REIs) with or without personal protective equipment (PPE) were estimated for 82 pesticides registered on cucumber. The REIs with PPE, obtained from acceptable operator exposure levels and TF value, were less than 0 d, indicating the lowest risk possibility. However, REIs without PPE were estimated between 0.04 and 4.4 d for seven pesticides, including chlorothalonil, emamectin benzoate, flubendiamide, fluquinconazole, iminoctadine tris(albesilate), propineb, and pyridaben. In conclusion, cucumber harvesters should wear PPE for health safety when they reenter the greenhouse to harvest cucumbers following application of pesticides.
Occupational exposure to pesticides occurs mainly in factory workers during manufacturing and in farmers during mixing/loading, spraying, and harvesting agricultural commodities. The acute and chronic health threats of pesticide exposure greatly concern farmers; these threats arise from the amount and frequency of pesticide use, the time farmers spend in their fields, and the potentially unsafe exposure levels in these situations. To address concerns about pesticide hazards, exposure should be appropriately controlled to protect the health of agricultural workers.
Following pesticide application to agricultural crops, worker exposure occurs primarily through dermal deposition and inhalation. Dermal deposition/adsorption is the main route of exposure for farmers and occurs indirectly through contact between the skin and leaf surfaces stained with the spraying solution, rather than through direct contact with pesticide droplets after application [1]. Dislodgeable foliar residues (DFRs) of pesticides can easily transfer to the body surface of workers during pesticide application, pruning, thinning, and harvesting [2]. Therefore, dissipation studies of DFRs are conducted to predict the dermal exposure of farm workers to pesticides and to determine a safe reentry interval (REI). The transfer factor (TF) can be considered the link between dermal exposure rates (DERs) and DFRs [3]. TF is the ratio of the DER to the DFR and can be interpreted as the foliage surface area contacted by the worker per hour [4, 5]. Consequently, dermal exposure to other pesticides can be estimated using specific TF values established for specific crops, activities, and field conditions [6].
The number of greenhouse farms and the area under cultivation have increased globally, particularly in Korea, because of high production capacity per unit area and year-round cultivation. In 2020, greenhouse acreage and production reached 60,866 hectares and 2.3 million tons, respectively, in the Republic of Korea [7]. Moreover, farm workers frequently reenter the facility for continuous harvesting of commodities such as cucumber, which accounts for 87 % of total greenhouse production. As a result, the probability of farmworker exposure to pesticides has also increased for specific work tasks, attributable to the enclosed greenhouse system, frequent pesticide application, and frequent reentry. Health effects among greenhouse farm workers continue to be reported, including hormonal, neurological, and respiratory disorders [8,9,10,11].
In Korea, exposure of mixers and sprayers during pesticide application has received a great deal of attention in the past [1, 12,13,14]. Exposure characteristics for applicators have been reported in open fields, including green pepper fields, paddy fields, mandarin orchards, and apple orchards [1, 12, 13], and have also been compared across formulations and application methods [1, 13]. Moreover, the exposure pattern of agricultural workers was investigated during application of a pesticide suspension to cucumber in a greenhouse environment [1, 14]. However, exposure is also possible in a field previously sprayed with pesticides, where agricultural workers reenter for picking, harvesting, pruning/thinning, maintenance, etc. In Korea's farming situation, agricultural workers generally prefer to wear long-sleeved shirts and long trousers instead of personal protective equipment (PPE) during harvest because of the inconvenience of the work, resulting in a higher risk from pesticides [15, 16]. My research group previously reported the exposure and risk from methidathion for workers harvesting cucumber over 7 days in a greenhouse, which showed that workers were exposed mainly through the hands, thighs, and arms by direct contact with pesticide residues on crop foliage or cucumbers [17]. In addition, the deposition and dissipation characteristics of methidathion on cucumber foliage were investigated in my previous publication [18].
As mentioned above, exposure to reentering workers could be estimated using the TF value calculated from the DERs and DFRs. To the best of my knowledge, no previous reports on the DERs for harvesters and DFRs have been published in the Republic of Korea, except for my previous papers. Hence, this study aimed to derive the TF value using reentry DERs and DFRs measured concurrently in the same cucumber greenhouse, reported in my previous works [17, 18]. In addition, the REIs of 82 pesticides registered on cucumber were determined to set priorities for pesticide exposure management.
Recalculation of dermal exposure to pesticides in the cucumber field
The DERs to harvesters for 7 d post pesticide application, reported in my previous study [17], was reassessed based on numerous assumptions concerning harvesting time per day, body surface area, and reference value. The dermal exposure rate (DER, μg/h) was calculated by extrapolating the exposure amount (μg/cm2; measured by dosimeters) to the body surface area (cm2) and dividing it by the work time (h). The calculation is based on the assumption that pesticide exposure through direct foliar contact is proportional to work duration. The body surface area for Korean adult male suggested by Kim et al. [19] was used to calculate the DER (Table 1).
Table 1 Body surface area for the Korean adult male
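A minimal sketch of the recalculation, with hypothetical placeholder numbers since the Table 1 areas and the per-dosimeter readings are not reproduced here:

```python
# Hypothetical placeholders: body-region areas (cm2) and dosimeter readings
# (ug/cm2); the real values come from Table 1 and reference [17].
body_area_cm2 = {"hands": 840, "arms": 2400, "thighs": 3800}
exposure_ug_per_cm2 = {"hands": 0.9, "arms": 0.12, "thighs": 0.2}
work_time_h = 2.0

# DER = sum over body parts of (exposure per cm2 x region area) / work time
der = sum(exposure_ug_per_cm2[p] * body_area_cm2[p]
          for p in body_area_cm2) / work_time_h
print(f"DER = {der:.1f} ug/h")
```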
Determination of TF
TF (cm2/h) was determined using the following formula:
$$ \mathrm{TF}\ \left({\mathrm{cm}}^{2}/\mathrm{h}\right)=\mathrm{DER}\ \left(\upmu \mathrm{g}/\mathrm{h}\right)\times 1000/\mathrm{DFR}\ \left(\mathrm{ng}/{\mathrm{cm}}^{2}\right) $$
DFRs of methidathion measured in my previous study [18] were used for the calculation of TF. A linear regression curve was obtained by plotting DERs against DFRs at 1, 2, 3, 5, and 7 d after application. The linear relationship between the DERs and DFRs was evaluated using the F-test, the linear regression equation, and the regression coefficient (R2). Statistical analysis was conducted using SPSS 18.0 (SPSS Inc., Armonk, NY, USA). The slope of the linear regression equation was taken as the TF.
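A sketch of this regression step in Python is shown below. Only the endpoint values of the DER and DFR series are quoted in this paper, so the three intermediate pairs here are invented placeholders for illustration; the published TF of 6020.4 cm2/h comes from the actual measurements, not from this sketch:

```python
import numpy as np
from scipy.stats import linregress

# DFR (ng/cm2) and DER (ug/h) at 1, 2, 3, 5 and 7 d after application;
# only the endpoints (222.5/12.1 and 1281.1/31.5) are from the text.
dfr = np.array([222.5, 131.2, 85.4, 33.0, 12.1])
der = np.array([1281.1, 760.3, 500.2, 190.4, 31.5])

fit = linregress(dfr, der * 1000)        # ug/h -> ng/h so TF comes out in cm2/h
print(f"TF = {fit.slope:.1f} cm2/h, R^2 = {fit.rvalue ** 2:.4f}")
```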
Dermal exposure assessment
The initial DFR (DFR0, ng/cm2) for each pesticide compound was calculated using the following formula:
$$ {\mathrm{DFR}}_{0}\ \left(\mathrm{ng}/{\mathrm{cm}}^{2}\right)=\mathrm{DV}\times \mathrm{A.I.}\times 10/\mathrm{DF} $$
where DV is the foliage deposit volume of the spraying solution (nL/cm2), A.I. is the active ingredient content (%), and DF is the dilution factor of the pesticide product. Assuming that the foliage DV is the same regardless of pesticide type and formulation, the DV of the methidathion spraying solution was used to determine DFR0 for each pesticide. Accordingly, DV was set to 888.8 nL/cm2, based on a DFR0 of 355.5 ng/cm2, an A.I. of 40 %, and a DF of 1000 [18]. The initial DER (DER0, μg/h) for each pesticide was calculated by multiplying DFR0 by the TF value. The potential dermal exposure (PDE, μg/day) was expressed as the corresponding DER0 multiplied by a harvesting time per day (H/D) of 8 h, deduced from an H/D of 8.3 h/day in melon greenhouses [8, 18]. The actual dermal exposure (ADE, μg/day) of harvesters in the cucumber greenhouse was calculated by multiplying the PDE by the penetration rate (PEN) through personal protective equipment (PPE) and the skin absorption rate (ABS). The default values of PEN and ABS were each assumed to be 10 % [18].
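The full exposure chain with the constants stated above works out as in this short sketch (Python, for illustration):

```python
# Constants from the text: deposit volume, methidathion A.I. and dilution,
# TF from this study, 8 h harvesting, PEN = ABS = 10 %.
DV, AI, DF = 888.8, 40, 1000     # nL/cm2, %, dilution factor
TF, H_PER_DAY = 6020.4, 8        # cm2/h, h/day
PEN = ABS = 0.10

dfr0 = DV * AI * 10 / DF         # ng/cm2 -> 355.5 for methidathion
der0 = dfr0 * TF / 1000          # ug/h
pde = der0 * H_PER_DAY           # ug/day, potential dermal exposure
ade = pde * PEN * ABS            # ug/day, actual dermal exposure
print(dfr0, der0, pde, ade)
```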
Determination of reentry intervals and safe work time
The REIs and safe work times (SWTs) were calculated for pesticides registered on cucumber. The REI for harvesters in the cucumber greenhouse was derived using the following formula:
$$ \mathrm{REI}\ \left(\mathrm{days}\right)=\left[\ln \left(\mathrm{AOEL}\times \mathrm{BW}\right)-\ln \left(\mathrm{ADE}\right)\right]\times {k}^{-1} $$
where AOEL is the acceptable operator exposure level (μg/kg b.w./day), BW is the body weight of an adult Korean male (kg b.w.), ADE is the initial ADE, and k is the dissipation constant of the DFR. AOELs established and reported by the Rural Development Administration (RDA) were used in this study [20], the body weight was taken as 70 kg [1, 14, 18], and the dissipation constant was assumed to be −0.4915 [18]. The SWT is the maximum harvesting time per day for which exposure to a pesticide remains below the AOEL and was calculated using the following formula:
$$ \mathrm{SWT}\ \left(\mathrm{h}/\mathrm{day}\right)=\left(\mathrm{AOEL}\times \mathrm{BW}\right)/\mathrm{ADE}\times \mathrm{H}/\mathrm{D} $$
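Continuing the sketch above, REI and SWT follow directly; the AOEL below is a hypothetical example value, not one of the RDA-established AOELs:

```python
import math

def rei_days(aoel_ug_per_kg, ade_ug_per_day, bw_kg=70, k=-0.4915):
    """REI = [ln(AOEL x BW) - ln(ADE)] / k; negative REI means reentry is
    acceptable on the day of application."""
    return (math.log(aoel_ug_per_kg * bw_kg) - math.log(ade_ug_per_day)) / k

def swt_hours(aoel_ug_per_kg, ade_ug_per_day, bw_kg=70, h_per_day=8):
    """Safe work time: the hours/day that keep exposure below AOEL x BW."""
    return aoel_ug_per_kg * bw_kg / ade_ug_per_day * h_per_day

aoel = 10.0                                    # hypothetical, ug/kg b.w./day
print(rei_days(aoel, ade), swt_hours(aoel, ade))
```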
Reassessment of dermal exposure to pesticides for workers in the cucumber field
DERs for methidathion in the cucumber greenhouse were determined in my previous experiment [17] using the body-region surface areas suggested by the US Environmental Protection Agency (EPA) [21] and Vercruysse et al. [22]. The reported DER values were 34.8–1343.5 μg/h over 7 d after methidathion application during cucumber harvest in the greenhouse, measured by dermal dosimetry (Table 2). In addition, inhalation exposure was not observed in any of the workers. Currently, the exposure of agricultural workers to pesticides in Korea is determined using the reference body surface areas of each body part for a Korean adult male suggested by Kim et al. [19]. Therefore, the DERs for methidathion were recalculated using these reference values (Table 2). The recalculated DERs were 31.5–1281.1 μg/h over 7 d after application during cucumber harvesting, approximately 95 % of the DERs reported in the previous work [17].
Table 2 Dislodgeable foliar residues (DFRs) and dermal exposure rates (DERs) for harvesters to methidathion in my previous works [17, 18]
TF for workers harvesting in the cucumber greenhouse
Methidathion DFRs on cucumber leaves measured in my previous study [18] ranged from 12.1 to 222.5 ng/cm2 over 7 d after application (Table 2). The correlation between the DERs and DFRs measured concurrently in the same cucumber field was then investigated. Linear regression analysis between the DERs and DFRs of methidathion showed that the regression model was significant by the F-test (p < 0.05), demonstrating a strong linear relationship between the two variables over the 7 d after application. The R2 was 0.9982, indicating that 99.8 % of the variation in DERs was explained by the DFRs. Therefore, DERs can be estimated from DFRs. The TF of methidathion for harvesters was determined to be 6020.4 cm2/h (95 % CI, 5544.7–6496.2), as shown in Fig. 1.
Correlation between dislodgeable foliar residues (DFRs) and dermal exposure rates (DERs) to methidathion for harvesters in the cucumber greenhouse
In the 1980s, a Zweig factor of 5000 cm2/h (based on a one-sided leaf surface area) was used as the TF to estimate worker exposure [23]. However, this factor tends to overestimate exposure for low-crop workers and underestimate it for high-crop workers [24]. Meanwhile, the US EPA has established TFs for detailed conditions, including crop height and work activity [6]; the proposed TFs for harvesting and irrigation by hand were 550 and 1900 cm2/h, respectively, in cucumber fields with low crop height and full foliage density. Greenhouse floral production presents a unique cultural situation, with planting rows between narrow walkways to maximize the growing area. This results in foliar contact and a higher possibility of worker exposure to pesticide residues while using these walkways for harvesting or other tasks [25]. Therefore, the US EPA suggested a TF of 1200 cm2/h for harvesting vegetables with high crop height and full foliage density in greenhouses. However, the TF of 6020.4 cm2/h determined in this study was five times higher than that proposed by the US EPA. These results suggest that Korean harvesters could be at higher risk of pesticide exposure in greenhouses than US workers. Over the past few decades, the US EPA has been actively refining its methodologies and developing data for assessing exposure and establishing TFs for all crops, activities, and field conditions. Further studies are therefore needed to establish TFs specific to the Korean situation.
Exposure assessment and REIs for harvesters in the cucumber greenhouse
The TF does not depend on the pesticide applied [2, 5] and is generally used to quickly assess exposure to any pesticide active ingredient from estimates of exposure time and the residue concentration workers will contact [5]. Crop type is a major factor in determining DFR values, without excluding the effect of formulation type [2]. Exposure of workers to pesticides registered for cucumber was estimated using the TF of 6020.4 cm2/h determined in this study, followed by an assessment of health risks. As of 2022, 163 pesticides in 1374 products were registered for application to cucumber in Korea. Of these, only the 82 pesticide active ingredients used as foliar sprays, for which the RDA has established AOEL values, were assessed for exposure and health risks. Using the dissipation constant of a specific pesticide may be inappropriate for calculating REIs of other pesticides, because DFR dissipation depends on the physico-chemical properties and degradation characteristics of each compound. Therefore, the REI calculations in this study were performed only to prioritize pesticides for exposure management. Table 3 shows the estimated ADEs and REIs for cucumber harvesters in Korea.
Table 3 Estimated dermal exposure and reentry interval for harvesters to pesticides registered for the cucumber greenhouses
REIs for harvesters using PPE were −17.6 to −0.3 d, corresponding to 0.02–84.9 % of the AOEL value. Agricultural workers generally harvest cucumbers daily in a greenhouse because cucumber is a continuously harvested crop with a rapid growth rate. These results therefore indicate a very low risk for workers wearing PPE, even when they reenter the greenhouse on the day of application. However, the use of PPE is considerably more limited for harvesters than for applicators because of work-related inconvenience; in Korea, agricultural workers generally harvest crops wearing long-sleeved shirts and long trousers [15, 16]. Consequently, for harvesters not wearing PPE, the REIs were between 0.04 and 4.4 d for seven pesticides, including chlorothalonil, emamectin benzoate, flubendiamide, fluquinconazole, iminoctadine tris(albesilate), propineb, and pyridaben; the SWT for six of these pesticides (all except flubendiamide) was less than 4 h/day. The potential health risks of these pesticides arise from the lower AOEL values of emamectin benzoate, flubendiamide, fluquinconazole, iminoctadine tris(albesilate), and pyridaben and the higher DFR0 values of chlorothalonil and propineb. Therefore, a harvester must wear PPE for health safety when reentering a facility after pesticide spraying. Meanwhile, as mentioned above, the REIs estimated in this study have limitations, such as the use of the methidathion dissipation constant. DFRs of pesticides with potential health risks should be investigated further to more definitively ensure the health safety of greenhouse workers.
All data generated or analyzed during this study are included in this published article.
DFR:
Dislodgeable foliar residue
TF:
Transfer factor
DER:
Dermal exposure rate
DV:
Deposit volume of spraying solution
A.I.:
Active ingredient
DF:
Dilution factor
PDE:
Potential dermal exposure
H/D:
Harvesting time per day
ADE:
Actual dermal exposure
PEN:
Penetration rate
PPE:
Personal protective equipment
ABS:
Skin absorption
REI:
Reentry interval
AOEL:
Acceptable operator exposure level
BW:
Body weight
RDA:
Rural development administration
SWT:
Safe work time
EPA:
US Environmental Protection Agency
Choi H, Moon JK, Kim JH (2013) Assessment of the exposure of workers to the insecticide imidacloprid during application on various field crops by a hand-held power sprayer. J Agric Food Chem 61:10642–10648
Kasiotis KM, Tsakirakis AN, Glass CR, Charistou AN, Anastassiadou P, Gerritsen-Ebben R et al (2017) Assessment of field re-entry exposure to pesticides: a dislodgeable foliar residue study. Sci Total Environ 596–597:178–186
Whitmyre GK, Ross JH, Ginevan ME, Eberhart D (2005) Development of risk-based restricted entry intervals. In: Franklin CA, Worgan JP (eds) Occupational and residential exposure assessment for pesticides. Wiley, West Sussex, UK, pp 45–69
Korpalski S, Bruce E, Holden L, Klonne D (2005) Dislodgeable foliar residues are lognormally distributed for agricultural re-entry studies. J Expo Anal Environ Epidemiol 15:160–163
Jiang W, Hernandez B, Richmond D, Yanga N (2017) Harvesters in strawberry fields: a literature review of pesticide exposure, an observation of their work activities, and a model for exposure prediction. J Expo Sci Environ Epidemiol 27:391–397
US Environmental protection agency (2012) Science advisory council for exposure (ExpoSAC) policy 3. Office of Pesticide Programs, Washington DC
Ministry of Agriculture, Food and Rural Affairs (2021) Statistical yearbook of agriculture, food and rural affairs. Republic of Korea, Sejong
Park JS, Oh GJ (2008) Differences in farmer's syndrome between greenhouse-melon farmers and rice farmers. J Agric Med Community Health 33:27–36
Lee WJ (2011) Pesticide exposure and health. J Environ Health Sci 37:81–93
Amoatey P, Al-Mayahi A, Omidvarborna H, Baawain MS, Sulaiman H (2020) Occupational exposure to pesticides and associated health effects among greenhouse farm workers. Environ Sci Pollut Res 27:22251–22270
Xie Y, Li J, Guo X, Zhao J, Yang B, Xiao W et al (2020) Health status among greenhouse workers exposed to different levels of pesticides: a genetic matching analysis. Sci Rep 10:8714
Choi H, Moon JK, Liu KH, Park HW, Ihm YB, Park BS et al (2006) Risk assessment of human exposure to cypermethrin during treatment of mandarin fields. Arch Environ Contam Toxicol 50:437–442
Kim EH, Moon JK, Choi H, Hong SM, Lee DH, Lee HM et al (2012) Exposure and risk assessment of insecticide methomyl for applicator during treatment on apple orchard. J Korean Soc Appl Biol Chem 55:95–100
Choi H, Kim JH (2018) Risk and exposure assessment for agricultural workers during treatment of cucumber with the fungicide fenarimol in greenhouse. Appl Biol Chem 61(1):1–6
Kim DH, Baek YJ, Lee JY (2016) Contemporary research to standardize the development and test methods for performance of pesticide protective clothing. Korean J Human Ecol 25:185–205
Kim DH, Lee JY (2020) Protective and comfort performance of pesticide protective clothing: physicochemical properties of materials and clothing Ensemble. Korean J Community Living Sci 31:559–573
Byoun JY, Choi H, Moon JK, Park HW, Liu KH, Ihm YB et al (2005) Risk assessment of human exposure to methidathion during harvest of cucumber in green house. J Toxicol Pub Health 21:297–301
Choi H, Byoun JY, Kim JH (2013) Determination of reentry interval for cucumber harvesters in greenhouse after application of insecticide methidathion. J Korean Soc Appl Biol Chem 56:465–467
Kim EH, Lee HR, Choi H, Moon JK, Hong SS, Jeong MH et al (2011) Methodology for quantitative monitoring of agricultural worker exposure to pesticides. Korean J Pest Sci 15:507–528
Rural Development Administration (2022) Standard for Pesticide Registration, Administrative Rule of Pesticide Management Act (Notification 2022-04, Revised 2022.03.08, Date of Enforcement 2022.03.08). Suwon, Republic of Korea.
US Environmental Protection Agency (1996) Occupational and residential exposure test guidelines, OPPTS 875. 1000, EPA 712-C-96–261. Washington DC.
Vercruysse F, Driegde S, Steurbaut W, Dejonckheere W (1999) Exposure assessment of professional pesticide users during treatment of potato fields. Pest Sci 55:467–473
Zweig G, Gao RU, Witt JM, Profendrof W, Bogen K (1984) Dermal exposure to carbaryl by strawberry harvesters. J Agri Food Chem 32:1232–1236
Lanning CL, Wehner TA, Norton JA, Dunbar DM, Grosso LS (1998) Correlation of actual strawberry harvester exposure with that predicted from abamectin dislodgeable foliar residues. J Agric Food Chem 46:2340–2345
Thompson B, Coronado G, Puschel K, Allen E (2001) Identifying constituents to participate in a project to control pesticide exposure in children of farmworkers. Environ Health Perspect 109:443–448
This research was supported by Wonkwang University in 2022.
Department of Life and Environmental Sciences, Wonkwang University, 460, Iksan-Daero, Iksan, 54538, Republic of Korea
Hoon Choi
Institute of Life Science and Natural Resources, Wonkwang University, 460, Iksan-Daero, Iksan, 54538, Republic of Korea
HC conceived and designed the project, collected the data, performed the analysis and interpretation, and wrote the paper. The author read and approved the final manuscript.
Correspondence to Hoon Choi.
Choi, H. Transfer factor calculated using dermal exposure and dislodgeable foliar residue and exposure assessment for reentry worker after pesticide application in cucumber field. Appl Biol Chem 66, 1 (2023). https://doi.org/10.1186/s13765-022-00765-z
DOI: https://doi.org/10.1186/s13765-022-00765-z
IIT JEE MOCK TEST-Set Theory1
Chapter Wise Test
Topic - Set Theory
Maximum Marks: 120 | Marking Scheme: (+4) for correct & (-1) for incorrect answer | Time: 60 mins
Please Don't Cheat!
1. Let $A=\{(1,2),(3,4), 5\}$, then which of the following is incorrect?
$\{3,4\} \notin \mathrm{A}$ as $(3,4)$ is an element of $\mathrm{A}$
$\{5\},\{(3,4)\}$ are subsets of A but not elements of A
$\{1,2\},\{5\}$ are subsets of $A$
$\{(1,2),(3,4), 5\}$ is subset of $A$
2. A market research group conducted a survey of 1000 consumers and reported that 720 consumers liked product A and 450 consumers liked product $\mathrm{B}$. What is the least number that must have liked both products ?
3. One of the partitions of the set $\{1,2,5, x, y, \sqrt{2}, \sqrt{3}\}$ is
$\{\{1,2, \mathrm{x}\},\{\mathrm{x}, 5, \mathrm{y}\},\{\sqrt{2}, \sqrt{3}\}\}$
$\{\{1,2, \sqrt{2}\},\{\mathrm{x}, \mathrm{y}, \sqrt{2}\},\{5, \sqrt{2}, \sqrt{3}\}\}$
$\{\{1,2\},\{5, \mathrm{x}\},\{\sqrt{2}, \sqrt{3}\}\}$
$\{\{1,2,5\},\{\mathrm{x}, \mathrm{y}\},\{\sqrt{2}, \sqrt{3}\}\}$
4. Let $\mathrm{A}$ and $\mathrm{B}$ be two sets. Then $(\mathrm{A} \cup \mathrm{B})^{\prime} \cup\left(\mathrm{A}^{\prime} \cap \mathrm{B}\right)$ is equal to
$\mathrm{A}^{\prime}$
$\mathrm{A}$
5. Let $A=\{(n,2n): n \in \mathbb{N}\}$ and $B=\{(2n,3n): n \in \mathbb{N}\}$. What is $A \cap B$ equal to?
$\{(n,6n): n \in \mathbb{N}\}$
$\{(2n,6n): n \in \mathbb{N}\}$
$\{(n,3n): n \in \mathbb{N}\}$
$\phi$
6. If $a\mathbb{N}=\{ax : x \in \mathbb{N}\}$ and $b\mathbb{N} \cap c\mathbb{N}=d\mathbb{N}$, where $b, c \in \mathbb{N}$ are relatively prime, then
$d=bc$
$c=bd$
$b=cd$
7. In a class of 55 students, the number of students studying different subjects are 23 in Mathematics, 24 in Physics, 19 in Chemistry, 12 in Mathematics and Physics, 9 in Mathematics and Chemistry, 7 in Physics and Chemistry and 4 in all the three subjects. The number of students who have taken exactly one subject is
All of these
8. A set A has 3 elements and another set B has 6 elements. Then
$3 \leq n(A \cup B) \leq 6$
9. If $A=\{1,2,5\}$ and $B=\{3,4,5,9\}$, then $A \Delta B$ is equal to
$\{1,2,5,9\}$
$\{1,2,3,4,9\}$
$\{1,2,3,4,5,9\}$
10. At a certain conference of 100 people, there are 29 Indian women and 23 Indian men. Of these Indian people 4 are doctors and 24 are either men or doctors. There are no foreign doctors. How many foreigners and women doctors are attending the conference?
11. Let $X$ and $Y$ be two non-empty sets such that $X \cap A=Y \cap A=\phi$ and $X \cup A=Y \cup A$ for some non-empty set $A$. Then
$X$ is a proper subset of $Y$
$Y$ is a proper subset of $X$
$X=Y$
$X$ and $Y$ are disjoint sets
12. Let $A$ and $B$ be two sets in a universal set $U$. Then which of these is/are correct?
$A-B=A'-B'$
$A-(A-B)=A \cap B$
$A-B=A' \cap B'$
$A \cup B=(A-B) \cup (B-A) \cup (A \cap B)$
13. If $A$ and $B$ are non-empty sets such that $A \supset B$, then
$B'-A'=A-B$
$B'-A'=B-A$
$A'-B'=A-B$
$A' \cap B'=B-A$
14. In a town of 10,000 families, it was found that 40% of families buy newspaper A, 20% buy newspaper B and 10% buy newspaper C; 5% buy A and B, 3% buy B and C and 4% buy A and C. If 2% of families buy all three newspapers, then
3,300 families buy A only
1,400 families buy B only
4,000 families buy none of A, B and C
All are correct
15. In a battle $70 \%$ of the combatants lost one eye, $80 \%$ an ear, $75 \%$ an arm, $85 \%$ a leg, $x \%$ lost all the four limbs. The minimum value of $x$ is
16. Let $n(U)=700$, $n(A)=200$, $n(B)=300$ and $n(A \cap B)=100$; then $n(A' \cap B')$ is equal to
17. Statement-1: If $B=U-A$, then $n(B)=n(U)-n(A)$, where $U$ is the universal set. Statement-2: For any three arbitrary sets A, B, C, if $C=A-B$, then $n(C)=n(A)-n(B)$.
Statement-1 is true, Statement-2 is true; Statement-2 is a correct explanation for Statement-1.
Statement-1 is true, Statement-2 is true; Statement-2 is not a correct explanation for Statement-1.
Statement-1 is false, Statement-2 is true.
Statement-1 is true, Statement-2 is false.
18. Each student in a class of 40 studies at least one of the subjects English, Mathematics and Economics. 16 study English, 22 Economics and 26 Mathematics; 5 study English and Economics, 14 Mathematics and Economics, and 2 study all three subjects. The number of students who study English and Mathematics but not Economics is
19. In a class of 80 students numbered 1 to 80, all odd-numbered students opt for Cricket, students whose numbers are divisible by 5 opt for Football, and those whose numbers are divisible by 7 opt for Hockey. The number of students who do not opt for any of the three games is
20. In a class of 60 students, 23 play Hockey, 15 play Basketball and 20 play Cricket. 7 play Hockey and Basketball, 5 play Cricket and Basketball, 4 play Hockey and Cricket, and 15 students do not play any of these games. Then
4 play Hockey, Basketball and Cricket
20 play Hockey but not Cricket
1 plays Hockey and Cricket but not Basketball
All above are correct
21. The set $(A \backslash B) \cup(B \backslash A)$ is equal to
$[A \backslash(A \cap B)] \cap[B \backslash(A \cap B)]$
$(A \cup B) \backslash(A \cap B)$
$A \backslash(A \cap B)$
$\overline{A \cap B} \backslash A \cup B$
22. If $A$ is the set of divisors of the number 15, $B$ is the set of prime numbers smaller than 10, and $C$ is the set of even numbers smaller than 9, then $(A \cup C) \cap B$ is the set
$\{1,3,5\}$
$\{2,5\}$
23. Two finite sets have $m$ and $n$ elements. The number of subsets of the first set is 112 more than that of the second set. The values of $m$ and $n$ are, respectively,
24. The number of students who take both the subjects mathematics and chemistry is 30 . This represents $10 \%$ of the enrolment in mathematics and $12 \%$ of the enrolment in chemistry. How many students take at least one of these two subjects?
25. If $n(A)=1000$, $n(B)=500$, and if $n(A \cap B) \geq 1$ and $n(A \cup B)=p$, then
$500 \leq p \leq 1000$
$1001 \leq p \leq 1498$
26. The number of elements in the set $\left\{(a, b): 2 a^{2}+3 b^{2}=35, a, b \in \mathrm{Z}\right\}$, where $\mathrm{Z}$ is the set of all integers, is
27. Let A, B, C be finite sets. Suppose that $n(A)=10$, $n(B)=15$, $n(C)=20$, $n(A \cap B)=8$ and $n(B \cap C)=9$. Then the possible value of $n(A \cup B \cup C)$ is
Any of the three values 26, 27, 28 is possible
28. The value of $(A \cup B \cup C) \cap (A \cap B^{c} \cap C^{c})^{c} \cap C^{c}$ is
$B \cap C^{c}$
$B^{c} \cap C^{c}$
$B \cap C$
$A \cap B \cap C$
29. In a town of 10,000 families it was found that 40% of families buy newspaper A, 20% buy newspaper B and 10% buy newspaper C; 5% buy A and B, 3% buy B and C and 4% buy A and C. If 2% of families buy all three newspapers, then the number of families which buy A only is
30. Statement-1: If $A \cup B=A \cup C$ and $A \cap B=A \cap C$, then $B=C$. Statement-2: $A \cup (B \cap C)=(A \cup B) \cap (A \cup C)$.
Statement-1 is true, Statement-2 is true; Statement-2 is a correct explanation for Statement-1.
Statement-1 is true, Statement-2 is true; Statement-2 is not a correct explanation for Statement-1.
Can the Lorentz force stabilize the Hydrogen atom?
I've recently been working on relative equilibria for some systems of particles (i.e. studying equilibrium solutions in a rotating frame - Saturn's rings, for example). This has evolved into some classical notions of atomic physics and questions of stability for Coulomb interactions there. Entertain for a few moments a classical (planetary) model for the atom. Take an isolated hydrogen atom, for example. It seems to me that the $q\left(v\times B\right)$ term of the Lorentz force provides an intrinsic stabilization for the two-particle system. Naively play the right-hand-rule game with the system and you'll see. If we assume that the electron is in a classical orbit and not radiating, then could we establish the stability of atoms with the Lorentz force? The non-radiating electron is a topic of its own and appears to be far from solved. For example, suppose you would like to argue that accelerating charges radiate, therefore... Try to go one step further and show why this must be true in general and you immediately run into difficulties. Gyro-stabilization might be a relevant topic here.
Order-of-magnitude investigation:
\begin{align} &\text{Proton charge:}\qquad&q &\approx 1.602\times 10^{-19}\: C\\[2mm] &\text{Permeability of free space:}\qquad &\mu_0 &\approx 4\pi\times10^{-7}\frac{volt\cdot s}{amp\cdot m}\\[2mm] &\text{Permittivity of free space:}\qquad &\epsilon_0 &\approx 8.854\times10^{-12}\frac{farad}{m}\\[2mm] &\text{Electron speed:}\qquad &v &= \alpha c\approx c/137 \approx 2.19\times10^6\frac{m}{s}\\[2mm] &\text{Electron radius (Bohr):}\qquad &r & = \frac{\hbar}{m_ec\alpha}\approx 5.29\times10^{-11}m\\[2mm] \end{align}
Maximum fields (I'm thinking of the fields generated by the electron in a linear approximation for an infinitesimal section of the circular orbit):
\begin{align} \text{Biot-Savart:}\qquad{\bf B}_{\text{max}} &= \frac{\mu_0 q v}{4\pi r^2}&\approx& 12.537\:\text{T}\\[2mm] \text{Coulomb:}\qquad{\bf E}_{\text{max}} &= \frac{q}{4\pi\epsilon_0 r^2}&\approx& 5.145\times 10^{11}\frac{N}{C} \end{align}
With Force ratio:
\begin{align} \frac{q(v\times{\bf B})_{max}}{q{\bf E}_{max}} =\frac{\frac{\mu_0 q^2 v^2}{4\pi r^2}}{\frac{q^2}{4\pi\epsilon_0 r^2}} = v^2\mu_0\epsilon_0 = \frac{v^2}{c^2}\approx\frac{(2.19\times10^6)^2}{(3\times10^8)^2}\approx 5.329\times10^{-5} \end{align}
This shows the magnetic force is 5 orders of magnitude smaller than the electrostatic force for this crude model of the ground state hydrogen. Not sure what this conclusively says about the internal dynamics but at least I think I can foresee some problems with this force stabilizing the system.
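For readers who want to check the arithmetic, here is a minimal Python sketch reproducing the numbers above (same constants and Bohr-model values as assumed in the question):

```python
import math

# Constants and Bohr-model values as quoted above
q    = 1.602e-19            # electron/proton charge magnitude, C
mu0  = 4 * math.pi * 1e-7   # vacuum permeability, V*s/(A*m)
eps0 = 8.854e-12            # vacuum permittivity, F/m
c    = 2.998e8              # speed of light, m/s
v    = c / 137              # electron speed ~ alpha*c, m/s
r    = 5.29e-11             # Bohr radius, m

B_max = mu0 * q * v / (4 * math.pi * r**2)   # Biot-Savart estimate, T
E_max = q / (4 * math.pi * eps0 * r**2)      # Coulomb field, N/C
ratio = (q * v * B_max) / (q * E_max)        # magnetic/electric force

print(f"B_max  ~ {B_max:.1f} T")     # ~12.5 T
print(f"E_max  ~ {E_max:.2e} N/C")   # ~5.1e11 N/C
print(f"F_B/F_E ~ {ratio:.2e}")      # ~5.3e-5, i.e. v^2/c^2
```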
A colleague of mine who does work with NMR (nuclear magnetic resonance) informed me that the magnetic fields they work with in the lab are on the order of 3 T and that they do not observe the large fields I am envisioning in practice. The concept he quoted as the reasoning behind this was "orbital quenching". This appears to be a sort of averaging to zero of this field. However, it was also stated that the magnetic field that I am interested in can be measured in beam-type experiments.
classical-mechanics quantum-interpretations atomic-physics classical-electrodynamics
JEM
$\begingroup$ So, the rotation around the nucleus creates a "ring current" with magnitude roughly $e v$ and radius equal to, let's say, the Bohr radius? What's $v$ of the electron, assuming things are dominated by the electrostatic field? If it's not that close to $c$ then the generated magnetic fields just won't be that big. $\endgroup$ – webb Apr 30 '14 at 16:20
$\begingroup$ That could be fine if only the ground state of the hydrogen atom did not have a zero angular momentum $\endgroup$ – gatsu Apr 30 '14 at 16:32
$\begingroup$ I've thought somewhat about the relative magnitudes of the electrostatic part to the dynamic part. What can we say about the speed of the electron in its orbit? A discussion about this is here: <physics.stackexchange.com/questions/20187/…>. Looks to me to be somewhere near $10^6\frac{m}{s}$. So now I compare the Biot-Savart law and Coulomb's law with this data? $\endgroup$ – JEM Apr 30 '14 at 16:42
$\begingroup$ @gatsu setting the total angular momentum equal to zero was prior to the introduction of the Maslov index. Born,Heisenberg,Schrodinger,etc didn't know about it. I'm learning about this topological idea currently and it seems to me that the total angular momentum of the ground state hydrogen might not be 0. $\endgroup$ – JEM Apr 30 '14 at 16:55
$\begingroup$ @JEM: thanks for pointing these things to me, I had no idea they even existed! $\endgroup$ – gatsu Apr 30 '14 at 19:58
Could we establish the stability of atoms following this line of reasoning?
Magnetic forces do not seem to be enough if the fields are retarded (most natural assumption), for the system proton+electron will radiate some energy away and cannot be stable in the Newtonian sense. Background radiation or non-electromagnetic forces are needed to make the system stable. If the fields are half-retarded, half-advanced, stable orbits were reported to be possible, cf.
J. Frenkel, Zur Elektrodynamik punktfoermiger Elektronen, Zeits. f. Phys., 32, (1925), p. 518-534. | http://dx.doi.org/10.1007/BF01331692
L. Page, Advanced Potentials and their Application to Atomic Models, Phys. Rev. 24, 296 (1924) | http://journals.aps.org/pr/abstract/10.1103/PhysRev.24.296
Ján Lalinský
"The non-radiating electron is a topic of its own and appears to be far from solved.". That's an interesting statement, since there is ample evidence, that there are no such things as "non-radiating electrons".
The only thing that "exists" in nature at that scale is a quantum field, one of whose excitations is the electron. Under certain circumstances (when energy levels stay below the rest energy of the next higher excitations, i.e. muons), electrons and photons can be treated using a simplified theoretical framework called quantum electrodynamics (QED). The problem of atomic stability is completely resolved within that framework, even though other (numerical) problems still remain.
CuriousOne
$\begingroup$ "The problem of atomic stability is completely resolved within that framework" That's also an interesting statement. Can you post a reference for it please? $\endgroup$ – Ján Lalinský Aug 19 '14 at 16:38
$\begingroup$ The atomic physics section of any physics department library will do... it should contain a couple of thousand mainstream textbooks on that matter, alone. $\endgroup$ – CuriousOne Aug 20 '14 at 3:15
$\begingroup$ Most of the books deals only with non-relativistic Schroedinger equation. You referred to QED, which is relativistic theory of field. The question of stability is much more difficult there. $\endgroup$ – Ján Lalinský Aug 20 '14 at 17:51
$\begingroup$ So what you are really trying to tell me is, that you haven't found a single textbook in the library that predicts that atoms should be unstable based on either theory? $\endgroup$ – CuriousOne Aug 21 '14 at 1:08
$\begingroup$ No, I am not. I am saying that your claim "The problem of atomic stability is completely resolved within that framework" is questionable, because relativistic theory of bound states is mathematically very difficult subject and I do not know of any work that proves atoms are stable in that theory. Perhaps there is such a proof - then I would like to see it. $\endgroup$ – Ján Lalinský Aug 21 '14 at 19:24
VIRTUAL ONLY: Computational Aspects of Discrete Subgroups of Lie Groups
VIRTUAL ONLY: Computational Aspects of Discrete Subgroups of Lie Groups (Jun 14 - 18, 2021)
Alla Detinko
Michael Kapovich
Alex Kontorovich
Peter Sarnak
Institute for Advanced Study and Princeton University
This workshop is at the interface of algebra, geometry, and computer science. The major theme deals with a novel domain of computational algebra: the design, implementation, and application of algorithms based on matrix representations of groups and their geometric properties. The setting of linear Lie groups is amenable to calculation and modeling transformations, thus providing a bridge between algebra and its applications.
The main goal of the proposed workshop is to synergize and synthesize the independent strands in the area of computational aspects of discrete subgroups of Lie groups. We aim to facilitate solutions of theoretical problems by means of recent advances in computational algebra and additionally stimulate development of computational algebra oriented to other mathematical disciplines and applications.
Maryam Abdurrahman
Nikolay Bogachev
Skoltech
Tamunonye Cheetham-West
Marc Culler
Willem de Graaf
University of Trento
Martin Deraux
Universite Grenoble Alpes
Subhadip Dey
Sami Douba
Moon Duchin
Nathan Dunfield
Sara Edelman-Munoz
Anna Erschler
ENS, Paris
Anna Felikson
Simion Filip
Dane Flannery
Elena Fuchs
David Gabai
Ajeet Gary
Jonah Gaster
Jane Gilman
Xiaolong Hans Han
Susan Hermiller
Alexander Hulpke
Sebastian Hurtado
Martin Kassabov
Olga Kharlampovich
Hunter College, CUNY
Aleksandr Kolpakov
University of Neuchâtel
Lucy Lifschitz
Biao Ma
Université Côte d'Azur
Alba Málaga Sabogal
Joseph Malionek
Curtis McMullen
Julien Paupert
Sarah Rees
Alan Reid
Max Riestenberg
Igor Rivin
Adam Robertson
Cameron Rudd
Jeroen Schillewaert
Saul Schleimer
Eduardo Silva
Maria Trnkova
Anastasiia Tsvietkova
Rutgers-Newark/IAS
Tyakal Venkataramana
Anna Wienhard
Tian An Wong
William Worden
Andrew Yarmola
Mehdi Yazdi
9:15 - 9:30 am EDT
Welcome - Virtual
Brendan Hassett, ICERM/Brown University
9:30 - 10:15 am EDT
Computing with hyperbolic structures in dimension 3
Seminar - Virtual
Nathan Dunfield, University of Illinois, Urbana-Champaign
Session Chair
Richard Schwartz, Brown University
I will discuss the theoretical and practical aspects of working with hyperbolic 3-manifolds computationally, illustrating the topic by extensive real-time demonstrations of the program SnapPy. Highlights include rigorous algorithms for determining hyperbolicity, testing for isometry, and solving the word problem.
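For a flavor of the kind of computation the talk demonstrates, here is a minimal SnapPy session (a sketch, assuming the `snappy` Python package is installed with verification support; `4_1` and `m004` are two census names for the figure-eight knot complement):

```python
import snappy

M = snappy.Manifold("4_1")    # figure-eight knot complement
print(M.volume())             # hyperbolic volume ~ 2.0298832

# Rigorous certification that the manifold is hyperbolic
# (interval-arithmetic verification of the gluing equations).
ok, shapes = M.verify_hyperbolicity()
print(ok)                     # True if certification succeeds

# Isometry testing against another census manifold
N = snappy.Manifold("m004")   # the same manifold under another name
print(M.is_isometric_to(N))   # True
```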
10:30 - 10:45 am EDT
Coffee Break - Virtual
11:45 am - 12:15 pm EDT
Non-arithmetic lattices in PU(2,1)
Martin Deraux, Universite Grenoble Alpes
In joint work with Parker and Paupert, we constructed several non-arithmetic lattices in the isometry group of the complex hyperbolic plane, by describing explicit generating sets and constructing a fundamental domain (our original argument uses heavy computation via ad-hoc software). I will sketch an alternative proof via orbifold uniformization, which no longer relies on the computer.
12:30 - 1:30 pm EDT
Lunch/Free Time
1:30 - 2:00 pm EDT
Supramaximal Representations of Planar Surface Groups
William Goldman, University of Maryland
Jane Gilman, Rutgers University
Recently Deroin, Tholozan and Toulisse found connected components of relative character varieties of surface group representations in a Hermitian Lie group G with remarkable properties. For example, although the Lie groups are noncompact, these components are compact. In this way they behave more like relative character varieties for compact Lie groups. (A relative character variety comprises equivalence classes of homomorphisms of the fundamental group of a surface S, where the holonomy around each boundary component of S is constrained to a fixed conjugacy class in G.)
The first examples were found by Robert Benedetto and myself in an REU in the summer of 1992, and published in Experimental Mathematics in 1999. Here S is the 4-holed sphere and G = SL(2,R). Although computer visualization played an important role in the discovery of these unexpected compact components, computation was invisible in the final proof, and its subsequent extensions.
Necklace Theory and Maximal cusps of hyperbolic 3-manifolds
David Gabai, Princeton University
(Joint work with Robert Haraway, Robert Meyerhoff, Nathaniel Thurston and Andrew Yarmola)
With rigorous computer assistance, both discrete and continuous, we show that if N is a complete finite-volume hyperbolic 3-manifold with a maximal cusp of volume at most 2.62, then it is obtained by filling one of 16 explicit 2- or 3-cusped hyperbolic 3-manifolds. As an application, with more rigorous computer assistance, we (with Tom Crawford) show that the figure-8 knot complement is the unique 1-cusped hyperbolic 3-manifold with nine or more non-hyperbolic fillings.
Graph embeddings in symmetric spaces
Anna Wienhard, Heidelberg University
Learning faithful graph representations has become a fundamental intermediary step in a wide range of machine learning applications. We propose the systematic use of symmetric spaces as embedding targets. We use Finsler metrics integrated in a Riemannian optimization scheme, that better adapt to dissimilar structures in the graph and develop a tool to analyze the embeddings based on the vector valued distance function in a symmetric space. For implementation, we choose Siegel spaces. We show that our approach outperforms competitive baselines for graph reconstruction tasks on various synthetic and real-world datasets and further demonstrate its applicability on two downstream tasks, recommender systems and node classification. This is joint work with Federico Lopez, Beatrice Pozzetti, Michael Strube and Steve Trettel.
Gathertown Reception
Practical computations with finitely presented groups.
Sarah Rees, University of Newcastle
Olga Kharlampovich, Hunter College, CUNY
The topics to be discussed are: (1) Techniques associated with coset enumeration and subgroup presentations (including Todd-Coxeter and Reidemeister-Schreier). (2) Algorithms associated with abelian, nilpotent and polycyclic groups, and with collection. (3) Techniques associated with rewriting, in particular the Knuth-Bendix process, and computation and use of automatic and coset automatic structures. (4) Testing for hyperbolicity.
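As a small illustration of item (1), coset enumeration is also available outside dedicated computer-algebra systems; for instance, SymPy ships a Todd-Coxeter implementation. A minimal sketch (the presentation below defines a group isomorphic to the alternating group A5):

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b = free_group("a, b")

# <a, b | a^2, b^3, (ab)^5> is the (2,3,5) rotation group, i.e. A5
G = FpGroup(F, [a**2, b**3, (a * b)**5])

# order() enumerates cosets of the trivial subgroup (Todd-Coxeter)
print(G.order())   # 60
```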
12:00 - 12:30 pm EDT
Arithmetic and rigidity beyond lattices: Examples from hyperbolic geometry
Curtis McMullen, Harvard University
We will discuss new results and computational illustrations of (i) arithmetic aspects of non-arithmetic triangle groups in SL_2(R) and (ii) Ratner rigidity, and its failure, for planes in hyperbolic 3-manifolds of infinite volume.
Software Tutorial
Tutorial - Virtual
Marc Culler, University of Illinois at Chicago
Michael Kapovich, UC Davis
An interactive demonstration of ways to acquire, use and contribute to SnapPy.
Word problems and finite state automata
Susan Hermiller, University of Nebraska
In this talk I will discuss several ways to solve the word problem for groups by finite automata, including automatic and autostackable structures, along with geometric and topological views of these properties. We apply these algorithms to discrete subgroups of Lie groups and fundamental groups of 3-manifolds. Based on joint projects with M. Brittenham and T. Susse, and with D. Holt, S. Rees, and T. Susse.
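To make the idea of solving word problems with finite automata concrete, here is a toy sketch (not from the talk): in a free group the freely reduced words form a regular language, so a small automaton accepts exactly the normal forms, and two words represent the same element iff they reduce to the same accepted word.

```python
# Toy word-acceptor for the free group F(a, b): a word is in normal
# form iff no letter is immediately followed by its inverse.
# Upper case denotes inverses (A = a^-1, B = b^-1).
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def is_normal_form(word: str) -> bool:
    """Run the automaton whose state is the last letter read."""
    last = None
    for ch in word:
        if last is not None and INVERSE[last] == ch:
            return False          # cancelling pair -> reject
        last = ch
    return True

def reduce(word: str) -> str:
    """Free reduction: repeatedly delete cancelling pairs (stack-based)."""
    stack = []
    for ch in word:
        if stack and INVERSE[stack[-1]] == ch:
            stack.pop()
        else:
            stack.append(ch)
    return "".join(stack)

# Word problem: w = 1 in F(a, b) iff its free reduction is empty.
print(is_normal_form("abA"))   # True  (already reduced)
print(reduce("abBA"))          # ""    (the trivial element)
```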
Calculations in nilpotent groups
Moon Duchin, Tufts University
The discrete Heisenberg group can be handled in a very hands-on way, in matrix coordinates (say). An understanding of the large-scale geometry can be leveraged to find structure in the numbers. I'll discuss the rationality of growth series for the Heisenberg group and indicate what is known and not known about other nilpotent groups.
Problem Session
Alex Kontorovich, Rutgers University
Algorithmic problems for algebraic groups
Willem de Graaf, University of Trento
Alla Detinko, University of Huddersfield
We discuss a number of algorithmic problems, and possible solutions, for algebraic groups in characteristic 0. We will talk about some basic algorithms, that is, how to specify an algebraic group, computing the dimension, the Lie algebra, centralizers and normalizers, and the closure of an orbit. Secondly, we will look at the problem of computing the Zariski closure of a finitely generated matrix group. This also involves the related question of how to compute the smallest algebraic Lie algebra containing a given Lie algebra. A third topic is the problem of how to find generators of arithmetic groups. These arise as the set of integral points of an algebraic group defined over Q. A famous theorem by Borel and Harish-Chandra asserts that these groups are finitely generated. But it remains a very hard problem to find a finite generating set. Algorithms exist for some classes of algebraic groups.
Practical computation with infinite linear groups
Dane Flannery, National University of Ireland
We survey some of the progress to date in an ongoing project to enable computation with linear groups defined over infinite domains. This includes computational realization of the finite approximation method, leading up to algorithms for arithmetic groups and beyond. This is joint work with Alla Detinko and Alexander Hulpke.
Nikolay Bogachev, Skoltech
Jonah Gaster, University of Wisconsin-Milwaukee
Aleksandr Kolpakov, University of Neuchâtel
Julien Paupert, Arizona State University
Max Riestenberg, University of Texas at Austin
Vertical arcs and the Markov Unicity Conjecture
Jonah Gaster, University of Wisconsin-Milwaukee
The Markov Unicity Conjecture concerns a correspondence on the modular torus that ties together geometry, topology, and number theory. I will describe a new geometric reformulation of the conjecture.
Geometric and arithmetic properties of hyperbolic orbifolds, and the Vinberg algorithm
We will discuss recent developments and progress in the theory of arithmetic hyperbolic reflection groups, software implementations of the Vinberg algorithm, as well as some other interesting connections between geometric and arithmetic properties of hyperbolic orbifolds. Based on a series of papers, including the recent ones with A. Kolpakov, and with M. Belolipetsky, A. Kolpakov, L. Slavich.
Computing reflection centralisers in hyperbolic reflection groups.
In 1996 Brink proved that the non-reflective part of a reflection centraliser in a Coxeter group is a free group. Later on, in 2013, Allcock refined Brink's theorem and provided a method for computing the Coxeter diagram of the reflective part. We implement Allcock's algorithm and perform some computations with it. This is related to the previous work together with N. Bogachev on (quasi-)arithmetic Coxeter facets of (quasi-)arithmetic hyperbolic Coxeter polytopes.
A quantified local-to-global principle for Anosov representations
In 2014, Kapovich, Leeb and Porti gave several new characterizations of Anosov representations, including one where geodesics in the word hyperbolic group map to "Morse quasigeodesics" in the associated symmetric space. In analogy with the negative curvature setting, they prove a local-to-global principle for Morse quasigeodesics and describe an algorithm which can verify the Anosov property of a given representation in finite time (unless the representation is not Anosov, in which case the algorithm never terminates). However, some parts of their proof involve non-constructive compactness and limiting arguments, so their theorem does not explicitly quantify the size of the local neighborhoods one needs to examine to guarantee global Morse behavior. In my thesis I obtained explicit criteria for their local-to-global principle by producing new estimates in the symmetric space. This makes their algorithm for verifying the Anosov property effective, however, the balls in the Cayley graph one needs to examine are still prohibitively large. As an alternative application, I produce explicit perturbation neighborhoods of certain Anosov representations
Presentations for cusped arithmetic hyperbolic lattices'
We present a general method to compute a presentation for any cusped arithmetic hyperbolic lattice Gamma, applying a classical result of Macbeath to a suitable Gamma-invariant horoball cover of the corresponding symmetric space. As applications we compute presentations for the Picard modular groups PU(2, 1, O_d) for d = 1, 3, 7 and the quaternion hyperbolic lattice PU(2, 1, H) with entries in the Hurwitz integer ring H. The implementation of the method for these groups is computer-assisted. This is joint work with Alice Mark.
Calculations in infinite matrix groups using congruence images
Alexander Hulpke, Colorado State University
I will describe, in theory and by demonstrating explicit calculations in the system GAP, algorithms for investigating infinite matrix groups through suitable congruence images. In an interplay of algorithms for matrix groups and algorithms for finitely presented groups it is possible to prove arithmeticity of certain subgroups (and to prove infinite index if we are lucky).
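The congruence-image idea can be sketched in a few lines even outside GAP. A toy Python version (the standard generators of SL(2, Z) are reduced mod a prime p, and the finite image SL(2, p) is enumerated by brute force):

```python
# Enumerate the congruence image of SL(2, Z) = <S, T> modulo p
# by closing the generating set under multiplication (plain BFS).
p = 5
S = ((0, p - 1), (1, 0))      # [[0, -1], [1, 0]] mod p
T = ((1, 1), (0, 1))          # [[1,  1], [0, 1]]

def mat_mul(A, B, p):
    """2x2 matrix product over Z/pZ, on tuples of tuples."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
              for j in range(2))
        for i in range(2)
    )

identity = ((1, 0), (0, 1))
seen = {identity}
frontier = [identity]
while frontier:
    nxt = []
    for M in frontier:
        for g in (S, T):
            P = mat_mul(M, g, p)
            if P not in seen:
                seen.add(P)
                nxt.append(P)
    frontier = nxt

print(len(seen))   # 120 = |SL(2, 5)|, matching p(p^2 - 1)
```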
First order sentences in random groups
Anna Felikson, Durham University
We will use Gromov's density model of randomness and prove, in particular, the following result. Let G be ``the random group" of some fixed density d<1/16. Let f be a universal sentence in the language of groups. Then G almost surely satisfies f if and only if a nonabelian free group F satisfies f. These are joint results with R. Sklinos.
Geometric algorithms for subgroups of Lie groups
The known geometric algorithms for discrete subgroups of $\mathrm{SL}(n,\mathbb{R})$ come primarily in two forms, both requiring the subgroup to be "geometrically nice." While the ultimate definition of "niceness" is, at this point, very much unclear, the known forms include (a) the traditional geometric finiteness, using a finitely-sided fundamental domain (with either an invariant Riemannian metric or Selberg's 2-point invariant) in the associated symmetric space, and (b) the relatively recent notion of Anosov subgroups. Both definitions allow for geometric local-to-global principles, which, in turn, make computations with such discrete subgroups possible. The lectures will describe these two concepts, the local-to-global principles and the geometric algorithms.
Geometric algorithms for subgroups of Lie groups.
Lie Theory in GAP
We will start with an overview of the functionality for Lie algebras in GAP4 (ways to define a Lie algebra, semisimple Lie algebras and their representations, root systems, nilpotent orbits, real semisimple Lie algebras). Then we will look at some examples of research projects where this functionality has been used.
Computability Models: Algebraic, Topological and Geometric
Can one translate a topological or geometric algorithm into a computable algorithm? We consider the Gilman-Maskit PSL(2,R) two-generator discreteness algorithm under the various models and review complexity bounds in the BSS machine and symbolic computation models. We show that Teichmuller space, T(0,3), and Riemann space, R(0,3) are BSS computable. More generally we discuss the issues for the algorithm with respect to bit-computability. We discuss two models for bit-computation, extended domain bit-computability and two oracle upper and lower computability. These models are currently under development in joint work with Tsvietkova. If we add upper and lower computability oracles, the discreteness problem without parabolics (so that the corresponding quotient has no cusps) is semi-decidable.
PSL(2,C)-representations of knot groups by knot diagrams
Anastasiia Tsvietkova, Rutgers-Newark/IAS
We will discuss a new method of producing equations for representation and character varieties of the canonical component of a knot group into PSL(2,C). Unlike known methods, it does not involve any decomposition of the knot complement, and uses only a knot diagram. In many cases, it can be applied to an infinite family of knots at once. The idea goes back to computing the complete hyperbolic structure from a link diagram by Thistlethwaite and the speaker, but is generalized to yield the variety. This is joint work with Kate Petersen.
Centrality of the congruence subgroup kernel
Tyakal Venkataramana, Tata Institute of Fundamental Research
Peter Sarnak, Institute for Advanced Study and Princeton University
We give a new proof of an old result that the congruence subgroup kernel associated to a higher rank non-cocompact arithmetic group is central in the arithmetic completion of the discrete group.
A cyclotomic family of thin hypergeometric monodromy groups in Sp(4)
Simion Filip, University of Chicago
The monodromy of differential equations is a rich source of subgroups of Lie groups. I will describe joint work with Charles Fougeron exhibiting an infinite family of discrete groups in Sp(4), obtained as monodromies of certain hypergeometric differential equations. Besides discreteness, the groups have a number of additional interesting properties. The family was discovered experimentally, but our proof does not rely on computers.
Manifolds with non-integral trace.
Alan Reid, Rice University
A basic consequence of Mostow-Prasad Rigidity is that if $M=H^3/G$ is an orientable hyperbolic 3-manifold of finite volume, then the traces of the elements in $G$ are algebraic numbers. Say that M has non-integral trace if G contains an element whose trace is an algebraic non-integer. This talk will consider manifolds with non-integral trace and show, for example, that there are infinitely many non-homeomorphic hyperbolic knot complements $S^3\setminus K_i$ with non-integral trace.
Verified Length Spectrum
Maria Trnkova, UC Davis
A computer program "SnapPea" and its descendant "SnapPy" compute many invariants of a hyperbolic 3-manifold M. In this talk we will discuss verified computations of geodesic lengths as a product of matrices and will mention some applications where it is crucial to know the precise length spectrum up to some cutoff.
Markoff triples and cryptography
Elena Fuchs, University of California, Davis
In this talk, I will explore various questions arising from considering the mod-p Markoff graphs as candidates for a hash function. As I discuss several potential path finding algorithms in these graphs, several questions about lifts of mod p solutions to the Markoff equation will come up as well. This is joint work with K. Lauter, M. Litman, and A. Tran.
Billiards in orthoschemes and pictures of a group cocycle.
Does the Hilbert space include states that are not solutions of the Hamiltonian?
I've studied Quantum Mechanics and I know the usual answer "The dimension of the Hilbert Space is the maximum number of linear independent states the system can be found in". There is something about this statement that bothers me, let me try to explain it.
Imagine a particle whose dynamics satisfy the Schrodinger equation. Before we give it a Hamiltonian, in principle the particle can have any square-integrable continuous function as a state. When we write a particular Hamiltonian, we find the actual eigenstates of the particle, and then every possible state is a linear combination of those eigenstates. Now, according to the first definition of the Hilbert space, it contains all the eigenstates of the Hamiltonian. Now, several questions arise:
1) Does the Hamiltonian determine the Hilbert Space?
2) What if I make two particles with different Hamiltonians interact? Do they live in different Hilbert spaces?
3) What about perturbation theory? Do I change the Hilbert space each time I add a new term in the Hamiltonian?
Now I tend to think that the Hilbert Space contains every possible state of the particle whether it is a solution of the Schrodinger equation or not. Please help me sort this problem out.
quantum-mechanics wavefunction hilbert-space schroedinger-equation hamiltonian
P. C. Spaniel
There is a certain subtlety to your question.
For quantum systems with a finite number of degrees of freedom, as commonly dealt with in intro QM, things are relatively simple:
Yes and no: the Hamiltonian certainly determines a basis for the Hilbert space of states, but the working Hilbert space depends on the domain of definition of the problem and on associated boundary conditions. See particle in a 3D-box vs. free particle on the entire 3D-space, as well as particle in a box with Dirichlet b.c.-s vs. particle in a box with periodic b.c.-s, etc. Alternatively, the Hilbert space is determined by the algebra of system observables, as pointed out in user1620696's answer, but the two descriptions are eventually equivalent. Moreover, there exists an even deeper equivalence of Hilbert spaces, see point(3) below.
Each particle lives in its own Hilbert space, but the combined interacting system lives in the direct product of individual Hilbert spaces. Again, see relation to algebra of observables as in user1620696's answer.
Leaving aside spin and the spin interactions mentioned by Hosein, generally no, for a finite number of degrees of freedom the Hilbert space does not change under perturbations. According to the Stone-von Neumann theorem, in this case all possible Hilbert spaces are isomorphic to one another (or equivalently, there is a unique irreducible representation of the canonical commutation relations), hence distinguishing one over the other makes no formal difference. At most, the total Hilbert space decomposes into a direct sum of multiple isomorphic copies.
For systems with an infinite number of degrees of freedom, which are the domain of Quantum Field Theory, points (1) and (2) above remain largely valid, but the situation does change drastically in regards to point (3).
The Stone-von Neumann theorem does not hold for quantum fields, and one may find that certain unitary transformations defined on one Hilbert space, constructed around a given Hamiltonian, produce states that are orthogonal to that whole Hilbert space and living in an entirely new, inequivalent state space. This is the case of inequivalent vacua of many QFT Hamiltonians, from condensed matter ones (see boson condensation, superconductivity, etc.) to QCD.
Further, the nature of such inequivalent vacua (or better say, unitarily inequivalent representations of the dynamics) is determined by the nature of interactions between free-fields described by some free-particle Hamiltonian and corresponding state space.
For an idea of what is going on, see for instance Sec. 1.2 of this review on Canonical Transformations in Quantum Field Theory.
udrv
It is the algebra of observables that determines its possible representations, i.e. the corresponding hilbert space(s).
The Hamiltonian describes the dynamics, within the given representation.
Edit. To clarify a little bit, the common mathematical description of quantum mechanical systems is the following.
The (bounded, complex) observables of a quantum system form an involutive Banach algebra called a C*-algebra. This structure allows for the observables to be added ($+$), multiplied ($\cdot$), adjoined ($^*$) in a closed way; and gives a meaning to the "magnitude" or norm of a given observable. The true physical observables are the self-adjoint elements of the C*-algebra $\mathfrak{A}$ that satisfy $a^*=a$ (and thus have real spectrum). The quantum states are the positive-preserving objects of the topological dual $\mathfrak{A}^*$ with norm one.
A common example of C*-algebras are the algebras of bounded operators on Hilbert spaces. It turns out that every C* algebra is an algebra of operators on some Hilbert space:
Theorem [Gel'fand]. Every C* algebra is *-isomorphic to an algebra of bounded operators on some Hilbert space.
Therefore as long as the quantum bounded observables are described by a C* algebra, they are representable as operators on some Hilbert space. Of course that representation is not unique; for every state $\omega\in\mathfrak{A}^*_+$, there is an associated representation $(H_\omega, \pi_\omega,\Omega)$ given by the so-called GNS construction. In addition, the aforementioned representation is irreducible only if the state $\omega$ is pure.
That said, the next question may be the following: are all the irreducible representations of a given algebra unitarily equivalent? (i.e. are all the representations, roughly speaking, equivalent up to a change of basis?) If the answer were affirmative, this would in some sense tell us that the Hilbert space associated to a given algebra of observables is unique. The answer, however, is in general no; a very important example is given by the algebra of canonical commutation relations of (free) quantum field theories. In the case of quantum mechanics, instead, every irreducible representation of the algebra of canonical commutation relations is unitarily equivalent to the usual Schrödinger representation.
The Hamiltonian is partly unrelated to that. It is the generator of the quantum dynamics $(U(t))_{t\in\mathbb{R}}$, and of course the latter should act on the algebra of observables (equivalently, on states). Suppose that the given algebra of observables is $\mathfrak{A}$; the evolution should be a group of automorphisms on the algebra with some suitable continuity properties with respect to the time $t$. However, in many concrete applications we have to consider a big enough algebra of observables for that to be possible with an evolution that matches the requirements we want (e.g. given by observations on the system). The algebra of canonical commutation relations $\mathrm{CCR}$ may not be enough, and in order to enlarge it we can for example fix an irreducible representation $(H,\pi)$ such that $\pi(a)\in\mathcal{L}(H)$ for any $a\in\mathrm{CCR}$ is a bounded operator. The bicommutant $\pi(\mathrm{CCR})''$ of the algebra of canonical commutation relations in the representation $\pi$ contains $\pi(\mathrm{CCR})$ and consists of all bounded operators in $\mathcal{L}(H)$ that commute with all operators that commute with all operators in $\pi(\mathrm{CCR})$ (and it is a C* algebra). On such a bicommutant, or more generally on $\mathcal{L}(H)$, it may be possible to define the unitary evolution $(U(t))_{t\in\mathbb{R}}$ and its generator, the Hamiltonian. This Hamiltonian is, however, representation dependent (with respect to the canonical commutation relations), because in general $U(t)[\pi(\mathrm{CCR})]\not\subset \pi(\mathrm{CCR})$.
yuggib
The point is that one should first identify the Hilbert space of a system, and then write its Hamiltonian. In ordinary problems it's easy to define the proper Hilbert space, and one writes the Hamiltonian without spending time finding the Hilbert space - for example, one particle without spin. So when you write a Hamiltonian you should know the Hilbert space first.
One more point: sometimes people don't know the structure of the Hilbert space of a problem; they just write a Hamiltonian by guessing and then try to figure out the structure of the Hilbert space. One example is the quantization of free fields, which turns out to give the Fock space: the direct sum of zero-, one-, two-, $\dots$ particle states.
So the answer to your questions is:
Yes, in the sense that its eigenvectors are a basis of a Hilbert space, but whether this Hilbert space is convenient for describing the system you want to model is another story.
No: only if they are completely independent, with no interaction between them, does each one have its own Hilbert space. Nevertheless, you can write a Hamiltonian for the two particles together via a tensor product.
As I said, if you have defined the Hilbert space, then each term in the Hamiltonian should be a well-defined operator on that Hilbert space. But if you haven't defined the Hilbert space first, then yes, the Hilbert space can be changed by adding new terms to the Hamiltonian - for example, by adding a term which depends on the spin of the particle to the Hamiltonian of a spinless particle, which only depends on position and momentum operators.
Hosein
A Hilbert space is by definition just a vector space $\mathcal{H}$ over $\mathbb{C}$ equipped with an inner product $\langle,\rangle :\mathcal{H}\times \mathcal{H}\to \mathbb{C}$ such that, defining the distance $d :\mathcal{H}\times \mathcal{H}\to \mathbb{R}$,
$$d(v,w)=\sqrt{\langle v-w,v-w\rangle},$$
the resulting metric space $(\mathcal{H},d)$ is complete, in the sense that every Cauchy sequence converges to a point in $\mathcal{H}$.
One important result is:
Two Hilbert spaces are isometrically isomorphic if and only if they have the same dimension
So, for each dimension there is exactly one Hilbert space up to isomorphism. If the dimension is $n\in\mathbb{N}$ then $\mathcal{H}\simeq \mathbb{C}^n$, and if the dimension is infinite (and countable), we have $\mathcal{H}\simeq \ell^2(\mathbb{C})$, where $\ell^2(\mathbb{C})$ is the space of sequences $(a_n)_{n\in \mathbb{N}}$ of complex numbers $a_n\in \mathbb{C}$ such that $\sum |a_n|^2 < \infty$.
In Quantum Mechanics, the Hilbert space appears in the first postulate:
The states of a quantum system are described by vectors in a Hilbert space called the state space $\mathcal{E}$.
The observables appears in the second postulate:
To each physical quantity associated to the system there corresponds a Hermitian operator $A\in \mathcal{L}(\mathcal{H},\mathcal{H})$; such an operator is called an observable.
The Hamiltonian is just one particular observable: the observable which is associated with the total energy of the system.
Now let's tackle your questions, one by one:
The Hamiltonian doesn't determine the Hilbert space. Interestingly, what determines the Hilbert space is the observables. In truth, the observables form an algebra, called the observable algebra, and this observable algebra determines the Hilbert space. Think of a particle in one dimension: it can be subject to the infinite square well potential, to the one-dimensional harmonic oscillator potential, or even to the delta potential, but in any of these cases the Hilbert space is the same.
The two-particle system is described by a different Hilbert space, not because of the Hamiltonians, but because of the observable algebra. If particle one is described by $\mathcal{E}_1$ and particle two is described by $\mathcal{E}_2$, then the two-particle system is described by $\mathcal{E}_1\otimes \mathcal{E}_2$. If the particles were non-interacting, the resulting Hamiltonian would be $H = H_1\otimes \mathbf{1} + \mathbf{1}\otimes H_2$; otherwise, there would be interaction terms (a minimal numerical sketch follows this list).
Again, the space isn't determined by the Hamiltonian. The Hamiltonian is just one particular observable. If you add terms to the Hamiltonian, you just change the energy observable, but you don't change the state space. Again, the state space is determined by the observable algebra, not by the particular form of one observable.
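A minimal numerical sketch of the non-interacting case in point 2 (a hypothetical toy system of two two-level particles, using numpy):

```python
# Two non-interacting qubits: H = H1 (x) 1 + 1 (x) H2 on C^2 (x) C^2.
import numpy as np

H1 = np.diag([0.0, 1.0])    # energies of particle 1
H2 = np.diag([0.0, 2.0])    # energies of particle 2
I2 = np.eye(2)

# The total Hamiltonian acts on the 4-dimensional product space
H = np.kron(H1, I2) + np.kron(I2, H2)

print(np.linalg.eigvalsh(H))   # [0. 1. 2. 3.] = all sums E1 + E2
```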
$\begingroup$ The separable Hilbert spaces are all isomorphic; however for infinite dimensional degrees of freedom there are irreducible representations of the canonical commutation relations that are not unitarily equivalent. So there are algebras of observables that admit many inequivalent representations (each one using "its own Hilbert space"). $\endgroup$ – yuggib Sep 19 '16 at 10:06
I don't understand why escape velocity is necessary [duplicate]
I have read multiple explanations of escape velocity, including that on Wikipedia, and I don't understand it.
If I launch a rocket from the surface of the Earth towards the sun with just enough force to overcome gravity, then the rocket will slowly move away from the Earth and we see this during conventional rocket launches.
Let's imagine I then use slightly excessive force until the rocket reaches 50 miles per hour, and then I cut back thrust to just counterbalance the force of gravity. Then my rocket will continue moving at 50 mph toward the sun. I don't see any reason why I can't just continue running the rocket at the same velocity and keep pointing it towards the sun. The rocket will never orbit Earth (by "orbit" I mean go around it). It will just go towards the sun at 50 mph until it eventually reaches the sun. There seems to be no need whatsoever to ever reach escape velocity (25,000 mph).
newtonian-mechanics newtonian-gravity orbital-motion velocity escape-velocity
Ambrose Swasey
$\begingroup$ With just enough force to overcome gravity - I read that as "with enough force to reach escape velocity". Mind you, gravity does not just stop at one point, it only decreases with the squared distance. If you want to properly escape gravity, ie. move with enough force so that you cannot be pulled back, that is literally escape velocity. $\endgroup$ – ComicSansMS Feb 11 at 13:14
$\begingroup$ Escape velocity decreases with altitude. Eventually, as you get far enough from the Earth, it will drop below 50 mph. (Well, it would, if the Earth was floating alone in interstellar space. In practice, the Sun's gravity will start to dominate long before that happens.) $\endgroup$ – Ilmari Karonen Feb 11 at 13:17
$\begingroup$ I don't see any reason why I can't just continue running the rocket at the same velocity and keep pointing it towards the sun. Who says you can't do this? If you look at the derivation of escape velocity, it is pretty obvious that you are assuming that you start off with the escape velocity and don't apply any other forces afterwards. But no one is saying that is the only way to "escape Earth" $\endgroup$ – Aaron Stevens Feb 11 at 14:02
$\begingroup$ Re, "I don't see any reason why I can't just continue running the rocket at the same velocity and keep pointing it towards the sun." It's because you can't build a rocket that carries enough fuel to do it that way. Also, if it's your intent to send a probe that goes 50 miles per hour all the way to the Sun, it's not even going to get half way there during your lifetime. Space is Big! $\endgroup$ – Solomon Slow Feb 11 at 15:07
$\begingroup$ Possible duplicate of Can we escape Earth's gravity slowly? $\endgroup$ – knzhou Feb 11 at 16:19
Escape velocity is the velocity an object needs to escape the gravitational influence of a body if it is in free fall, i.e. no force other than gravity acts on it. Your rocket is not in free fall since it is using its thruster to maintain a constant velocity so the notion of "escape velocity" does not apply to it.
ACuriousMind♦
$\begingroup$ I think his rocket is not accelerating, but there is a constant thrust applied by the rocket. Either way, as you point out, what he describes is not escape velocity. $\endgroup$ – garyp Feb 11 at 12:27
$\begingroup$ @garyp Well, yes, I meant "accelerating" in the sense of "being under the influence of an acceleration other than that of gravity", not in the sense of "having non-zero net acceleration". It doesn't actually matter for this scenario whether the rocket is maintaining a constant speed or not, as long it's outputting thrust. $\endgroup$ – ACuriousMind♦ Feb 11 at 12:29
$\begingroup$ The rocket is not accelerating. It is moving at a constant speed of 50 mph. The acceleration with respect to the earth is 0. $\endgroup$ – Ambrose Swasey Feb 11 at 12:44
$\begingroup$ @AmbroseSwasey A body moving at a constant speed in a gravitational field is accelerating. Think about what happens when you throw a ball into the air - without any additional force input, it slows down and reverses direction. The rocket needs to constantly output thrust just to maintain its speed. $\endgroup$ – Nuclear Wang Feb 11 at 13:40
$\begingroup$ @NuclearWang accelerating means changing velocity, not "experiencing propulsive force". $\endgroup$ – Ján Lalinský Feb 11 at 14:00
This is not what happens in actual spaceflight. Actual rockets work for a short time, and after that the spacecraft moves by inertia. And they don't really work against the Earth's gravity - the purpose of the vertical launch is to shoot the rocket to a high altitude where the atmosphere is thin. Then the rockets turn and accelerate horizontally to gain enough velocity to get onto the orbit or the desired escape trajectory. What you describe would be extremely inefficient, and no rocket exists to actually do that in real life.
To understand why no rocket can reach far this way, let's do a quick calculation of the amount of fuel required. Let's assume that, going at a constant speed of 50 mph (80 km/h), you want to reach an 80 km altitude (the altitude one needs to be awarded astronaut wings in the US). At that altitude the gravity acceleration $g$ is almost the same as on the ground, which is why we will assume it to be constant. Then your rocket, fighting this acceleration for 1 hour, must carry so much fuel that, if it were in empty space without any gravitating body, it could accelerate itself to a velocity $\Delta v= 1\,\mathrm{hour}\cdot g$. The Tsiolkovsky equation relates this speed to the ratio of the mass of the fueled rocket $m_0$ to its final mass $m_f$. \begin{equation} \frac{m_0}{m_f}=\exp\left[\frac{\Delta v}{g I_{sp}}\right]=\exp\left[\frac{1\,\mathrm{hour}}{I_{sp}}\right] \end{equation} where $I_{sp}$ is a so-called specific impulse depending on the type of the rocket. For an idealized LH2-LOX rocket $I_{sp}=450\,\mathrm{sec}$. This means that for such a rocket $\frac{m_0}{m_f}=e^{8}\simeq 2980$, i.e., to elevate 1 ton just to this altitude this way you need about the same amount of fuel as the mass of the whole Saturn V rocket. And this computation is idealized, i.e., all rocket engines, the supporting structure, fuel tanks, etc. are included in this 1 ton. If we raise the altitude, the mass ratio grows exponentially, i.e., you need $\simeq 10^{17}$ tons of fuel just to elevate 1 ton to the altitude of the ISS.
Kyle Kanos
OON
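To make the mass-ratio arithmetic in the answer above concrete, here is a minimal sketch of the same calculation (a hovering rocket whose thrust equals its weight, so the equivalent $\Delta v$ is $g\,t$ and the Tsiolkovsky exponent reduces to $t/I_{sp}$; constant and function names are ours):

```python
import math

ISP = 450.0   # s, idealized LH2-LOX specific impulse

def hover_mass_ratio(t_seconds):
    """Tsiolkovsky mass ratio m0/mf for hovering for t seconds:
    delta-v = g*t, so m0/mf = exp(g*t / (g*Isp)) = exp(t/Isp)."""
    return math.exp(t_seconds / ISP)

print(f"1 h climb to ~80 km:   m0/mf = {hover_mass_ratio(3600):.0f}")       # ~2980
print(f"~5 h climb to ~400 km: m0/mf = {hover_mass_ratio(5 * 3600):.1e}")   # ~2e17
```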
$\begingroup$ Thank you for going beyond the other answers and explaining the issue with doing what OP described. I've had this question since I was a child; now, at 25, I finally understand why we bother with the notion of an escape velocity. It just seemed so irrelevant if we have rockets... it's not as if we try to reach the velocity on the ground and then take a ramp upwards. $\endgroup$ – Luc Feb 11 at 13:26
$\begingroup$ This is discussed further in Why are rockets so big $\endgroup$ – Kyle Kanos Feb 11 at 14:14
Escape velocity is necessary if you turn all your rockets off. Without any rocket thrust, and if you are going below escape velocity, you will go into orbit or crash into the object you are trying to escape. If you keep your rockets on, you don't need escape velocity.
To put this into practical terms, during the Apollo lunar missions, the rockets were off nearly all of the time. The burn called "translunar injection" gave the spacecraft enough velocity to get out of earth's gravitational influence into the moon's. Another burn was needed when the spacecraft arrived in the neighborhood of the moon, to put it into lunar orbit. There was another burn needed to escape the lunar pull and put it back on a trajectory to earth. There were a few minor burns for mid-course corrections. And there was the great big burn at launch time.
The lunar exploration module had to make a few more burns, to land on the moon, to return to lunar orbit, and to link up with the mother ship.
Other than that, the trajectory of the craft was determined by gravity and inertia (momentum).
Walter Mitty
If you're in a car at highway speeds and jump out in any direction, are you still going at highway speeds?
If you shoot a ball at 50mph with a cannon out of the back of a car that's driving 50mph it would stand still: https://www.youtube.com/watch?v=BLuI118nhzc
So imagine Earth is the car you're jumping out of, and the Earth is traveling around the Sun at about 30 km per second. Even after leaving Earth at escape velocity, you'd still need to shed roughly another 19 km/s against the direction of Earth's orbital motion before you come to a standstill and fall down towards the Sun. 19 km/s is a lot of fuel!
DrTrunks Bell
$\begingroup$ Perhaps worth showing a picture of how accelerating directly towards the sun generates an ellipse? $\endgroup$ – akozi Feb 11 at 13:59
$\begingroup$ While correct, I don't see how this answers the OP's question. $\endgroup$ – Ilmari Karonen Feb 11 at 14:24
$\begingroup$ @IlmariKaronen I'm trying to explain why you can't just "point your rocket at the sun" and go there because of your initial orbit when you leave Earth's gravity. $\endgroup$ – DrTrunks Bell Feb 11 at 14:27
Can we escape Earth's gravity slowly?
Why are rockets so big?
Escape velocity from long ladder
Could we make a trebuchet that could launch objects to a stable orbit?
Escape velocity from Earth
How does escape velocity relate to energy and speed?
Do we need Escape Velocity
Why is it called "escape velocity" and not "escape acceleration"?
If orbital velocity decreases further away from an orbited body, why does increasing a satellite's velocity increase its orbital distance?
Escape velocity of the solar system?
How is it possible to escape a gravitational body?
Predictive algorithms for mobility and device lifecycle management in Cyber-Physical Systems
Borja Bordel Sánchez1,
Ramón Alcarria2,
Diego Sánchez de Rivera1 &
Alvaro Sánchez-Picot1
EURASIP Journal on Wireless Communications and Networking volume 2016, Article number: 228 (2016)
Cyber-Physical Systems (CPS) are often composed of a great number of mobile, wireless networked devices. In order to guarantee system performance, management policies that make changes in the hardware platform transparent to high-level applications have to be implemented. However, traditional reactive methodologies and the basic predictive solutions proposed so far are not valid, either due to the extremely dynamical behavior of CPS or because the high number of involved devices prevents fulfilling the timing requirements. Therefore, in this paper, we present an advanced predictive solution for managing the mobility and device lifecycle, able to meet all requirements of CPS. The solution is based on an infinite loop, which calculates, in each iteration, a sequence of future system states using a CPS simulator and interpolation algorithms. Furthermore, an experimental validation is provided in order to determine the performance of the proposed solution.
CPS are the next generation of engineered systems in which computing, communication, and control technologies are tightly integrated [1]. In theory, any type of device could be part of a Cyber-Physical System: large or miniaturized, wired or wireless, fixed or mobile, etc. However, most practical discussions about the characteristics of CPS conclude that they are composed of a great number of wireless [2], embedded [3], mobile [2] devices.
Wireless mobile devices allow creating a more flexible network architecture than other types of devices [4]. But, on the contrary, these devices tend to interact and communicate opportunistically [4], so access to them is not guaranteed, in general. Nevertheless, most CPS applications (such as energy infrastructures, transport systems, or medical instruments) require permanent and transparent access to the hardware capabilities, so mobility and device lifecycle management policies are essential in CPS.
Besides, this challenging situation has become more complex due to the appearance of concepts such as the Cyber-Physical Internet [5]. These scenarios consider the possibility of deploying various communicating CPS in the same geographical area, so mobile devices can not only move inside the coverage area of one CPS but also move to other CPS and later return (or not).
In traditional systems, device lifecycle and mobility are supported by means of reactive techniques, i.e., the system makes decisions or takes actions when a certain change occurs [6]. However, these techniques assume the changes in the system are continuous and slow, so there is enough time to determine and execute the proper actions before a fatal error occurs. For example, in Long-Term Evolution (LTE) networks [7], a handover is executed when quality or power measures from a user equipment go below a certain limit. It is then assumed that these measures are not going to change abruptly, so there is time to execute the handover (although it is a long and complicated process) before losing the connectivity definitively. However, CPS present an extremely dynamical behavior, and the system state may change rapidly and randomly [8]. For example, in underwater military applications [9], the medium can degrade so fast that, once a relevant change in the signal quality is detected, there is not enough time to react properly before losing access to hardware devices. Thus, reactive techniques are not valid in CPS, as the decisions and actions are calculated considering situations which may change strongly during the calculation process.
On the other hand, a small number of basic predictive solutions are also available. In these proposals, the system predicts the future changes and executes the appropriate actions before they occur. However, they are based on heavy simulators which are useful in simple scenarios but which fail when used in CPS due to the high number of involved devices. In these cases, the time needed to simulate a certain future situation is higher than the remaining time until it is actually reached. Then, the essential timing requirement of predictive solutions is not fulfilled.
Therefore, in this paper, we propose a predictive algorithm for device lifecycle and mobility management in CPS, able to predict the future states of the system and to calculate and execute the appropriate actions before future changes or fatal errors occur. The proposed algorithm is based on a CPS simulator and interpolation functions able to calculate the future system states. These results feed a decision-making module, which is not part of the proposed predictive algorithms, but which is required to manage the mobility and the device lifecycle. The proposed solution presents a flexible definition, so it could be adapted to any application scenario of CPS (low-rate networks, intelligent devices, etc.). Moreover, an experimental validation is provided, considering a particular implementation of the proposed algorithm. The proposed experiment proves that in more than 95 % of cases, the algorithm manages the system changes successfully.
The rest of the paper is organized as follows: Section 2 introduces the state of the art in mobility and device lifecycle in CPS. Section 3 presents the application scenario and proposes the predictive algorithm. Section 4 describes a particular implementation of the algorithm, used in the experimental validation. Finally, Sections 5 and 6 explain some results of this experimental validation and the conclusions of our work.
Works on managing the device lifecycle in CPS are not common. In general, papers which treat this topic are focused on configuring the whole CPS, rather than specifically on managing the hardware platform. Then, in general, particular details about hardware issues (such as controlling the remaining battery charge in wireless devices) are not addressed.
In this area, different types of proposals are found. Some works are focused on the use of artificial intelligence algorithms being able of automatically adding, removing, and configuring components in CPS [10]. Others describe special tools (such as middleware layers) which, at the end, need the human intervention [11]. Works describing algorithms where each device generates a description file when a change occurs in the system, which is processed by the rest of the platform, may be also found [12]. Finally, papers discussing the installation of an intelligent scheduler in each device, communicating with a central core where decisions are taken are also available [13, 14].
However, as can be seen, all the previous proposals follow a reactive scheme which is not valid in our general scenario (it must be noted that, sometimes, these solutions could be valid, if some assumptions are considered).
The topic of mobility support in CPS has received more attention than device lifecycle, although it is not a main research line nowadays. In general, two types of mobility may be defined in CPS: intra-system and inter-system.
In intra-system mobility scenarios, devices move inside the coverage area of one CPS. Proposals about this topic delegate the mobility control to the underlying network [15]. In these works, devices are clients of a mobile network through which a data service (representing the CPS applications) is provided. These solutions present a good performance; however, as we said in the introduction, nowadays concepts such as the Cyber-Physical Internet have to be considered, and this scheme cannot cover the requirements of these scenarios. Then, the mobility problem in CPS is focused on inter-system mobility.
In the case of inter-system mobility, works usually propose application-specific mobile frameworks (based on SOA, for example), adapted to the particular mobility requirements of their scenario [16, 17]. Moreover, some authors incorporate GPS transponders into devices, in order to provide geographical data to a computational core. Then, the core may determine whether devices are in the coverage area or not [18]. Nevertheless, all these proposals (including those cited for the intra-system case) follow a reactive scheme and are valid only in very restrictive scenarios (as we said previously).
Finally, relating to inter-system mobility management, predictive schemes have also been proposed [19]. In these solutions, predictive algorithms are based on simulators which use numerical integration functions (one for each device) [20]. Then, the resulting algorithms present an order of complexity of $\mathcal{O}(n^2)$ in both the number of considered devices and the number of instants for which the system state is calculated. Thus, in scenarios where hundreds of devices are included, and/or where the dynamics requires simulating the system state at several time instants, the time needed to simulate a certain future situation may be higher than the remaining time until it is actually reached.
Therefore, basically, it is necessary to design a predictive algorithm able to manage both the device lifecycle and the mobility and to predict the future system states before the system reaches them. The solution should be as general as possible and should allow including a high number of devices and obtaining the system state at as many time instants as needed. For that, we propose an algorithm based on an infinite loop, where four steps are repeated in each iteration: (i) data about the real system state are acquired from hardware devices; (ii) a CPS simulator obtains a certain number of future system states using the acquired data; (iii) the rest of the system states are obtained using interpolation techniques; and (iv) the decision-making algorithms execute the adequate actions based on the predicted future system states. The use of interpolation techniques, as we will see, introduces a greater error in the predicted future states. However, it strongly reduces the time needed to obtain a sequence of future states. In the experimental validation section we prove that the resulting solution allows predicting and managing the changes in the system correctly, despite the additional error introduced by interpolation techniques.
Proposal: a predictive algorithm based on CPS simulation
In this section, we analyze the general characteristics of the base scenario, show the functional architecture for the proposed solution and describe the proposed predictive algorithm.
Base scenario, functional architecture
As we said in Section 2, we are looking for an algorithm as general as possible. However, CPS may be deployed in very different scenarios: from small-scale applications to large-scale solutions [21]. Then, a deep analysis of device lifecycle and mobility in CPS should be based on the definition and characterization of several different deployments (manufacturing [7], irrigation [22], etc.), architectures, implementations, and/or use cases. Nevertheless, in our case, we are going to work from some general characteristics (common to most cases), which are:
Various small-scale CPS are deployed in the same area. Besides, the high-level applications and processes associated to one CPS are not modified or affected by changes in the hardware platform.
The communication among the different CPS is supported by the so-called Cyber-Physical Internet [6], to which CPS are connected through an "Inter-system services interface."
Two types of CPS are defined: internal-CPS and border-CPS. Internal-CPS present a coverage area completely surrounded by the coverage areas of other CPS. In border-CPS, the limits of the coverage area represent the geographical limits of the Cyber-Physical Internet.
The movement of the mobile devices can be defined as follows: (a) free, when there is no external control; (b) controlled, when it is possible to plan the route for them; and (c) limited to one or several specific routes, independently of whether they are free or controlled.
Only predictable changes are considered. In real systems, both predictable and unpredictable changes occur: devices run out of battery charge or leave a coverage area (which are predictable changes), but also they get broken down or the tasks they are executing get blocked (which are unpredictable changes). Unpredictable events have to be addressed using reactive techniques [23], which are not discussed in this work but which must complement the proposed solution in commercial deployments.
The first and second points focus the mobility problem on inter-system mobility. As we said, valid solutions for intra-system mobility may be found, and the mobility problem in CPS is centered almost exclusively on inter-system mobility. The third point refers to the geographic limitation of the small-scale CPS and their underlying network which, in general, does not have worldwide coverage. Figure 1 shows graphically the network architecture resulting from the first three points.
Network architecture of CPS and Cyber-Physical Internet
The fourth point is directly related to the great diversity of devices which can be considered: from robots to wearable devices. Each one of these devices presents a different movement and the proposed solution should consider all of them.
Finally, the fifth point limits the scope of the proposed solution to predictable situations. These situations represent the majority of changes in the regular operation of a CPS (or, even, in traditional communication networks [24]). However, traditional reactive techniques (such as timers) may complement the proposed predictive solution, especially in commercial deployments where unpredicted system failures might occur and have to be managed.
The functional architecture for supporting the main proposal of this paper (the predictive algorithms, see Section 3.2) can be seen in Fig. 2. Various components may be distinguished.
Functional architecture
Peripherals: They include all the components which support the task execution in a CPS (sensors, actuators, processors, etc.). They communicate through an appropriate middleware (such as a serial port) with their associate controllers.
Hardware controllers: It refers to all the components which manage and control the operation of the peripherals. In particular, they include all the necessary drivers to operate the peripherals. Each hardware controller has a certain number of peripherals associated to it. A hardware device which implements both peripherals and, at least, one hardware controller is called a self-managed device or intelligent device.
Hardware manager: The entire hardware platform is managed from this component. In particular, it includes the module which implements the proposed algorithms (the mobility and device lifecycle predictive manager). It also implements the decision-making algorithms, executes the transactions for system reconfiguration, and makes independent changes in the hardware platform from the high-level applications or processes.
A predictive algorithm for device lifecycle and mobility management
At the start of the system, in the mobility and device lifecycle predictive manager of the hardware manager shown in Fig. 2, a model M of the underlying hardware platform is created. In this model, the ith device is represented by a collection $m_i$ of $n_i$ relevant parameters (1).
$$ {m}_i=\left\{{\mathrm{id}}_i,\ {\mathrm{type}}_i,\ {\mathrm{param}}_1, \dots,\ {\mathrm{param}}_{n_i}\right\} $$
The device model $m_i$ includes a unique identification $\mathrm{id}_i$ and a type identifier $\mathrm{type}_i$. The type identifier takes values from a set T, where each value represents a different class of device (2). The type identifier also indicates the pattern which the collection $m_i$ must follow in each case. Examples of possible relevant parameters are the geographical position or the battery charge level.
$$ T = \left\{\mathrm{device}\_{\mathrm{class}}_j\kern0.75em j=1,\dots, Q\right\} $$
Then, the entire model M is a list of collections of parameters $m_i$ (3). In order to allow its mathematical treatment (for example, in the interpolation functions), this model can also be expressed as a matrix S (4). We call this matrix, which represents the value of all important parameters in the system, the "system state matrix" or, for simplicity, the "system state".
$$ M = \left\{{m}_1,{m}_2,\dots, {m}_k\right\} $$
$$ S = \left(\begin{array}{c}\hfill {m}_1\hfill \\ {}\hfill \dots \hfill \\ {}\hfill {m}_k\hfill \end{array}\right) $$
In (3) and (4), k represents the total number of devices in the CPS. As, in general, not all device models $m_i$ have the same number of relevant parameters $n_i$, the dimensions of the system state matrix are homogenized by adding as many zeros as needed (these additional zeros allow the mathematical treatment of the system state matrix but are ignored in the decision-making algorithms).
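As an illustration of equations (1)-(4), the following minimal sketch (our own, not from the paper; `build_state_matrix` is a hypothetical helper) builds the system state matrix from heterogeneous device models with zero-padding:

```python
import numpy as np

def build_state_matrix(models):
    """Stack device models m_i = [id_i, type_i, param_1, ..., param_n_i]
    into the system state matrix S, zero-padding shorter rows; the padding
    zeros are ignored later by the decision-making algorithms."""
    width = max(len(m) for m in models)
    return np.array([m + [0.0] * (width - len(m)) for m in models])

models = [
    [1, 1, 9, 8, 7],   # m_1
    [2, 1, 9, 8],      # m_2 has one parameter fewer -> one zero is appended
    [3, 1, 9, 8, 7],   # m_3
]
S = build_state_matrix(models)
print(S)   # 3 x 5 system state matrix
```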
In the mobility and device lifecycle predictive manager, a simulator able to simulate CPS scenarios is also included. This simulator will take the system state at a certain instant $t = t_0$, \( {S}_{t_0} \), and will obtain the system state h seconds later, \( {S}_{t_0+h} \). Additional zeros added to homogenize dimensions in the system state matrix will not be taken into account.
Two additional important parameters are the time step $h$ and the total simulated time $T_{\mathrm{sim}}$. Then, the predictive algorithm must obtain the system state every $h$ seconds, between the current time $t_0$ and $t_0 + T_{\mathrm{sim}}$. In order to obtain useful information, the calculation should finish before the first time step passes, i.e., before $t = t_0 + h$.
In previous proposals about predictive mobility management systems, all the needed future states are obtained by means of the CPS simulator. However, in complex systems where a great number of devices are considered, and/or where several time instants have to be obtained (i.e., the time step $h$ is very small), timing conditions cannot be fulfilled, i.e., the calculations do not finish before $t = t_0 + h$. As a solution, we propose to generate some future states using interpolation techniques (which are much faster and lighter, although they present a greater error) instead of through the CPS simulator. Then, three different types of system states may be found:
Intra-state: It represents the real state of the system at a certain instant $t = t_0$. In this state, the values of the parameters in the model are obtained from the physical devices through a data acquisition process. The cited data acquisition process could be based on a periodical transmission of information from the physical devices to the management unit, or on a solution where the management unit requests the devices for that information. In both cases, solutions based on JSON objects or XML description files are the most adequate (although any other could be used). The intra-state is the most precise way of describing the system; however, it is also the most costly (in time and processing capabilities). These states will be noted as $S^I$ or $I$. At least one intra-state has to be composed at the beginning of each iteration of the infinite loop.
Predictive state: It refers to the states obtained from the simulator. To obtain a predictive state, it is necessary to dispose, first, of an intra-state. From one intra-state, we may obtain as many predictive states as desired. However, the precision of the prediction goes down as the temporal instant we try to predict gets farther away. On the other hand, the time needed to calculate a predictive state is lower than that necessary to acquire an intra-state. These states will be noted as $S^P$ or $P$.
Interpolated state: It refers to the states obtained by interpolating two predictive states, or one intra-state and one predictive state. These states are really fast to obtain but present the biggest error. This error goes up when more temporal distance exists between the interpolated states. These states will be noted as $S^B$ or $B$.
The proposed algorithm is based on the definition of a sequence of predictive and interpolated states, describing the system at a collection of future instants $\{t_0, t_0 + h, \dots, t_0 + T_{\mathrm{sim}}\}$. Every sequence of states must start with an intra-state. And, later, a predictive or interpolated state is obtained for each time step, according to a predesigned pattern. The selected pattern might be very varied and should be adapted to the specific application scenario. In general, a balance between precision, calculation speed, and the amount of data transmitted must be achieved. Figure 3 shows some possible schemes, depending on the considered scenario.
Possible sequences of states
Given a certain time step $h$ and a total simulated time $T_{\mathrm{sim}}$, the precision of the sequence of states goes up when the number of predictive states included increases. However, the calculation time also increases and, perhaps, the timing condition cannot be met. On the contrary, if more interpolated states are considered, the calculation time will go down strongly. Nevertheless, the error in the predicted states will be greater, and some changes might not be predicted. We argue that it is possible to find a pattern adapted to the considered CPS, such that it allows fulfilling the timing condition while the additional introduced error does not affect the correct prediction of changes in the system. In Sections 4 and 5, we prove that.
On the other hand, the precision of the predicted states also goes up when the time step $h$ decreases. Nonetheless, in this case, meeting the timing condition (i.e., the calculation time must be smaller than the time step) gets more complicated. Finally, $T_{\mathrm{sim}}$ may be used as a design parameter. In general, if the considered CPS presents a very variable behavior, the simulation time $T_{\mathrm{sim}}$ will tend to be small (as well as the time step).
Then, in order to clarify the proposed algorithm, a numerical example is provided. In a CPS, four equal devices are available. Each device is represented by three relevant parameters. Numerically, the four models get described by
$$ {m}_1 = \left\{1,\ 1,\ 9,8,7\right\}\kern1em {m}_2 = \left\{2,\ 1,\ 9,8,7\right\}\kern1em {m}_3 = \left\{3,\ 1,\ 9,8,7\right\}\kern1em {m}_4 = \left\{4,\ 1,\ 9,8,7\right\} $$
These models are ordered as a matrix, which corresponds with the first intra-state obtained:
$$ S=\begin{pmatrix} 1 & 1 & 9 & 8 & 7 \\ 2 & 1 & 9 & 8 & 7 \\ 3 & 1 & 9 & 8 & 7 \\ 4 & 1 & 9 & 8 & 7 \end{pmatrix}={S}_{t_0}^I $$
Using a CPS simulator, a predictive state is obtained:
$$ {S}_{t_0+T}^P=\begin{pmatrix} 1 & 1 & 6 & 8 & 5 \\ 2 & 1 & 1 & 0 & 0 \\ 3 & 1 & 2 & 6 & 6 \\ 4 & 1 & 9 & 8 & 7 \end{pmatrix} $$
And, finally, using matrix interpolation techniques, at least one interpolated state is calculated:
$$ {S}_{t_0+\frac{T}{2}}^B=\begin{pmatrix} 1 & 1 & 8 & 8 & 6 \\ 2 & 1 & 5 & 4 & 4 \\ 3 & 1 & 6 & 7 & 6 \\ 4 & 1 & 9 & 8 & 7 \end{pmatrix} $$
Then, the situation of the devices for each temporal instant is obtained by reconstructing the device models from the previous matrices:
$$ t = \frac{T}{2}\kern2em {m}_1 = \left\{1,\ 1,\ 8,8,6\right\}\kern1em {m}_2 = \left\{2,\ 1,\ 5,4,4\right\}\kern1em {m}_3 = \left\{3,\ 1,\ 6,7,6\right\}\kern1em {m}_4 = \left\{4,\ 1,\ 9,8,7\right\} $$
$$ t=T\kern2.25em {m}_1 = \left\{1,\ 1,\ 6,8,5\right\}\kern1em {m}_2 = \left\{2,\ 1,\ 1,0,0\right\}\kern1em {m}_3 = \left\{3,\ 1,\ 2,6,6\right\}\kern1em {m}_4 = \left\{4,\ 1,\ 9,8,7\right\} $$
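A minimal sketch of the state interpolation step in this example, using simple linear interpolation between the intra-state and the predictive state (the paper's validation actually used matrix cubic splines, and the example values above are rounded, so individual entries may differ slightly):

```python
import numpy as np

def interpolate_state(S_a, t_a, S_b, t_b, t):
    """Entry-wise linear interpolation between two known system states."""
    w = (t - t_a) / (t_b - t_a)
    return (1.0 - w) * S_a + w * S_b

S_I = np.array([[1, 1, 9, 8, 7],
                [2, 1, 9, 8, 7],
                [3, 1, 9, 8, 7],
                [4, 1, 9, 8, 7]], dtype=float)    # intra-state at t0
S_P = np.array([[1, 1, 6, 8, 5],
                [2, 1, 1, 0, 0],
                [3, 1, 2, 6, 6],
                [4, 1, 9, 8, 7]], dtype=float)    # predictive state at t0 + T

S_B = interpolate_state(S_I, 0.0, S_P, 1.0, 0.5)  # interpolated state at t0 + T/2
print(S_B)
```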
As we said, the proposed sequence of predictive and interpolated states must be repeated in an infinite loop, which in each iteration starts with obtaining an intra-state. This new intra-state could be obtained for an instant posterior to the calculated interpolated and predictive states. However, as we said, obtaining an intra-state is a very costly task in time. Then, a temporal gap could appear in the management activities due to the delay of the intra-state generation process. In order to avoid that, the new intra-state may be acquired for a moment before the end of the current sequence (see Fig. 4).
Possible restarts of the sequence of states
Moreover, in the second case, and if devices in the hardware platform present a certain level of intelligence, the intra-state may be generated using differential acquisition (see Fig. 4). In differential acquisition, each device is able to predict (and interpolate) its future states. Then, in order to reduce the required time to collect the information about the real device state, they only transmit to the hardware manager the differences between one of the last predictive or interpolated states in the sequence and the reality. Later, the hardware manager reconstructs the complete intra-state. This solution is in general much more efficient (especially if information compression is also enabled) and, overall, faster (which helps to meet the timing condition).
Then, considering all previous discussion, Algorithm 1 presents the general management algorithm.
In summary, Algorithm 1 creates the first intra-state and calculates the sequence of interpolated and predictive states using the obtained intra-state and given the desired pattern. Then, it calls the decision-making process with the calculated sequence of future states and performs the appropriate actions. Finally, it waits until it is necessary to obtain the next intra-state, and then acquires it (using differential acquisition if available).
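Algorithm 1 itself is presented as a figure in the original article; the following sketch reflects only the textual summary above, and every function name in it is a placeholder:

```python
def management_loop(intra_state, pattern, differential_acquisition):
    """Infinite management loop sketched from the summary of Algorithm 1."""
    while True:
        # Steps (ii)-(iii): simulate and interpolate the sequence of future states.
        future_states = calculate_future_states(intra_state, pattern)
        # Step (iv): decide and execute the appropriate actions.
        actions = decision_making(future_states)
        execute(actions)
        # Wait until the next intra-state must be acquired (before the
        # current sequence ends, to avoid a gap in the management activities).
        wait_until_next_acquisition_instant()
        if differential_acquisition:
            # Intelligent devices send only the differences with respect to
            # one of the last predicted states in the sequence.
            diffs = receive_state_differences()
            intra_state = reconstruct_intra_state(future_states, diffs)
        else:
            intra_state = acquire_intra_state()   # step (i): read real devices
```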
In Algorithm 1, two procedures remain undefined: the decision-making function and the calculation process of the future states (including the simulation and interpolation process). With respect to the first function (decision-making), it is totally independent of the proposed predictive algorithms (see Section 3.3). Concerning the second function (calculation of the future states), Algorithm 2 represents its implementation.
As can be seen in Algorithm 2, two parameters are taken as input: the original intra-state and the scheme of predictive and interpolated states. This scheme has the following structure (5).
$$ \mathrm{S}\mathrm{S} = \left\{i{\mathrm{nstant}}_{\mathrm{pred}},\ {\mathrm{instant}}_{\mathrm{interp}}\right\} $$
SS is a list of two elements where the first element, $\mathrm{instant}_{\mathrm{pred}}$, is a list of the temporal instants for which a predictive state must be calculated. The second element, $\mathrm{instant}_{\mathrm{interp}}$, is a list of lists indicating the structure of the interpolated states (6).
$$ {\mathrm{instant}}_{\mathrm{interp}} = \left\{{\left\{{T}_1,\dots, {T}_q\right\}}_i,\ i=1,\dots, \mathrm{length}\left({\mathrm{instant}}_{\mathrm{pred}}\right)\right\} $$
In $\mathrm{instant}_{\mathrm{interp}}$, each list indicates the temporal instants for which an interpolated state must be calculated. The first list indicates the instants between the first intra-state and the first predictive state, the second list indicates the instants between the first and the second predictive states, etc.
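A minimal sketch of the scheme structure (5)-(6) and of the future-state calculation it drives, in the spirit of Algorithm 2 (`simulate` and `interpolate_state` stand in for the CPS simulator and the interpolation routine, respectively; the body is our reading of the text, not the paper's own listing):

```python
# Default pattern from Fig. 3 with a time step of 1 s: one predictive state
# at t = 3 and interpolated states at t = 1 and t = 2.
SS = {
    "instant_pred": [3.0],
    "instant_interp": [[1.0, 2.0]],   # instants before the 1st predictive state
}

def calculate_future_states(intra_state, scheme, simulate, interpolate_state):
    """One simulator run per predictive instant; cheap interpolation for the rest."""
    states = {0.0: intra_state}
    prev_t, prev_state = 0.0, intra_state
    for t_pred, t_interps in zip(scheme["instant_pred"], scheme["instant_interp"]):
        pred_state = simulate(prev_state, t_pred - prev_t)   # predictive state
        for t in t_interps:                                  # interpolated states
            states[t] = interpolate_state(prev_state, prev_t, pred_state, t_pred, t)
        states[t_pred] = pred_state
        prev_t, prev_state = t_pred, pred_state
    return states
```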
Considering Algorithm 2, a first numerical evaluation of the impact of using interpolated states on the calculation time may be done. As we said, simulators present an order of complexity of $\mathcal{O}(n^2)$ in both the number of considered devices and the number of instants for which the system state is calculated. On the contrary, interpolation algorithms present an order of complexity of $\mathcal{O}(n)$ in both quantities. Then, we consider the time needed to calculate an intra-state for a certain CPS, $T_I$; the time required to calculate one predictive state in the same CPS, $T_P$; and, finally, the time needed to calculate one interpolated state in that CPS, $T_B$. If we suppose the selected pattern for the sequence of future states is the default configuration (see Fig. 3), the time employed in the calculation process if interpolated states are not considered is (7). In (7), $T_{3-P}$ is the calculation time for three predictive states.
$$ {T}_I+{T}_{3-P} = {T}_I+{3}^2{T}_P $$
In the same situation, if interpolated states are considered, the needed time is (8).
$$ {T}_I+{T}_P + {T}_{2-B} = {T}_I+{T}_P+2{T}_B $$
In the general case, $T_I > T_P > T_B$. In order to obtain a first evaluation, we imagine that $T_I \approx 3T_P$ and $T_P \approx T_B$. In those circumstances, the calculation time falls from $T_I + 9T_P = 12T_P$ to $T_I + T_P + 2T_B = 6T_P$, so the use of interpolated states reduces the calculation time by around 50 %. Then, it is clear that the use of interpolated states helps to fulfill the timing condition.
On the other hand, it is important to remark that interpolation techniques also present some disadvantages. As we said, interpolated states have a low precision, although they can be calculated in a very fast way. The second property helps to fulfill timing requirements, but the first one may require more complex decision-making algorithms (depending on the case). If interpolation techniques are correctly designed and the CPS presents a regular behavior, then the loss of precision does not affect the mobility and device lifecycle management significantly. However, in CPS whose behavior tends to be very dynamical and changing (or if interpolation techniques are not correctly planned), the loss of precision might be remarkable, and the decision-making modules should implement mechanisms to operate in those conditions (which complicates the algorithm design and programming).
Finally, in Algorithm 2, any of the available CPS or Internet-of-Things (IoT) simulators is valid. For example, the simulator CyPhySim [20, 25] proposed by the Berkeley University could be employed. Other options are the NS3 simulator [26], the SimpleIoTsimulator suite [27], or the combination of MATLAB and Simulink [28]. Relating to the interpolation process, any of the available techniques may also be used: from linear interpolation [29] to cubic splines [30].
Decision-making and predictive solutions
Once a sequence of future states is generated, the decision-making algorithms evaluate the information and decide to execute the appropriate actions. Many possible decision-making algorithms may be employed within the predictive algorithms. In the most capable case, intelligent decision-making [31] would be used. However, in this work, we have designed a simpler strategy.
In our proposal, in each sequence of future states, the decision-making algorithm looks for the relevant changes defined in the system (such as a device running out of battery charge). Then, if any of them is found, the system will initiate the transaction for the system reconfiguration immediately. Once all the parameters have been negotiated, all data transmitted, etc., the transaction is left pending. At the moment when the change really occurs, the transaction is finally accepted and the CPS is reconfigured. If, at the end, the change never happens, the transaction is canceled. Algorithm 3 presents the decision-making function specifically designed for this work.
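Algorithm 3 is likewise given as a figure; the transactional strategy described in the paragraph above might be sketched as follows (all names are placeholders, and the reactive fallback is outside the scope of the paper):

```python
def decision_making(future_states, change_detectors):
    """Scan the predicted future states for relevant changes and prepare
    a reconfiguration transaction for each one found (left pending)."""
    pending = []
    for t in sorted(future_states):
        for detect in change_detectors:          # e.g. handover, battery-out
            change = detect(future_states[t], t)
            if change is not None:
                pending.append((change, prepare_reconfiguration(change)))
    return pending

def on_event(observed_change, pending):
    """Commit a pending transaction when its predicted change really occurs;
    transactions whose change never happens are eventually canceled."""
    for predicted_change, transaction in pending:
        if predicted_change == observed_change:
            transaction.commit()
            return True
    return False   # unpredicted change: handle reactively (not covered here)
```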
On the other hand, in advanced scenarios, the decision-making module may also change the scheme of interpolated and predictive states. As we said, CPS present a very dynamical behavior. In some circumstances, the scheme for the interpolated and predictive states selected at first might not be adequate when the CPS evolves. In those cases, the decision-making algorithms may implement mechanisms to detect the increase in the number of unpredicted changes and to reconfigure the scheme of states in order to be more effective.
Practical implementation and experimental validation
For the first practical implementation of the proposed solution, we have selected as deployment scenario a laboratory at the Technical University of Madrid. The laboratory is organized in three rooms (see Fig. 5), with a different CPS (i.e., a different hardware manager) deployed in each room.
Deployment scenario
In this scenario, three predictable relevant changes can occur: (a) devices move from the coverage area of one CPS to the coverage area of another, (b) devices leave all the deployed CPS, or (c) devices run out of battery and become unavailable. In the first case, a handover must be executed. In the second and third cases, if necessary, tasks being executed by the device have to be delegated or canceled. In advanced scenarios, techniques for avoiding the predicted events could also be employed. For example, in applications involving humans, messages to prevent them from leaving the system could be sent. These advanced solutions, however, are not considered in this work.
In CPS, processes are not assigned to only one device. Instead, processes are divided into several tasks which are executed across the hardware platform. For this reason, if one device is transferred to another CPS, the whole context (process) cannot be transferred. Then, handovers in our CPS are managed as a case of IPv6 mobility [32] and not as traditional handovers from mobile networks. In that way, the target CPS acts only as a proxy, connecting the source CPS with the transferred device.
In the case that the device leaves all CPS or runs out of battery charge, if the estimated time to finish the assigned tasks is greater than the remaining time until the device becomes unavailable, resources are allocated in another device or devices, and the necessary data are transmitted to them. When the device becomes unavailable, the new execution is activated. If, at the end, the device remains available, the allocated resources are released.
We performed a system deployment in order to validate the proposed solution. For that, the hardware platform in the described CPS is made of various smartphones and tablets, on which an application called CPS Client was installed. This application (see Fig. 6) allows the device to become part of the hardware platform when pressing the connect button and releases the device when pressing the disconnect button. When acting as an element of the hardware platform, the device always tries to be connected and periodically sends the data needed by the hardware manager to create the mandatory intra-states. It also shows some information about the activity in the system.
CPS client application
In this scenario, differential acquisition is not available, and the employed simulator is an integration of the NS-3 simulator and the suites MATLAB and Simulink [33]. The pattern of predictive and interpolated states is as follows (9) (time given in seconds). As can be seen, $h = 1$ s and $T_{\mathrm{sim}} = 3$ s. In order to select these values, we tried not to commit errors beyond the maximum typical error considered in engineering. We considered walking people to have an average speed of $1.1\,\mathrm{m/s}$ [34]. Then, in a 30 m long room, in 3 s a person changes its position by more than 10 % (the typical limit to consider a value negligible in engineering).
$$ S_{t=0}^I \quad S_{t=1}^B \quad S_{t=2}^B \quad S_{t=3}^P $$
In the proposed scheme of future system states, two interpolated states have been considered. Walking people tend to follow a very predictable path and batteries are discharged slowly, so various interpolated states may be included without committing great errors. It must be noted that a matrix cubic spline technique was implemented as interpolation technology.
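As an illustration of the matrix cubic spline technique mentioned above, a minimal SciPy sketch (a spline needs several knots, so this assumes a sequence of known states rather than the two-state pattern above; the state matrices here are random stand-ins):

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_known = np.array([0.0, 1.0, 2.0, 3.0])        # instants of the known states
rng = np.random.default_rng(0)
S_known = rng.uniform(0, 9, size=(4, 4, 5))     # four 4x5 state matrices

spline = CubicSpline(t_known, S_known, axis=0)  # entry-wise spline along time
S_half = spline(0.5)                            # interpolated state at t = 0.5 s
print(S_half.shape)                             # (4, 5)
```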
In this scenario, thirty-six (36) people were provided with the CPS Client and were requested to operate in the coverage area of the CPS. They could also leave that zone. The human behavior perfectly represents the complicated dynamics of CPS. Data about the number of managed changes, the success rate, and the calculation time were collected.
More than 400 relevant changes were registered while performing the experimental validation. Figure 7 shows the distribution of the registered changes, depending on their type. As can be seen, most of them were handovers (55 %). This fact agrees with the scenario configuration, where three different CPS were deployed. On the other hand, changes about devices running out of battery charge were few (3 %), as most devices had batteries with enough capacity to be independent for dozens of hours.
Classification of the registered events into types
Most of these changes were successfully managed, i.e., the expected actions were taken when the change occurred (and only if the change occurred). The aggregated success rate is slightly over 95 % (see Fig. 8). With respect to the wrongly managed events (around 4 %), all errors were due to changes unpredicted by the simulator, so the management algorithm failed. The underlying cause of these misinterpretations is the dynamical model employed to represent the system evolution, which for some phenomena (such as the battery discharge process) implements laws which are only approximate mathematical functions, due to the complex behavior of these phenomena. For example, the battery charge is typically overestimated, and devices may turn off before executing the adequate actions. Advanced CPS models [35] would solve this situation.
Aggregated success rate
In conclusion, the proposed predictive algorithm, where patterns include interpolated states, clearly allows the correct management of changes in the hardware platform (despite the greater error included).
In fact, around 4 % of events were unpredicted by the simulator. However, some additional considerations should be given to Fig. 8. If the success rate is disaggregated into the three event types, some relevant facts can be exposed (see Fig. 9).
Disaggregated success rate
First, as can be seen, the success rate in the case of transferring a device to a second CPS (a handover) is higher than the aggregated rate. If we consider a device leaving the entire deployment, the success rate is, more or less, equal to the aggregated rate. These high values for the success rate may be associated with a good model for the device movement in the simulator. Then, most displacements are correctly predicted.
On the other hand, the success rate in the case of a device running out of battery charge is much lower (around 30 % lower) than the aggregated rate. Clearly, that is due to the complexity associated with battery charge simulation. Generic models are employed in these simulations, but they poorly fit the real evolution of the battery charge. Then, an important percentage of the events of this type are not predicted. Nevertheless, as this type of event only represents 3 % of the total amount of events, the impact on the aggregated success rate is limited.
Finally, Fig. 10 represents the histogram of the calculation time employed in obtaining each one of the 250 sequences of future states generated during the experimental validation. As can be seen, the most probable time is in the interval of 0.3-0.4 s, clearly smaller than the time step ($h = 1$ s). Moreover, the maximum calculation time was 0.57 s, which is still smaller than the time step. In conclusion, the proposed solution meets the timing condition.
Calculation time distribution
Most practical discussions about the characteristics of Cyber-Physical Systems (CPS) conclude that they are composed of wireless, embedded, mobile devices. Thus, techniques for mobility and device lifecycle management are necessary. Traditional reactive solutions are not valid in CPS, as they present a complex, and sometimes random, dynamic. Previously proposed basic predictive solutions cannot be employed, as they do not fulfill the timing requirements. Therefore, in this paper, we propose an advanced predictive technique for managing the mobility and device lifecycle, able to meet all requirements of CPS. The solution is based on an infinite loop, which calculates, in each iteration, a sequence of future system states using a CPS simulator and interpolation algorithms.
As we saw, the obtained success rate is higher than 95 %, so the proposed solution correctly manages the changes in the hardware platform. However, the most complicated elements (such as the battery charge) present difficulties to be simulated correctly. Besides, timing requirements are comfortably met.
The proposed solution is valid for all types of CPS; however, improved simulation models are needed and tools for generating them automatically are also necessary. Obtaining these technologies will determine the future commercial success of our proposal. Moreover, reactive techniques which complement the proposed predictive solution are necessary, in order to create a really useful hardware management algorithm for CPS.
KD Kim, PR Kumar, Cyber-Physical Systems: a perspective at the centennial. Proc. IEEE 100(Special Centennial Issue), 1287–1308 (2012)
K Wan, KL Man, D Hughes, Specification, analyzing challenges and approaches for Cyber-Physical Systems (CPS). Eng. Lett. 18(3), 308 (2010)
M Jiménez, R Palomera, I Couvertier, Introduction to embedded systems (Springer, New York 2013). ISBN: 978-1-4614-3142-8. doi: 10.1007/978-1-4614-3143-5
S Aram, I Khosa, E Pasero. Conserving energy through neural prediction of sensed data. JoWUA. 6(1), 74-97 (2015)
A Koubâa, B Andersson, A vision of cyber-physical internet. In 8th International Workshop on Real-Time Networks (RTN'09) (Instituto Politécnico do Porto. Porto, 2009). http://hdl.handle.net/10400.22/3837
A Bindel, P Conway, L Justham, A West, New lifecycle monitoring system for electronic manufacturing with embedded wireless components. Circuit World 36(2), 33–39 (2010)
B. Bordel Sanchez, Á. Sánchez-Picot, & D. Sanchez De Rivera, Using 5G technologies in the Internet of things handovers, problems and challenges. in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on (IEEE, 2015), pp. 364-369.
B Bordel Sánchez, R Alcarria, D Martín, T Robles, TF4SM: a framework for developing traceability solutions in small manufacturing companies. Sensors 15(11), 29478–29510 (2015)
V Katiyar, N Chand, N Chauhan, Recent advances and future trends in wireless sensor networks. Int. J. Appl. Eng. Res. 1(3), 330 (2010)
N. Keddis, G. Kainz, C. Buckl, & A. Knoll, Towards adaptable manufacturing systems. in Industrial Technology (ICIT), 2013 IEEE International Conference on (IEEE, 2013), pp. 1410-1415
DD Hoang, HY Paik, CK Kim, Service-oriented middleware architectures for Cyber-Physical Systems. Int. J. Comput. Sci. Netw. Secur 12(1), 79–87 (2012)
U Mönks, H Trsek, L Dürkop, V Geneiß, V Lohweg, Assisting the design of sensor and information fusion systems. Procedia Technol 15, 35–45 (2014)
T. Dillon, V. Potdar, J. Singh, & A. Talevski, Cyber-Physical Systems: providing quality of service (QoS) in a heterogeneous systems-of-systems environment. in Digital Ecosystems and Technologies Conference (DEST), 2011 Proceedings of the 5th IEEE International Conference on (IEEE, 2011), pp. 330-335.
TS Dillon, H Zhuge, C Wu, J Singh, E Chang, Web‐of‐things framework for Cyber-Physical Systems. Concurrency Comput. Pract. Exp 23(9), 905–923 (2011)
D. Work, A. Bayen, & Q. Jacobson, Automotive cyber physical systems in the context of human mobility. in National Workshop on high-confidence automotive Cyber-Physical Systems (Troy, Berkeley University 2008), http://bayen.eecs.berkeley.edu/sites/default/files/conferences/cps1.pdf
V. C. M. X. Hu, W. Leung, B.C. Du, P. Seet, & P. Nasiopoulos, A service oriented mobile social networking platform for disaster situations in Proc. 46th HICSS (2013), pp. 136–145
X Hu, T Chu, H Chan, V Leung, Vita: a crowdsensing-oriented mobile Cyber-Physical System. Emerg. Top. Comput. IEEE Trans 1(1), 148–165 (2013)
C Fok, A Petz, D Stovall, N Paine, C Julien, S Vishwanath, Pharos: a testbed for mobile Cyber-Physical Systems. (The University of Texas, The Center for Advanced Research in Software Engineering, Austin, 2011). TR-ARiSE-2011-001
J Fink, A Ribeiro, V Kumar, Robust control for mobility and wireless communication in Cyber-Physical Systems with application to robot teams. Proc. IEEE 100(1), 164–178 (2012)
E.A. Lee, M. Niknami, T.S. Nouidui, & M. Wetter, Modeling and simulating Cyber-Physical Systems using CyPhySim. in Proceedings of the 12th International Conference on Embedded Software (IEEE Press, 2015), pp. 115-124
R Poovendran, K Sampigethaya, SKS Gupta, I Lee, KV Prasad, D Corman, JL Paunicka, Special issue on Cyber-Physical Systems [scanning the issue]. Proc. IEEE 100(1), 6–12 (2012)
T Robles, R Alcarria, D Martín, M Navarro, R Calero, S Iglesias, M López, An IoT based reference architecture for smart water management processes. J. Wirel. Mob. Netw. Ubiquit. Comput. Dependable Appl 6(1), 4–23 (2015)
F-Y Leu, H-L Chen, C-C Cheng, Improving multi-path congestion control for event-driven wireless sensor networks by using TDMA. J. Internet Serv. Inf. Secur 5(4), 1–19 (2015)
P Satam, H Alipour, Y Al-Nashif, S Hariri, Anomaly behavior analysis of DNS protocol. J. Internet Serv. Inf. Secur 5(4), 85–97 (2015)
C. Brooks, E. A. Lee, D. Lorenzetti, T. S. Nouidui, & M. Wetter, M. CyPhySim: a Cyber-Physical Systems simulator. in Proceedings of the 18th International Conference on Hybrid Systems: Computation and Control (ACM, 2015), pp. 301-302
NS3 simulator webpage. Available online: https://www.nsnam.org/ Accessed on 25 May 2016
SimpleIoTsimulator webpage. Available online: http://www.smplsft.com/SimpleIoTSimulator.html Accessed on 25 May 2016
CM Ong, Dynamic simulation of electric machinery: using MATLAB/Simulink, vol. 5 (Prentice Hall PTR, Upper Saddle River, 1998)
G. Barequet, & M. Sharir, Piecewise-linear interpolation between polygonal slices. in Proceedings of the tenth annual symposium on Computational geometry (ACM, 1994), pp. 93-102
C De Boor, A practical guide to splines, vol. 27 (Springer, New York, 1978), p. 325
NK Janjua, FK Hussain, OK Hussain, Semantic information and knowledge integration through argumentative reasoning to support intelligent decision making. Inf. Syst. Front. 15(2), 167–192 (2013)
D Johnson, C Perkins, J Arkko, Mobility support in IPv6 (No. RFC 3775), 2004
Dmitry Kachan. Integration of NS-3 with MATLAB/Simulink. Available online: http://epubl.ltu.se/1653-0187/2010/062/LTU-PB-EX-10062-SE.pdf. Accesed 25 May 2016
Y Feng, K Liu, Q Qian, F Wang, X Fu, Public-transportation-assisted data delivery scheme in vehicular delay tolerant networks. Int. J. Distrib. Sens. Netw. (2012)
D Martín, J García Guzmán, J Urbano, A Amescua, Modeling software development practices using reusable project patterns: a case study. J. Softw. Evol. Process 26(3), 339–349 (2014)
The research leading to these results has received funding from the Ministry of Economy and Competitiveness through SEMOLA project (TEC2015-68284-R) and from the Autonomous Region of Madrid through MOSI-AGIL-CM project (grant P2013/ICE-3019, co-funded by EU Structural Funds FSE and FEDER).
The contributions described in this work are distributed among the authors in the way that follows: all the authors conceived and designed the solution; BB and RA wrote the paper; DSdeR performed the experiments; and ÁS-P programmed the simulator. All authors read and approved the final manuscript.
Department of Telematics Systems Engineering, Universidad Politécnica de Madrid, Avenida Complutense n° 30, 28040, Madrid, Spain
Borja Bordel Sánchez, Diego Sánchez de Rivera & Alvaro Sánchez-Picot
Department of Topographic Engineering and Cartography, Universidad Politécnica de Madrid, Campus Sur, 28031, Madrid, Spain
Ramón Alcarria
Correspondence to Borja Bordel Sánchez.
Bordel Sánchez, B., Alcarria, R., Sánchez de Rivera, D. et al. Predictive algorithms for mobility and device lifecycle management in Cyber-Physical Systems. J Wireless Com Network 2016, 228 (2016) doi:10.1186/s13638-016-0731-0
Device lifecycle
Predictive models
CPS simulation
Interpolation algorithms
Intelligent Mobility Management for Future Wireless Mobile Networks
Environmental Engineering Research
Korean Society of Environmental Engineers (대한환경공학회)
The Environmental Engineering Research (EER) is published quarterly by the Korean Society of Environmental Engineers (KSEE). The EER covers a broad spectrum of the science and technology of air, soil, and water management while emphasizing scientific and engineering solutions to environmental issues encountered in industrialization and urbanization. Particularly, interdisciplinary topics and multi-regional/global impacts (including eco-system and human health) of environmental pollution as well as scientific and engineering aspects of novel technologies are considered favorably. The scope of the Journal includes the following areas, but is not limited to: 1. Atmospheric Environment & Climate Change: Global and local climate change, greenhouse gas control, and air quality modeling 2. Renewable Energy & Waste Management: Energy recovery from waste, incineration, landfill, and green energy 3. Environmental Biotechnology & Ecology: Nano-biosensor, environmental genomics, bioenergy, and environmental eco-engineering 4. Physical & Chemical Technology: Membrane technology and advanced oxidation 5. Environmental System Engineering: Seawater desalination, ICA (instrument, control, and automation), and water reuse 6. Environmental Health & Toxicology: Micropollutants, hazardous materials, ecotoxicity, and environmental risk assessment
http://submit.eeer.org/
Control of Methyl Tertiary-Butyl Ether via Carbon-Doped Photocatalysts under Visible-Light Irradiation
Lee, Joon-Yeob;Jo, Wan-Kuen 179
https://doi.org/10.4491/eer.2012.17.4.179
The light absorbance of photocatalysts and the reaction kinetics of environmental pollutants at the liquid-solid and gas-solid interfaces differ from each other. Nevertheless, many previous photocatalytic studies have applied the science to aqueous applications without due consideration of the environment. As such, this work reports the surface and morphological characteristics and photocatalytic activities of carbon-embedded (C-$TiO_2$) photocatalysts for the control of gas-phase methyl tertiary-butyl ether (MTBE) under a range of different operational conditions. The C-$TiO_2$ photocatalysts were prepared by oxidizing titanium carbide powders at $350^{\circ}C$. The characteristics of the C-$TiO_2$ photocatalysts, along with pure TiC and the reference pure $TiO_2$, were then determined by X-ray diffraction, scanning electron microscopy, diffuse reflectance ultraviolet-visible-near infrared (UV-VIS-NIR) spectroscopy, and Fourier transform infrared spectroscopy. The C-$TiO_2$ powders showed a clear shift in the absorbance spectrum towards the visible region, which indicated that the C-$TiO_2$ photocatalyst could be activated effectively by visible-light irradiation. The MTBE decomposition efficiency depended on operational parameters, including the air flow rate (AFR), input concentration (IC), and relative humidity (RH). As the AFRs decreased from 1.5 to 0.1 L/min, the average efficiencies for MTBE increased from 11% to 77%. The average decomposition efficiencies for ICs of 0.1, 0.5, 1.0, and 2.0 ppm were 77%, 77%, 54%, and 38%, respectively. In addition, the decomposition efficiencies for RHs of 20%, 45%, 70%, and 95% were 92%, 76%, 50%, and 32%, respectively. These findings indicate that the prepared photocatalysts could be effectively applied to control airborne MTBE if their operational conditions were optimized.
Removal of Perchlorate Using Reverse Osmosis and Nanofiltration Membranes
Han, Jonghun;Kong, Choongsik;Heo, Jiyong;Yoon, Yeomin;Lee, Heebum;Her, Namguk 185
Rejection characteristics of perchlorate ($ClO_4^-$) were examined for commercially available reverse osmosis (RO) and nanofiltration (NF) membranes. A bench-scale dead-end stirred-cell filtration system was employed to determine the toxic ion rejection and the membrane flux. Model water solutions were used to prepare $ClO_4^-$ solutions (approximately, $1,000{\mu}g/L$) in the presence of background salts (NaCl, $Na_2SO_4$, and $CaCl_2$) at various pH values (3.5, 7, and 9.5) and solution ionic strengths (0.001, 0.01, and 0.01 M NaCl) in the presence of natural organic matter (NOM). Rejection by the membranes increased with increasing solution pH owing to increasingly negative membrane charge. In addition, the rejection of the target ion by the membranes increased with increasing solution ionic strength. The rejection of $ClO_4^-$ was consistently higher for the RO membrane than for the NF membrane and $ClO_4^-$ rejection followed the order $CaCl_2$ < NaCl < $Na_2SO_4$ at conditions of constant pH and ionic strength for both the RO and NF membranes. The possible influence of NOM on $ClO_4^-$ rejection by the membranes was also explored.
A Study on the Mass Balance Analysis of Non-Degradable Substances for Bioreactor Landfill
Chun, Seung-Kyu 191
Analysis of hydrological safety, as well as the determination of many substance concentrations, is necessary when bioreactor systems are introduced to landfill operations. Therefore, a hydrological and substance balance model was developed that can be applied to various bioreactor landfill operation systems. For the final evaluation of the model's effectiveness, four different methods of injection (leachate alone, leachate and organic wastewater, leachate and reverse osmosis concentrate, and a combination of all three) were applied to the 1st landfill site of the Sudokwon landfill. As a result, the water content of the hypothetical cases for the four different systematic bioreactors is projected to increase up to 35.5% over the next 10 years, which indicates that there will be no problems in meeting hydrological safety requirements. Also, the final $Cl^-$ concentration after the 10-yr period was projected to be between a minimum of 126 and a maximum of 3,238 mg/L, which would still be a decrease from the original value of 3,278 mg/L. According to the proposed model, whether a substance concentration increases or decreases largely depends on the ratio of the initial quantity of inner-landfill leachate to the rate of injection.
Removal of Phosphorus in Wastewater by Ca-Impregnated Activated Alumina
Kang, Seong Chul;Lee, Byoung Ho 197
Phosphorus removal during the discharge of wastewater is required to achieve a very high level because eutrophication occurs even at a very low phosphorus concentration. However, traditional technologies have limitations in removing phosphorus at very low concentrations, such as levels lower than 0.1 mg/L. Through a series of experiments, a possible technology which can remove phosphate to a very low level in the final effluent of wastewater was suggested. First, Al, Zn, Ca, Fe, and Mg were exposed to phosphate solution by impregnating them on the surface of activated alumina to select the material with the highest affinity to phosphate. Kinetic tests and isotherm tests on phosphate solution were performed on four media: Ca-impregnated activated alumina, activated alumina, Ca-impregnated loess ball, and loess ball. Results showed that Ca-impregnated activated alumina has the highest capacity to adsorb phosphate in water. Scanning electron microscope image analysis showed that activated alumina has a high void volume, which provides a large surface area for phosphate to be adsorbed. Through a continuous column test of the Ca-impregnated activated alumina, it was discovered that about 4,000 bed volumes of wastewater with about 0.2 mg/L of phosphate can be treated down to a concentration lower than 0.14 mg/L.
An Advanced Kinetic Method for HO2·/O2-· Determination by Using Terephthalate in the Aqueous Solution
Kwon, Bum Gun;Kim, Jong-Oh;Kwon, Joong-Keun 205
Hydroperoxyl radical/superoxide anion radical ($HO_2{\cdot}/O_2^-{\cdot}$, $pK_a$=4.8) as an intermediate is of considerable importance in oxidation processes. Hence, a method of detecting $HO_2{\cdot}/O_2^-{\cdot}$ with high sensitivity needs to be developed. To achieve this objective, this study newly employed terephthalate (TA) as a probe for the measurement of $HO_2{\cdot}/O_2^-{\cdot}$ in the kinetic method presented in our previous study. This method was based on the hydroxylation of TA to produce mainly hydroxyterephthalic acid or hydroxyterephthalate (OHTA), which was analyzed by fluorescence detection (${\lambda}_{ex}$=315 nm, ${\lambda}_{em}$=425 nm). The life-time of $HO_2{\cdot}/O_2^-{\cdot}$ and its concentration formed from the photolysis of $H_2O_2$ are reported in this study. Over the pH range of 2-10, the life-time of $HO_2{\cdot}/O_2^-{\cdot}$ was 51-422 sec. In particular, an increase in the life-time with pH was observed. The sensitivities of the kinetic method using TA were 1.7-2.5 times higher at pH 8.0 than those obtained using benzoic acid. From these results, this study can contribute to understanding the basic functions of $HO_2{\cdot}/O_2^-{\cdot}$ in oxidation processes.
Assessment of Airborne Microorganisms in a Swine Wastewater Treatment Plant
Kim, Ki-Youn;Ko, Han-Jong;Kim, Daekeun 211
Quantification of the airborne microorganisms (bacteria and fungi) at a swine wastewater treatment plant was performed. Microbial samples were collected at three different phases of the treatment process over a 1-yr period. Cultivation methods based on the viable counts of mesophilic heterotrophic bacteria and fungi were performed. The concentrations of airborne bacteria ranged up to about $5{\times}10^3$ colony-forming unit (CFU)/$m^3$, and those of airborne fungi ranged up to about $9{\times}10^2CFU/m^3$. The primary treatment (e.g., screen, grit removal, and primary sedimentation) was found to be the major source of airborne microorganisms at the site studied, and higher levels of airborne bacteria and fungi were observed in summer. High levels of the respirable bioaerosol (0.65 to $4.7{\mu}m$ in size) were detected in the aeration phase. Among the environmental factors studied, temperature was strongly associated with fungal aerosol generation (with a Spearman correlation coefficient of 0.90 and p-value <0.01). Occupational biorisks are discussed based on the observed field data.
Treatability Evaluation of N-Hexadecane and 1-Methylnaphthalene during Fenton Reaction
Chae, Myung-Soo;Woo, Sung-Geun;Yang, Jae-Kyu;Bae, Sei-Dal;Choi, Sang-Il 217
In this study, the treatability of two target contaminants during the Fenton reaction, n-hexadecane and 1-methylnaphthalene, was evaluated as a function of the amounts of $FeCl_2$ and $H_2O_2$ injected into open and closed reaction systems. In the Fenton reaction of n-hexadecane and 1-methylnaphthalene, the mass recovery of the target contaminants was above 95% in the closed system. However, when the Fenton reaction was performed with high amounts of $H_2O_2$ and $FeCl_2$ injected in the open system, a reduction of approximately 40% of the initial mass of 1-methylnaphthalene was observed. This trend may be explained by the unique physical properties of 1-methylnaphthalene, which has a higher volatility than n-hexadecane. Further, this trend was well correlated with the temperature rise at the initial reaction stage. Considering the mass recovery of the two target contaminants, the reaction temperature, and the residual concentration of $H_2O_2$ at different injected amounts of $FeCl_2$ and $H_2O_2$, it can be suggested that the Fenton reaction should be performed under controlled conditions that provide a suitable reaction environment between oxidant and contaminants.
18th International Conference on Bioinformatics
DDI-PULearn: a positive-unlabeled learning method for large-scale prediction of drug-drug interactions
Yi Zheng1,
Hui Peng1,
Xiaocai Zhang1,
Zhixun Zhao1,
Xiaoying Gao2 &
Jinyan Li1 ORCID: orcid.org/0000-0003-1833-7413
Drug-drug interactions (DDIs) are a major concern in patients' medication. It is infeasible to identify all potential DDIs using experimental methods, which are time-consuming and expensive. Computational methods provide an effective alternative; however, they face challenges due to the lack of experimentally verified negative samples.
To address this problem, we propose a novel positive-unlabeled learning method named DDI-PULearn for large-scale drug-drug-interaction prediction. DDI-PULearn first generates seeds of reliable negatives via OCSVM (one-class support vector machine) under a high-recall constraint and via cosine-similarity-based KNN (k-nearest neighbors) as well. Then, trained with all the labeled positives (i.e., the validated DDIs) and the generated seed negatives, DDI-PULearn employs an iterative SVM to identify the entire set of reliable negatives from the unlabeled samples (i.e., the unobserved DDIs). Following that, DDI-PULearn represents all the labeled positives and the identified negatives as vectors of abundant drug properties by a similarity-based method. Finally, DDI-PULearn transforms these vectors into a lower-dimensional space via PCA (principal component analysis) and utilizes the compressed vectors as input for binary classification. The performance of DDI-PULearn is evaluated on simulative prediction for the 149,878 possible interactions between 548 drugs, compared with two baseline methods and five state-of-the-art methods. Related experimental results show that the proposed representation of DDIs characterizes them accurately. DDI-PULearn achieves superior performance owing to the identified reliable negatives, outperforming all other methods significantly. In addition, the predicted novel DDIs suggest that DDI-PULearn is capable of identifying novel DDIs.
The results demonstrate that positive-unlabeled learning paves a new way to tackle the problem caused by the lack of experimentally verified negatives in the computational prediction of DDIs.
Drug-drug interactions refer to the efficacy change of one drug caused by the co-administration of another drug. DDIs may occur when two or more drugs are taken together or concomitantly. DDIs account for around one-third of all adverse drug reactions [1–3], leading to significant morbidity and mortality worldwide [4]. Currently, only a few DDIs have been identified via wet-lab experiments, and a large number of DDIs remain unknown [5]. Thus, there is an urgent need to detect potential DDIs to reduce patients' risks and economic costs.
Conducting experimental trials to detect potential interactions between a great number of drug pairs is unrealistic due to the huge time and monetary cost. Recently, several computational methods have been successfully applied to detect DDIs. Here, we categorize these methods roughly into three categories: similarity-based methods, knowledge-based methods, and classification-based methods.
The similarity-based methods assume that drugs with similar properties tend to interact with the same drug [6]. Based on this assumption, different drug similarity measures have been designed employing various drug properties. Vilar et al. measured drug similarity as the Tanimoto coefficient between molecular fingerprints [6] and between interaction profile fingerprints of drug pairs [4]. Gottlieb et al. [7] built their DDI predictive model by integrating seven drug similarity measures, namely chemical structure similarity, ligand similarity, side-effect similarity, annotation similarity, sequence similarity, closeness similarity in the protein-protein network, and Gene Ontology similarity. Using drug-drug similarity indirectly, Zhang et al. [8] designed a label propagation framework to predict DDIs based on drug chemical structures, labeled side-effects, and off-labeled side-effects. Similarity-based methods have achieved remarkable prediction performance; however, interactions for drugs lacking similarity information cannot be predicted. In addition, the assumption of similarity-based methods has one limitation: dissimilar drugs may also interact with the same drug.
The knowledge-based methods detect DDIs from scientific literature [9], electronic medical records [10], and the Food and Drug Administration Adverse Event Reporting System (FAERS) [11, 12]. He et al. [9] presented a stacked-generalization-based approach for automatic DDI extraction from biomedical literature. Tatonetti et al. [11] identified drug interactions and effects from FAERS using statistical methods. They found that the interaction between paroxetine and pravastatin increased blood glucose levels. Knowledge-based methods rely on the accumulation of post-marketing clinical evidence. Consequently, they are incapable of detecting all DDIs and cannot warn the public of potentially dangerous DDIs before drugs reach the market.
Classification-based methods formulate DDI prediction as a binary classification task. Cami et al. [13] represented drug-drug pairs as feature vectors using three types of covariates from their constructed pharmacointeraction network. They then defined the presence or absence of interactions as labels and finally built logistic regression models for predictions. Cheng et al. [5] encoded each drug pair as a 4-dimensional vector of four different similarities, and employed five classical prediction algorithms for predictions. Compared with similarity-based methods and knowledge-based methods, classification-based methods have neither the assumption limitation nor the dependence on evidence accumulation. Nevertheless, two classes of data are required for classification methods: positive samples and negative samples. Existing classification-based methods used drug pairs known to interact as positive samples, and other unlabeled drug pairs as negative samples [5, 13]. These unlabeled drug pairs may include a considerable number of real positive samples, which can degrade the prediction performance.
From the above survey, it is understood that similarity-based methods and knowledge-based methods are limited in their application ranges, while classification-based methods lack reliable negative samples. In this work, we explore an advanced learning technique named positive-unlabeled learning (PU learning) to solve the problem of lacking negative samples for the classification-based methods.
PU learning and our new ideas
PU learning learns from positive samples and unlabeled samples. It has been successfully applied in several bioinformatics research fields, such as disease-gene association identification [14, 15], drug target detection [16], and glycosylation site prediction [17], and has achieved remarkable performance. However, this advanced learning technique has not been explored enough in the prediction of drug interactions.
Conventional PU learning algorithms usually consist of two steps: the first step is to identify reliable negative samples from the unlabeled samples; the second step is to construct classifiers based on the positive samples and the identified reliable negative samples for subsequent predictions. The difference among PU learning algorithms lies in the strategies used in the first or second step. For the first step, the spy strategy [18], 1-DNF [19], Rocchio [20], and Naive Bayesian (NB) [21] are widely used. The spy strategy selects a certain number of positive samples randomly as spies and puts them into the unlabeled samples first; it then determines the threshold of reliable negative samples (RNSs) under the condition that most spies are truly predicted as positives. The 1-DNF strategy extracts the features of positive samples and then selects as RNSs those samples which don't have the positive features. Rocchio and NB first label the validated positive samples as +1 and the unlabeled samples as -1 to train a Rocchio or NB classifier, respectively. The trained classifier is then employed to classify the unlabeled samples; those classified as negatives are taken as RNSs. For the second step, Expectation Maximization (EM) and Support Vector Machine (SVM) are commonly used. Most conventional PU learning algorithms are designed for text classification, thus there are barriers to applying them directly to DDI prediction.
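To make the two-step recipe concrete, below is a minimal Python sketch of the spy strategy with scikit-learn. The 15% spy ratio, the 5th-percentile threshold, and the Gaussian Naive Bayes model are illustrative assumptions, not choices made in this paper (DDI-PULearn instead uses OCSVM and KNN, as described next).

```python
# Minimal sketch of the spy strategy for step one of PU learning.
# P and U are numpy feature matrices of positives and unlabeled samples.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def spy_negatives(P, U, spy_ratio=0.15, quantile=5, seed=0):
    """Return indices of U inferred as reliable negatives."""
    rng = np.random.default_rng(seed)
    spy_idx = rng.choice(len(P), int(len(P) * spy_ratio), replace=False)
    spies, P_rest = P[spy_idx], np.delete(P, spy_idx, axis=0)

    # Train on remaining positives (+1) vs. unlabeled samples plus spies (-1).
    X = np.vstack([P_rest, U, spies])
    y = np.hstack([np.ones(len(P_rest)), -np.ones(len(U) + len(spies))])
    clf = GaussianNB().fit(X, y)
    pos_col = int(np.where(clf.classes_ == 1)[0][0])

    # Threshold chosen so that most spies score above it, i.e.,
    # most spies are recovered as positives.
    t = np.percentile(clf.predict_proba(spies)[:, pos_col], quantile)
    u_scores = clf.predict_proba(U)[:, pos_col]
    return np.where(u_scores < t)[0]
```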
Apart from the above methods, clustering provides another solution to identify likely negatives from the unlabeled data. For example, Hameed et al. [22] successfully improved the clustering approach Self Organizing Map (SOM) for drug interaction prediction. However, they obtained only 589 inferred negatives after clustering, which is far fewer than the 6,036 validated positives (i.e., validated DDIs), let alone all potential negatives (\(C_{548}^{2} - 6,036 = 143,842\)) of their 548 drugs. Performing cross-validation directly on so few negatives is not enough to demonstrate the generalization of their method. Inspired by the clustering process of k-means, a typical clustering method, we see a possibility to infer reliable negative samples via the ranking of KNN. If we treat "positives" and "negatives" as two clusters, k-means clusters samples into "positives" if they are close to positives, while samples far from the positives will be clustered as negatives. Therefore, we can use KNN to measure the distances between unlabeled samples and labeled positives, and take unlabeled samples far from the positives as inferred negatives.
One-class Support Vector Machine (OCSVM) [23] has been widely used for classification in the absence of positive or negative samples [24]. It learns a hypersphere that describes the training data such that most of the training data are enclosed in the hypersphere. OCSVM requires data from one class only; thus, it is an ideal technique for identifying reliable negatives in the PU learning context.
In this work, we design a novel two-step PU learning approach for drug-drug interaction prediction (DDI-PULearn hereafter). In the first step, DDI-PULearn infers highly reliable negative sample (RNS) seeds using two techniques, OCSVM and KNN. To be specific, DDI-PULearn learns an OCSVM hypersphere from all labeled positive samples (i.e., validated DDIs) with a high recall (>0.95). Then DDI-PULearn predicts labels for all unlabeled samples and adds the predicted negatives to the RNS seeds. Meanwhile, DDI-PULearn infers several reliable negative samples using the KNN strategy and adds them to the RNS seeds. In the second step, DDI-PULearn identifies all reliable negatives from the remaining unlabeled samples using an SVM trained iteratively with the RNS seeds and the labeled positives. The labeled positives and identified RNSs are finally used for prediction and validation. The performance of DDI-PULearn is evaluated on simulated DDI prediction for 548 drugs. Comparison experiments with two baseline methods and five state-of-the-art methods both demonstrate the superior performance of DDI-PULearn.
We first report the number of components for PCA. Then we present the prediction performances under different representations of DDIs using multi-source drug property data. Following that, we show the performance improvement brought by the reliable negative samples generated by DDI-PULearn, by comparing with randomly selected negative samples and all potential negative samples. We also demonstrate the superior prediction performance of DDI-PULearn by comparing it with five state-of-the-art methods. Finally, we apply DDI-PULearn to predict unobserved DDIs and verify the results in DrugBank.
Components for PCA
To obtain the best setting for the PCA component number (PCN), we tried the following settings: PCN∈{1, 5, 10, 20, 30, 40, 50, 65, 80, 95, 110, 125, 140, 150, 160, 175, 200, 225, 250, 275, 300, 350, 400, 450, 500, 550, 600, 750, 800, 1000, 1250, 1750, 2000}. The F1-scores of DDI-PULearn with different PCNs are illustrated in Fig. 1. It can be observed that the F1-score increases with PCN for PCN≤50, and the F1-score values plateau when the PCN is larger than 50. The same conclusion can be drawn from the AUC results, shown in Figure S1 in Additional file 1. Based on the above observations, and considering the computational memory and time cost (both increase with PCN), we set PCN to 50 for DDI-PULearn in our experiments.
F1-scores of DDI-PULearn with different PCNs. The x-axis is the PCA component number and the y-axis is the F1-score. Panel (a) shows the F1-scores for PCN between 1 and 2000, and Panel (b) is an amplification of the range [20,150] (amplification ratio = 5)
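A minimal sketch of how such a PCN scan can be run with scikit-learn is shown below; the abbreviated candidate list, the train/test split, the Random Forest classifier, and the variable names are illustrative assumptions.

```python
# Sketch of the PCN scan: fit PCA with each candidate component number
# and record the resulting F1-score on held-out data.
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def scan_pcn(X_train, y_train, X_test, y_test,
             candidates=(10, 30, 50, 80, 150)):
    scores = {}
    for pcn in candidates:
        pca = PCA(n_components=pcn).fit(X_train)
        clf = RandomForestClassifier(random_state=0)
        clf.fit(pca.transform(X_train), y_train)
        pred = clf.predict(pca.transform(X_test))
        scores[pcn] = f1_score(y_test, pred)
    return scores  # pick the smallest PCN on the plateau, e.g., 50
```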
Representation of DDIs using multi-source drug property data
As mentioned in the "Feature vector representation for DDIs" subsection, we performed a feature ranking analysis to decide which drug properties to use for DDI representation. Here, we conduct more experiments to confirm the analysis results. Specifically, we use the drug chemical substructures, drug targets, and drug indications as basic drug properties (BDPs) for representation. Then we test the following 8 combinations of drug features for prediction: (1) BDPs; (2) BDPs + substituents; (3) BDPs + targets; (4) BDPs + pathways; (5) BDPs + substituents + targets; (6) BDPs + substituents + pathways; (7) BDPs + targets + pathways; (8) BDPs + substituents + targets + pathways. Apart from the feature vector representation, other details of the eight combinations are the same as in DDI-PULearn. Fig. 2 shows the bar charts of the prediction results. It can be observed that all performance evaluation indices (i.e., precision/recall/F1-score) vary only slightly among the above 8 combinations. Employing more drug features for prediction brings redundant information which does not improve the prediction performance. This indicates that drug properties including drug substituents, drug targets, and drug pathways play a minor role in DDI prediction, while the basic drug properties decide the prediction performance. The results further confirm the conclusion drawn in the previous feature ranking analysis. The detailed evaluation index values of the predictions are listed in Table S1 in Additional file 1.
Prediction results using different combinations of drug features. BDPs refer to the basic drug properties namely drug chemical substructures, drug targets, and drug indications
Performance improvement brought by identified reliable negative samples
Existing classification-based models either use all potential negative samples (all-negatives hereafter) or random negative samples (random-negatives hereafter) for prediction [5, 13]. All-negatives refer to all potential non-DDIs (i.e., unobserved DDIs) which are not among the positive samples. Random-negatives are generated by selecting a random number of negatives from all-negatives. To demonstrate the prediction performance improvement brought by the reliable negative samples identified by DDI-PULearn, we compare DDI-PULearn with the above two baseline methods. Specifically, we obtain 101,294 (\(C_{548}^{2}-48,584\)) negatives for all-negatives. And we randomly select the same number of negatives (i.e., 45,026) as DDI-PULearn for random-negatives. Besides the negative samples, other details of prediction using all-negatives and random-negatives are the same as for DDI-PULearn. To avoid bias, random-negatives are repeated 5 times and the average results are used for the final evaluation. Related prediction results are shown in Table 1. It can be clearly seen that the prediction performance is significantly improved owing to the identified reliable negative samples. For example, the F1-score improvements over random-negatives and all-negatives are 0.147 (20.47%) and 0.315 (57.27%), respectively. This suggests that a better decision boundary has been learned with the identified reliable negative samples.
Table 1 Prediction performance comparison with the two baseline methods, namely all-negatives and random-negatives
Comparison with existing state-of-the-art methods
To further confirm the superior performance of DDI-PULearn, we compare it with several state-of-the-art methods reported in a recent study [25] using the same dataset. As in [25], we evaluated DDI-PULearn by 20 runs of 3-fold cross-validation and 5-fold cross-validation under the same conditions. The macro-averaged results of the 20 runs are used for the final evaluation. The comparison results are listed in Table 2. Vilar's substructure-based method [6] and Vilar's interaction-fingerprint-based method [4] are two similarity-based methods proposed by Vilar et al.; Zhang's weighted average ensemble method, Zhang's L1 classifier ensemble method, and Zhang's L2 classifier ensemble method are three ensemble methods by Zhang et al. [25] which integrate neighbor recommendation, random walk, and matrix perturbation. As shown in Table 2, DDI-PULearn achieves better performance than the other state-of-the-art methods on all metrics. For example, using 5-fold cross-validation, DDI-PULearn outperforms the other five methods in F1-score by 0.633 (276.6%), 0.415 (92.9%), 0.150 (21.1%), 0.139 (19.3%), and 0.143 (19.9%), respectively.
We also compared the proposed method with Hameed's PU learning method [22]. Both works study the same 548 benchmark drugs. We inferred 45,026 reliable negatives, which cover all 548 researched drugs. By contrast, Hameed inferred 589 negatives covering only 256 of the researched drugs. To compare fairly with Hameed's method, we extracted the top 589 negatives in terms of inference scores from our inferred negatives and used the same strategy as Hameed to extract 589 random positives (hereinafter referred to as DDI-PULearn-Top).
Table 2 Performances of DDI-PULearn and the benchmark methods evaluated by 20 runs of 3-fold cross-validation and 5-fold cross-validation
We also constructed 10 training sets using the 589 top inferred negatives and 589 randomly selected known DDIs. The average performances of the 10 balanced training samples from 5-fold cross-validation are shown in Table 3. Note that SFR1 and SFR2 are two feature representation methods used by Hameed et al. [22]. It can be observed that DDI-PULearn-Top achieves comparable performance with Hameed's GSOM-based PU learning methods. Specifically, DDI-PULearn-Top achieves better recall and F1-score than Hameed's method using SFR1, and is slightly inferior to Hameed's method using SFR2. Compared with Hameed's PU learning methods, DDI-PULearn has the following advantages: (1) DDI-PULearn infers many more negatives (45,026 vs 589), which is closer to the practical prediction task, i.e., large-scale drug interaction prediction. Hameed's inferred negatives cover only part of the researched drugs (256 of the 548), thus only interactions between the covered drugs are predicted and evaluated. By contrast, our inferred negatives cover all researched drugs, so the possible interactions between all researched drugs are predicted and evaluated. (2) The key goal of DDI-PULearn and Hameed's method is to infer reliable negatives for classification. The 1,178 evaluation samples (589 positives + 589 negatives) constructed by Hameed are very few relative to the whole sample space (\(C_{548}^{2}=149,878\)). Consequently, classifiers may not be able to learn enough knowledge to distinguish positives from negatives for the non-evaluation samples (148,700 = 149,878 - 1,178), even though they perform well on the evaluation samples.
Table 3 Performance assessment of DDI-PULearn-Top and Hameed's approaches using 10 training set and 5-fold cross-validation
The above comparison results with existing state-of-the-art methods and another PU Learning method both demonstrate the superior performances and advantages of the proposed positive-unlabeled learning method DDI-PULearn.
Novel DDIs predicted by DDI-PULearn
We employ DDI-PULearn to predict labels for the 101,294 unobserved DDIs, which are not available in the benchmark dataset. In the prediction, a larger prediction score for a drug pair suggests a higher interaction probability. We can obtain a recommendation list of novel DDIs by ranking them in descending order of their prediction scores. Like other data mining results, it is unrealistic to expect all highly ranked DDIs to be of value to domain experts. Therefore, we shortlist the top 25 novel interactions predicted by DDI-PULearn in Table 4. We further verify them in the DrugBank database, which stores the latest DDI information. We highlight the confirmed DDIs in bold font. From Table 4, we can see that a significant ratio of the predicted interactions is confirmed in DrugBank (11 out of 25). This indicates that DDI-PULearn does have the capability to predict novel drug-drug interactions.
Table 4 Top 25 novel DDIs predicted by the proposed method DDI-PULearn
Most existing methods are based on the closed-world assumption, taking validated interacting drug pairs as positives and unlabeled drug pairs as negatives to perform the prediction directly [4–7, 13]. However, drugs from the unlabeled drug pairs still have a considerable probability of interacting. This means that the assumed negatives may include a considerable number of real positives which are yet unknown. As a result, classifiers trained with unlabeled drug pairs as negatives cannot learn a good boundary to classify true positives and true negatives.
Instead of taking unlabeled drug pairs as negatives directly, we develop a PU-learning method to generate reliable negatives by learning from the positive and unlabeled samples. The comparison experiments with two baseline methods, five state-of-the-art methods, and another PU-learning method demonstrate that DDI-PULearn achieves superior performance. Investigation of the top-predicted novel DDIs also shows the competence of DDI-PULearn in predicting novel DDIs. The superior performance of DDI-PULearn can be attributed to the following aspects: (1) In the first step of generating reliable negative seeds, it takes advantage of the converse of the proposition underlying the similarity-based methods (which achieved remarkable performance), i.e., dissimilar drugs are less likely to interact. It also utilizes the advanced one-class learning technique OCSVM. The combination of the above two techniques ensures that the most reliable negative seeds are generated. (2) In the second step, an SVM trained with validated positives and the generated negative seeds is employed to predict the remaining unlabeled drug pairs. Then, the newly predicted negatives are added to the negative set to train the SVM for the next round of prediction. The process is repeated iteratively until no new negatives are obtained. The initial training with reliable negative seeds ensures that the classification boundary is learned properly, and the iterative process extracts all possible negatives. Through the above learning from the validated positive samples and unlabeled samples, a better classification boundary is learned.
In this work, we propose a novel positive-unlabeled learning method named DDI-PULearn for large-scale drug-drug interaction prediction. DDI-PULearn first generates seeds of reliable negative samples from the unlabeled samples using two techniques, namely OCSVM and KNN. Then, trained with the generated seeds, DDI-PULearn employs an SVM to identify all reliable negative samples iteratively. Following that, DDI-PULearn represents the labeled positive samples and the identified negative samples as vectors by a similarity-based representation method using abundant drug properties. Finally, the vectors are compressed via PCA and further used as input for binary classification. The innovation of this work lies in the design of the novel PU-learning method and in the method for DDI representation. In the experimental part, we discussed the determination of the number of PCA components and of the drug properties used for DDI representation. We demonstrate the superior performance of DDI-PULearn by comparing it with two baseline methods and five state-of-the-art methods. All experimental results show that the DDI prediction performance is significantly improved owing to DDI-PULearn. Besides, the results for the prediction of novel DDIs suggest that DDI-PULearn is capable of identifying novel DDIs.
DDI-PULearn is useful in various areas and able to guide drug development at different stages. For instance, at the early stage of drug candidate selection, DDI-PULearn can help to decide whether the drug molecules should be dropped or kept for further study. In addition, warnings about potential interactions which may cause serious side-effects can be given to the public in time.
Drug properties
We extract drug properties from different data sources. Drug chemical substructures and drug substituents are extracted from DrugBank [26], a comprehensive drug database. Drug targets are obtained by fusing drug-target associations from both DrugBank and DrugCentral [27]. The drug-side-effect associations are downloaded from SIDER [28], a large labeled side-effect database. The drug-indication associations, drug-pathway associations, and drug-gene associations are retrieved from the CTD (comparative toxicogenomics database) [29].
We use a recent benchmark dataset [25] collected from TWOSIDES [30], a database which contains DDIs mined from FAERS. It contains 548 drugs and 48,584 pairwise drug-drug interactions. The specific drug list and all verified DDIs are available in Additional file 2.
Proposed methods
The framework of the proposed method is illustrated in Fig. 3. It consists of five components listed as follows: reliable negative sample identification, feature vector representation for DDIs, PCA compression, DDI prediction, and performance evaluation. First, reliable negative samples are generated using DDI-PULearn. Then both the labeled positive samples and the reliable negative samples are represented as vectors according to the drug properties, such as chemical substructures, associated side-effects, and indications. Next, the sample vectors are compressed into a lower-dimension space using PCA. Following that, the compressed vectors together with their labels are used as input for DDI prediction. Finally, the prediction performance is evaluated according to the confusion matrix.
The framework of the proposed method. It consists of the following five components: reliable negative sample identification, feature vector representation for DDIs, PCA compression, DDI prediction, and performance evaluation. RN: reliable negative samples; PCA: principal component analysis; DDI: drug-drug interaction
Reliable negative sample identification
We propose a novel two-step strategy to generate reliable negative samples. In the first step, we generate RNS seeds from the unlabeled samples using OCSVM and KNN. Then we employ an SVM trained with the labeled positive samples and the RNS seeds to generate reliable negative samples iteratively. The labeled positive samples are validated DDIs, and the unlabeled samples are the unobserved DDIs between every two drugs that are not among the labeled positive samples. Fig. 4 details the flow for the identification of reliable negative samples.
The flow chart for the identification of reliable negative samples. OCSVM: one-class support vector machine; KNN: k-nearest neighbor; RNS: reliable negative samples; RU: remaining unlabeled
A. RNS seed generation
In the first step, we employ two techniques, namely OCSVM and KNN, to generate the RNS seeds. For OCSVM, we feed it all labeled positive samples and optimize its parameters via 5-fold cross-validation. To ensure that the majority of true DDIs are correctly predicted, a high recall (>0.95) is required for OCSVM. With the optimized parameter settings (nu = 0.05, gamma = 0.001), OCSVM achieves a recall of 0.951 and generates 1,602 RNS seeds from the 101,294 (\(C_{548}^{2}\)-48,584) unlabeled samples.
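A sketch of this step with scikit-learn's OneClassSVM, using the nu and gamma values reported above, might look as follows; the variable names P and U (feature matrices of the positives and the unlabeled samples) are assumptions.

```python
# Sketch of OCSVM-based RNS seed generation with the reported parameters.
from sklearn.svm import OneClassSVM

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.001).fit(P)

# Recall on the training positives should stay above 0.95.
recall = (ocsvm.predict(P) == 1).mean()

# Unlabeled samples predicted as outliers (-1) become RNS seeds.
ocsvm_seeds = U[ocsvm.predict(U) == -1]
```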
As described in the next subsection, each DDI is represented as a 3,111-dimensional vector. We use the cosine function as the similarity measure for KNN:
$$ {\begin{aligned} sim({ddi}_{i}, {ddi}_{j}) &= cosine(vector({ddi}_{i}), vector({ddi}_{j}))\\&=\frac{\sum_{l=1}^{3,111}{[{vector}_{l}({ddi}_{i})*{vector}_{l}({ddi}_{j})]}}{\sqrt{\sum_{l=1}^{3,111}{{vector}_{l}({ddi}_{i})^{2}}}*\sqrt{\sum_{l=1}^{3,111}{{vector}_{l}({ddi}_{j})^{2}}}} \end{aligned}} $$
where vector(ddii) and vector(ddij) are the vectors of the DDIs/samples ddii and ddij, respectively. The specific process to generate RNS seeds using KNN is described in Algorithm 1. After optimization, we set k to 5 and the threshold to 4.026. Using the KNN strategy, we obtain 5,000 RNS seeds. Merging the RNS seeds generated by OCSVM and KNN, we finally obtain 6,602 RNS seeds (see Table S6 in Additional file 2).
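Algorithm 1 is not reproduced here, but a plausible sketch of the KNN seed selection it describes is given below. Reading the threshold 4.026 as a cutoff on the summed cosine similarity of the k = 5 nearest positives is our interpretation, not a detail stated explicitly in the text.

```python
# Sketch of KNN-based RNS seed generation: unlabeled samples whose
# summed similarity to their k nearest labeled positives falls below
# the threshold are taken as seeds.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def knn_seeds(P, U, k=5, threshold=4.026):
    sim = cosine_similarity(U, P)            # shape: |U| x |P|
    topk = np.sort(sim, axis=1)[:, -k:]      # k largest similarities per row
    return np.where(topk.sum(axis=1) < threshold)[0]
```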
B. Iterative SVM for RNS identification
In the second step, we run an SVM trained with the labeled positive samples and RNS seeds iteratively to identify all reliable negatives from the remaining unlabeled data. The pseudo-code is shown in Algorithm 2. We aim to identify all reliable negative samples from the unlabeled data, thus we use the last SVM classifier at convergence as the best classifier instead of selecting a good classifier from the classifiers built during the iterations. Through the iteration, we finally obtained 45,026 reliable negative samples.
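A compact sketch of the iterative loop of Algorithm 2 is shown below; the RBF kernel and the input variable names are assumptions.

```python
# Sketch of iterative SVM for RNS identification: train on positives and
# the current negative set, move newly predicted negatives out of the
# remaining unlabeled pool, and repeat until no new negatives appear.
import numpy as np
from sklearn.svm import SVC

def iterative_rns(P, seeds, U_rest):
    RNS = seeds
    while len(U_rest) > 0:
        X = np.vstack([P, RNS])
        y = np.hstack([np.ones(len(P)), -np.ones(len(RNS))])
        clf = SVC(kernel="rbf").fit(X, y)
        pred = clf.predict(U_rest)
        new_neg = U_rest[pred == -1]
        if len(new_neg) == 0:        # convergence: no new negatives found
            break
        RNS = np.vstack([RNS, new_neg])
        U_rest = U_rest[pred != -1]  # keep re-checking the rest
    return RNS
```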
Feature vector representation for DDIs
We collected a variety of drug properties which may help to improve the prediction, namely drug chemical substructures, drug substituents, drug targets, drug side-effects, drug indications, drug-associated pathways, and drug-associated genes. We investigated which drug properties to use for drug representation by feature importance ranking using Random Forest. The implementation details and experiment results are described in Additional file 1. The feature ranking analysis shows that drug properties including drug chemical substructures, drug targets, and drug indications play a leading role in DDI prediction; thus, we decided to employ them for drug representation. Specifically, we represent each drug as a 3,111-dimensional feature vector using 881 drug chemical substructures, 1,620 side-effects, and 610 indications. The drug chemical substructures correspond to the 881 substructures defined in the PubChem database [31]. The side-effects and indications are the 1,620 unique side-effects in SIDER [28] and the 610 unique indications in DrugBank [26], respectively. Each bit of the feature vector denotes the absence/presence of the corresponding substructure/side-effect/indication by 0/1. Further, we propose a similarity-based representation for DDIs based on the following formula:
$$ {vector}_{k}({drug}_{i}, {drug}_{j}) = \frac{{feature}_{k}({drug}_{i}) + {feature}_{k}({drug}_{j})}{2} $$
where featurek(drugi) and featurek(drugj) are the k-th bit of the feature vectors of drugs drugi and drugj, respectively, and vectork is the k-th bit of the vector for the DDI drugi-drugj.
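In code, this representation reduces to an element-wise average of two binary drug vectors, as the minimal sketch below shows; the drug_vecs mapping from drug IDs to numpy arrays is an assumed data structure.

```python
# Sketch of the similarity-based DDI representation: each drug pair is
# encoded as the element-wise average of the two drugs' 3,111-dimensional
# binary feature vectors.
def ddi_vector(drug_i, drug_j, drug_vecs):
    return (drug_vecs[drug_i] + drug_vecs[drug_j]) / 2.0
```

Note that the average is symmetric in the two drugs, so the DDI vector does not depend on the order of the pair.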
PCA compression
There are 149,878 \(\left (C_{548}^{2}\right)\) possible DDIs between the 548 drugs used for the experiments. Thus the size of the classification input could be on the order of a billion values (149,878 ∗ 3,111). Such high dimensionality inevitably incurs a huge computational cost. To speed up the prediction process, we employ PCA to map the raw vectors of DDIs into a lower-dimensional space. Specifically, all training DDI vectors are used to fit the PCA first. Then the fitted PCA is used to transform both the training and testing DDI vectors into lower-dimensional vectors. Finally, the compressed vectors are used as input to train and validate the binary classifier.
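A minimal sketch of this fit-on-train, transform-both protocol with scikit-learn is given below; X_train and X_test are the assumed raw DDI vectors.

```python
# Sketch of the compression step: PCA is fitted on the training DDI
# vectors only, then applied to both training and test vectors, so no
# information leaks from the test set into the projection.
from sklearn.decomposition import PCA

pca = PCA(n_components=50).fit(X_train)   # PCN = 50, as chosen above
X_train_c = pca.transform(X_train)
X_test_c = pca.transform(X_test)
```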
DDI prediction
We formalize the DDI prediction task as a binary classification problem of predicting whether a DDI is true or not. The inputs for the binary classifiers are the compressed vectors of DDIs and their labels. Specifically, we label the positive samples (i.e., validated DDIs) as +1 and the generated reliable negative samples as -1. Finally, we train and test a binary classifier with the above vectors and labels. We employ Random Forest as the binary classifier in this work.
5-fold CV (cross-validation) is performed to evaluate the prediction performance: (i) DDIs in the gold standard set are split into 5 equal-sized subsets; (ii) each subset is used as the test set in turn, and the remaining 4 subsets are taken as the training set to train the predictive models; (iii) the final performance is evaluated on all results over the 5 folds. To avoid the bias of data splitting, 5 independent runs of 5-fold CV are performed and the average results are used for the final evaluation. Precision, recall, F1-score, and AUC (area under the receiver operating characteristic curve) are used as evaluation metrics.
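A sketch of one such evaluation run with scikit-learn is given below; the stratified splitting and the default Random Forest settings are illustrative assumptions, and X and y are the assumed compressed DDI vectors and their +1/-1 labels.

```python
# Sketch of the evaluation protocol: 5-fold CV with a Random Forest
# classifier, scored by precision, recall, F1, and AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(X, y, seed=0):
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    metrics = []
    for train_idx, test_idx in skf.split(X, y):
        clf = RandomForestClassifier(random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        score = clf.predict_proba(X[test_idx])[:, 1]  # P(label = +1)
        metrics.append((precision_score(y[test_idx], pred),
                        recall_score(y[test_idx], pred),
                        f1_score(y[test_idx], pred),
                        roc_auc_score(y[test_idx], score)))
    return np.mean(metrics, axis=0)  # averaged over the 5 folds
```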
All data used in this study are available in the Additional files.
AUC: Area under the receiver operating characteristic curve
BDPs: Basic drug properties
CTD: Comparative toxicogenomics database
DDI-PULearn: The proposed PU learning method
EM: Expectation maximization
FAERS: Food and drug administration adverse event reporting system
KNN: k-nearest neighbors
NB: Naive Bayesian
OCSVM: One-class support vector machine
PCN: PCA component number
PU learning: Positive and unlabeled learning
RNSs: Reliable negative samples
SOM: Self organizing map
Strandell J, Bate A, Lindquist M, Edwards IR, the Swedish, Finnish, Interaction X-referencing Drug-drug Interaction Database (SFINX) study group. Drug–drug interactions – a preventable patient safety issue? Br J Clin Pharmacol. 2008; 65(1):144–6.
Huang S-M, Temple R, Throckmorton D, Lesko L. Drug interaction studies: study design, data analysis, and implications for dosing and labeling. Clin Pharmacol Ther. 2007; 81(2):298–304.
Zheng Y, Peng H, Zhang X, Zhao Z, Yin J, Li J. Predicting adverse drug reactions of combined medication from heterogeneous pharmacologic databases. BMC Bioinformatics. 2018; 19(19):517.
Vilar S, Uriarte E, Santana L, Tatonetti NP, Friedman C. Detection of drug-drug interactions by modeling interaction profile fingerprints. PLoS ONE. 2013; 8(3):58321.
Cheng F, Zhao Z. Machine learning-based prediction of drug–drug interactions by integrating drug phenotypic, therapeutic, chemical, and genomic properties. J Am Med Inf Assoc. 2014; 21(e2):278–86.
Vilar S, Harpaz R, Uriarte E, Santana L, Rabadan R, Friedman C. Drug—drug interaction through molecular structure similarity analysis. J Am Med Inf Assoc. 2012; 19(6):1066–74.
Gottlieb A, Stein GY, Oron Y, Ruppin E, Sharan R. Indi: a computational framework for inferring drug interactions and their associated recommendations. Mol Syst Biol. 2012; 8(1):592.
Zhang P, Wang F, Hu J, Sorrentino R. Label propagation prediction of drug-drug interactions based on clinical side effects. Sci Rep. 2015; 5:12339.
He L, Yang Z, Zhao Z, Lin H, Li Y. Extracting drug-drug interaction from the biomedical literature using a stacked generalization-based approach. PLoS ONE. 2013; 8(6):65814.
Duke JD, Han X, Wang Z, Subhadarshini A, Karnik SD, Li X, Hall SD, Jin Y, Callaghan JT, Overhage MJ, et al. Literature based drug interaction prediction with clinical assessment using electronic medical records: novel myopathy associated drug interactions. PLoS Comput Biol. 2012; 8(8):1002614.
Tatonetti NP, Denny J, Murphy S, Fernald G, Krishnan G, Castro V, Yue P, Tsau P, Kohane I, Roden D, et al. Detecting drug interactions from adverse-event reports: interaction between paroxetine and pravastatin increases blood glucose levels. Clin Pharmacol Ther. 2011; 90(1):133–42.
Tatonetti NP, Fernald GH, Altman RB. A novel signal detection algorithm for identifying hidden drug-drug interactions in adverse event reports. J Am Med Inf Assoc. 2011; 19(1):79–85.
Cami A, Manzi S, Arnold A, Reis BY. Pharmacointeraction network models predict unknown drug-drug interactions. PLoS ONE. 2013; 8(4):61468.
Yang P, Li X-L, Mei J-P, Kwoh C-K, Ng S-K. Positive-unlabeled learning for disease gene identification. Bioinformatics. 2012; 28(20):2640–7.
Yang P, Li X, Chua H-N, Kwoh C-K, Ng S-K. Ensemble positive unlabeled learning for disease gene identification. PLoS ONE. 2014; 9(5):97079.
Lan W, Wang J, Li M, Liu J, Li Y, Wu F-X, Pan Y. Predicting drug–target interaction using positive-unlabeled learning. Neurocomputing. 2016; 206:50–57.
Li F, Zhang Y, Purcell AW, Webb GI, Chou K-C, Lithgow T, Li C, Song J. Positive-unlabelled learning of glycosylation sites in the human proteome. BMC Bioinformatics. 2019; 20(1):112.
Liu B, Lee WS, Yu PS, Li X. Partially supervised classification of text documents. In: ICML, vol. 2. Citeseer: 2002. p. 387–94.
Yu H, Han J, Chang KC-C. Pebl: positive example based learning for web page classification using svm. In: Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM: 2002. p. 239–48. https://doi.org/10.1145/775082.775083.
Yu H, Zuo W, Peng T. A new pu learning algorithm for text classification. In: Mexican International Conference on Artificial Intelligence. Springer: 2005. p. 824–32. https://doi.org/10.1007/11579427_84.
He J, Zhang Y, Li X, Shi P. Learning naive bayes classifiers from positive and unlabelled examples with uncertainty. Int J Syst Sci. 2012; 43(10):1805–25.
Hameed PN, Verspoor K, Kusljic S, Halgamuge S. Positive-unlabeled learning for inferring drug interactions based on heterogeneous attributes. BMC Bioinformatics. 2017; 18(1):140.
Xiao Y, Wang H, Xu W. Parameter selection of gaussian kernel for one-class svm. IEEE Trans Cybern. 2015; 45(5):941–53.
Khan SS, Madden MG. A survey of recent trends in one class classification. In: Irish Conference on Artificial Intelligence and Cognitive Science. Dublin: Springer: 2009. p. 188–97. https://doi.org/10.1007/978-3-642-17080-5_21.
Zhang W, Chen Y, Liu F, Luo F, Tian G, Li X. Predicting potential drug-drug interactions by integrating chemical, biological, phenotypic and network data. BMC Bioinformatics. 2017; 18(1):18.
Wishart DS, Feunang YD, Guo AC, Lo EJ, Marcu A, Grant JR, Sajed T, Johnson D, Li C, Sayeeda Z, et al. Drugbank 5.0: a major update to the drugbank database for 2018. Nucleic Acids Res. 2017; 46(D1):1074–82.
Ursu O, Holmes J, Knockel J, Bologa CG, Yang JJ, Mathias SL, Nelson SJ, Oprea TI. Drugcentral: online drug compendium. Nucleic Acids Res. 2017; 45(D1):932–9. https://doi.org/10.1093/nar/gkw993.
Kuhn M, Letunic I, Jensen LJ, Bork P. The sider database of drugs and side effects. Nucleic Acids Res. 2015; 44(D1):1075–9.
Davis AP, Grondin CJ, Johnson RJ, Sciaky D, King BL, McMorran R, Wiegers J, Wiegers TC, Mattingly CJ. The comparative toxicogenomics database: update 2017. Nucleic Acids Res. 2016; 45(D1):972–8.
Tatonetti NP, Patrick PY, Daneshjou R, Altman RB. Data-driven prediction of drug effects and interactions. Sci Transl Med. 2012; 4(125):125–3112531.
Kim S, Chen J, Cheng T, Gindulyte A, He J, He S, Li Q, Shoemaker BA, Thiessen PA, Yu B, et al. Pubchem 2019 update: improved access to chemical data. Nucleic Acids Res. 2018; 47(D1):1102–9.
This article has been published as part of BMC Bioinformatics, Volume 20 Supplement 19, 2019: 18th International Conference on Bioinformatics. The full contents of the supplement are available at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-19.
Publication of this supplement was funded by Faculty of Engineering and Information Technology, University of Technology Sydney.
Advanced Analytics Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, 15 Broadway Ultimo, Sydney, 2007, Australia
Yi Zheng, Hui Peng, Xiaocai Zhang, Zhixun Zhao & Jinyan Li
School of Engineering and Computer Science, Victoria University of Wellington, Cotton Building, Kelburn Campus, Wellington, 6140, New Zealand
Xiaoying Gao
Yi Zheng
Hui Peng
Xiaocai Zhang
Zhixun Zhao
Jinyan Li
YZ and JL conceived the work. YZ and HP developed the method. YZ implemented the algorithms. JL and XG supervised the study. YZ, XZ and ZZ wrote the manuscript. All authors revised and approved the final manuscript.
Correspondence to Jinyan Li.
Jinyan Li is a member of the editorial board (Associate Editor) of BMC Bioinformatics.
The supplementary results for this work.
• "Feature importance ranking using Random Forrest" : implementation details and experiment results of the feature importance ranking analysis using Random Forrest.
• Figure S1: AUCs of DDI-PULearn with different PCNs (PDF 316 kb).
This file contains lists of researched drugs, verified DDIs, reliable negative samples generated by DDI-PULearn, and the detailed feature importance ranking results.
• Table S1: DDI prediction results using different combinations of drug features.
• Table S2: 548 drugs researched in this work.
• Table S3: 45,026 reliable negative samples generated by DDI-PULearn.
• Table S4: 48,584 verified DDIs in the benchmark dataset.
• Table S5: Detailed feature importance ranking results by Random Forest.
• Table S6: 6602 reliable negative sample seeds generated by OCSVM and KNN (XLSX 1,661 kb).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Zheng, Y., Peng, H., Zhang, X. et al. DDI-PULearn: a positive-unlabeled learning method for large-scale prediction of drug-drug interactions. BMC Bioinformatics 20, 661 (2019). https://doi.org/10.1186/s12859-019-3214-6
Accepted: 12 November 2019
Drug interaction prediction
Positive-unlabeled learning
Big Ideas Math Algebra 2 Answers Chapter 2 Quadratic Functions
April 7, 2022 by Prasanna
If you are stuck solving complex problems on Quadratic Functions, then stop worrying and start practicing the concepts of Chapter 2 from Big Ideas Math Algebra 2 Answers. It holds all chapters' answer keys in PDF format. Here, in this article, you will find the details about Big Ideas Math Algebra 2 Answers Chapter 2 Quadratic Functions. This material is a complete guide for high school students to learn the concepts of quadratic functions. Hence, download the topic-wise BIM Algebra 2 Ch 2 Textbook Solutions from the links available below and start your practice sessions before any examination.
Big Ideas Math Book Algebra 2 Answer Key Chapter 2 Quadratic Functions
Students can access these topic-wise Big Ideas Math Algebra 2 Ch 2 Answers online or offline whenever required and kickstart their preparation. You can easily clear all your subject-related queries using the BIM Algebra 2 Ch 2 Answer Key. This BIM Textbook Algebra 2 Chapter 2 Solution Key includes various easy & complex questions belonging to Lessons 2.1 to 2.4, Assessment Tests, Chapter Tests, Cumulative Assessments, etc. Apart from the quadratic functions exercises, you can also find the exercise on the Lesson Focus of a Parabola. Excel in mathematics examinations by practicing more and more using the Big Ideas Math Algebra 2 Ch 2 Answer Key.
Quadratic Functions Maintaining Mathematical Proficiency – Page 45
Quadratic Functions Mathematical Practices – Page 46
Lesson 2.1 Transformations of Quadratic Functions – Page (48-54)
Transformations of Quadratic Functions 2.1 Exercises – Page (52-54)
Lesson 2.2 Characteristics of Quadratic Functions – Page (56-64)
Characteristics of Quadratic Functions 2.2 Exercises – Page (61-64)
Quadratic Functions Study Skills Using the Features of Your Textbook to Prepare for Quizzes and Tests – Page 65
Quadratic Functions 2.1 – 2.2 Quiz – Page 66
Lesson 2.3 Focus of a Parabola – Page (68-74)
Focus of a Parabola 2.3 Exercises – Page (72-74)
Lesson 2.4 Modeling with Quadratic Functions – Page (76-82)
Modeling with Quadratic Functions 2.4 Exercises – Page (80-82)
Quadratic Functions Performance Task: Accident Reconstruction – Page 83
Quadratic Functions Chapter Review – Page (84-86)
Quadratic Functions Chapter Test – Page 87
Quadratic Functions Cumulative Assessment – Page (88-89)
Quadratic Functions Maintaining Mathematical Proficiency
Find the x-intercept of the graph of the linear equation.
Question 1.
y = 2x + 7
Question 2.
y = -6x + 8
Question 3.
y = -10x – 36
Question 4.
y = 3(x – 5)
Question 5.
y = -4(x + 10)
Question 6.
3x + 6y = 24
Find the distance between the two points.
Question 7.
(2, 5), (-4, 7)
Question 8.
(-1, 0), (-8, 4)
Question 9.
(3, 10), (5, 9)
Question 10.
(7, -4), (-5, 0)
Question 11.
(4, -8), (4, 2)
Question 12.
(0, 9), (-3, -6)
ABSTRACT REASONING Use the Distance Formula to write an expression for the distance between the two points (a, c) and (b, c). Is there an easier way to find the distance when the x-coordinates are equal? Explain your reasoning.
Quadratic Functions Mathematical Practices
Decide whether the syllogism represents correct or flawed reasoning. If flawed, explain why the conclusion is not valid.
All mammals are warm-blooded.
All dogs are mammals.
Therefore, all dogs are warm-blooded.
My pet is warm-blooded.
Therefore, my pet is a mammal.
If I am sick, then I will miss school.
I missed school.
Therefore, I am sick.
I did not miss school.
Therefore, I am not sick.
Lesson 2.1 Transformations of Quadratic Functions
How do the constants a, h, and k affect the graph of the quadratic function g(x) = a(x – h)² + k?
The parent function of the quadratic family is f(x) = x². A transformation of the graph of the parent function is represented by the function g(x) = a(x – h)² + k, where a ≠ 0.
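For instance, take g(x) = 2(x – 3)² + 1. Comparing with g(x) = a(x – h)² + k gives a = 2, h = 3, and k = 1: the graph of f(x) = x² is stretched vertically by a factor of 2, translated 3 units right, and translated 1 unit up, so the vertex moves from (0, 0) to (h, k) = (3, 1).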
EXPLORATION 1
Identifying Graphs of Quadratic Functions
Work with a partner. Match each quadratic function with its graph. Explain your reasoning. Then use a graphing calculator to verify that your answer is correct.
a. g(x) = -(x – 2)²
b. g(x) = (x – 2)² + 2
c. g(x) = -(x + 2)² – 2
d. g(x) = 0.5(x – 2)² + 2
e. g(x) = 2(x – 2)²
f. g(x) = -(x + 2)² + 2
Communicate Your Answer
How do the constants a, h, and k affect the graph of the quadratic function g(x) = a(x – h)² + k?
Write the equation of the quadratic function whose graph is shown at the right. Explain your reasoning. Then use a graphing calculator to verify that your equation is correct.
2.1 Lesson
Describe the transformation of f(x) = x² represented by g. Then graph each function.
g(x) = (x – 3)²
g(x) = (x + 2)² – 2
g(x) = (x + 5)² + 1
g(x) = (\(\frac{1}{3}\)x)²
g(x) = 3(x – 1)²
g(x) = -(x + 3)² + 2
Let the graph of g be a vertical shrink by a factor of \(\frac{1}{2}\) followed by a translation 2 units up of the graph of f(x) = x². Write a rule for g and identify the vertex.
Let the graph of g be a translation 4 units left followed by a horizontal shrink by a factor of \(\frac{1}{3}\) of the graph of f(x) = x² + x. Write a rule for g.
WHAT IF? In Example 5, the water hits the ground 10 feet closer to the fire truck after lowering the ladder. Write a function that models the new path of the water.
Transformations of Quadratic Functions 2.1 Exercises
Vocabulary and Core Concept Check
COMPLETE THE SENTENCE The graph of a quadratic function is called a(n) ________.
VOCABULARY Identify the vertex of the parabola given by f(x) = (x + 2)^2 – 4.
Monitoring Progress and Modeling with Mathematics
In Exercises 3–12, describe the transformation of f(x) = x^2 represented by g. Then graph each function.
g(x) = x^2 – 3
g(x) = x^2 + 1
g(x) = (x + 2)^2
g(x) = (x – 9)^2 + 5
g(x) = (x + 10)^2 – 3
ANALYZING RELATIONSHIPS In Exercises 13–16, match the function with the correct transformation of the graph of f. Explain your reasoning.
y = f(x – 1)
y = f(x) + 1
y = f(x – 1) + 1
y = f(x + 1)
In Exercises 17–24, describe the transformation of f(x) = x^2 represented by g. Then graph each function.
g(x) = -x^2
g(x) = (-x)^2
g(x) = 3x^2
g(x) = \(\frac{1}{3}\)x^2
g(x) = (2x)^2
g(x) = -(2x)^2
g(x) = \(\frac{1}{5}\)x^2 – 4
g(x) = \(\frac{1}{2}\)(x – 1)^2
ERROR ANALYSIS In Exercises 25 and 26, describe and correct the error in analyzing the graph of f(x) = −6x^2 + 4.
USING STRUCTURE In Exercises 27–30, describe the transformation of the graph of the parent quadratic function. Then identify the vertex.
f(x) = 3(x + 2)^2 + 1
f(x) = -4(x + 1)^2 – 5
f(x) = -2x^2 + 5
f(x) = \(\frac{1}{2}\)(x – 1)^2
In Exercises 31–34, write a rule for g described by the transformations of the graph of f. Then identify the vertex.
f(x) = x^2; vertical stretch by a factor of 4 and a reflection in the x-axis, followed by a translation 2 units up
f(x) = x^2; vertical shrink by a factor of \(\frac{1}{3}\) and a reflection in the y-axis, followed by a translation 3 units right
f(x) = 8x^2 – 6; horizontal stretch by a factor of 2 and a translation 2 units up, followed by a reflection in the y-axis
f(x) = (x + 6)^2 + 3; horizontal shrink by a factor of \(\frac{1}{2}\) and a translation 1 unit down, followed by a reflection in the x-axis
USING TOOLS In Exercises 35–40, match the function with its graph. Explain your reasoning.
g(x) = 2(x – 1)^2 – 2
g(x) = \(\frac{1}{2}\)(x + 1)^2 – 2
g(x) = -2(x – 1)^2 + 2
g(x) = 2(x + 1)^2 + 2
g(x) = -2(x + 1)^2 – 2
g(x) = 2(x – 1)^2 + 2
JUSTIFYING STEPS In Exercises 41 and 42, justify each step in writing a function g based on the transformations of f(x) = 2x^2 + 6x.
translation 6 units down followed by a reflection in the x-axis
reflection in the y-axis followed by a translation 4 units right
MODELING WITH MATHEMATICS The function h(x) = -0.03(x – 14)^2 + 6 models the jump of a red kangaroo, where x is the horizontal distance traveled (in feet) and h(x) is the height (in feet). When the kangaroo jumps from a higher location, it lands 5 feet farther away. Write a function that models the second jump.
MODELING WITH MATHEMATICS The function f(t) = -16t^2 + 10 models the height (in feet) of an object t seconds after it is dropped from a height of 10 feet on Earth. The same object dropped from the same height on the moon is modeled by g(t) = –\(\frac{8}{3}\)t^2 + 10. Describe the transformation of the graph of f to obtain g. From what height must the object be dropped on the moon so it hits the ground at the same time as on Earth?
MODELING WITH MATHEMATICS Flying fish use their pectoral fins like airplane wings to glide through the air.
a. Write an equation of the form y = a(x – h)^2 + k with vertex (33, 5) that models the flight path, assuming the fish leaves the water at (0, 0).
b. What are the domain and range of the function? What do they represent in this situation?
c. Does the value of a change when the flight path has vertex (30, 4)? Justify your answer.
HOW DO YOU SEE IT? Describe the graph of g as a transformation of the graph of f(x) = x^2.
COMPARING METHODS Let the graph of g be a translation 3 units up and 1 unit right followed by a vertical stretch by a factor of 2 of the graph of f(x) = x^2.
a. Identify the values of a, h, and k and use vertex form to write the transformed function.
b. Use function notation to write the transformed function. Compare this function with your function in part (a).
c. Suppose the vertical stretch was performed first, followed by the translations. Repeat parts (a) and (b).
d. Which method do you prefer when writing a transformed function? Explain.
THOUGHT PROVOKING A jump on a pogo stick with a conventional spring can be modeled by f(x) = -0.5(x – 6)^2 + 18, where x is the horizontal distance (in inches) and f(x) is the vertical distance (in inches). Write at least one transformation of the function and provide a possible reason for your transformation.
MATHEMATICAL CONNECTIONS The area of a circle depends on the radius, as shown in the graph. A circular earring with a radius of r millimeters has a circular hole with a radius of \(\frac{3 r}{4}\) millimeters. Describe a transformation of the graph below that models the area of the blue portion of the earring.
Maintaining Mathematical Proficiency
A line of symmetry for the figure is shown in red. Find the coordinates of point A. (Skills Review Handbook)
Lesson 2.2 Characteristics of Quadratic Functions
What type of symmetry does the graph of f(x) = a(x – h)^2 + k have and how can you describe this symmetry?
Parabolas and Symmetry
Work with a partner.
a. Complete the table. Then use the values in the table to sketch the graph of the function
f(x) = \(\frac{1}{2}\)x^2 – 2x – 2 on graph paper.
b. Use the results in part (a) to identify the vertex of the parabola.
c. Find a vertical line on your graph paper so that when you fold the paper, the left portion of the graph coincides with the right portion of the graph. What is the equation of this line? How does it relate to the vertex?
d. Show that the vertex form f(x) = \(\frac{1}{2}\)(x – 2)^2 – 4 is equivalent to the function given in part (a).
Work with a partner. Repeat Exploration 1 for the function given by f(x) = –\(\frac{1}{3}\)x^2 + 2x + 3 = –\(\frac{1}{3}\)(x – 3)^2 + 6.
Describe the symmetry of each graph. Then use a graphing calculator to verify your answer.
a. f(x) = -(x – 1)^2 + 4
b. f(x) = (x + 1)^2 – 2
c. f(x) = 2(x – 3)^2 + 1
d. f(x) = \(\frac{1}{2}\)(x + 2)^2
e. f(x) = -2x^2 + 3
f. f(x) = 3(x – 5)^2 + 2
Graph the function. Label the vertex and axis of symmetry.
f(x) = -3(x + 1)^2
h(x) = x^2 + 2x – 1
p(x) = -2x^2 – 8x + 1
Find the minimum value or maximum value of
(a) f(x) = 4x^2 + 16x – 3 and
(b) h(x) = -x^2 + 5x + 9. Describe the domain and range of each function, and where each function is increasing and decreasing.
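As a quick illustration of the method for part (a): the vertex of f(x) = 4x^2 + 16x – 3 lies at x = –\(\frac{b}{2a}\) = –\(\frac{16}{8}\) = –2, and f(–2) = 16 – 32 – 3 = –19; since a = 4 > 0, the parabola opens upward, so –19 is the minimum value.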
Graph the function. Label the x-intercepts, vertex, and axis of symmetry.
f(x) = -(x + 1)(x + 5)
g(x) = \(\frac{1}{4}\)(x – 6)(x – 2)
WHAT IF? The graph of your third shot is a parabola through the origin that reaches a maximum height of 28 yards when x = 45. Compare the distance it travels before it hits the ground with the distances of the first two shots.
Characteristics of Quadratic Functions 2.2 Exercises
Vocabulary and Core Concept Check
WRITING Explain how to determine whether a quadratic function will have a minimum value or a maximum value.
WHICH ONE DOESN'T BELONG? The graph of which function does not belong with the other three? Explain.
f(x) = (x – 3)^2
h(x) = (x + 4)^2
y = (x – 7)^2 – 1
y = -4(x – 2)^2 + 4
g(x) = 2(x + 1)^2 – 3
f(x) = -2(x – 1)^2 – 5
h(x) = 4(x + 4)^2 + 6
y = –\(\frac{1}{4}\)(x + 2)^2 + 1
y = \(\frac{1}{2}\)(x – 3)^2 + 2
f(x) = 0.4(x – 1)^2
g(x) = 0.75x^2 – 5
ANALYZING RELATIONSHIPS In Exercises 15–18, use the axis of symmetry to match the equation with its graph.
y = 2(x – 3)^2 + 1
y = (x + 4)^2 – 2
REASONING In Exercises 19 and 20, use the axis of symmetry to plot the reflection of each point and complete the parabola.
In Exercises 21–30, graph the function. Label the vertex and axis of symmetry.
y = x^2 + 2x + 1
y = 3x^2 – 6x + 4
y = -4x^2 + 8x + 2
f(x) = -x^2 – 6x + 3
g(x) = -x^2 – 1
f(x) = 6x^2 – 5
g(x) = -1.5x^2 + 3x + 2
f(x) = 0.5x^2 + x – 3
y = \(\frac{3}{2}\)x^2 – 3x + 6
y = –\(\frac{5}{2}\)x^2 – 4x – 1
WRITING Two quadratic functions have graphs with vertices (2, 4) and (2, -3). Explain why you cannot use the axes of symmetry to distinguish between the two functions.
WRITING A quadratic function is increasing to the left of x = 2 and decreasing to the right of x = 2. Will the vertex be the highest or lowest point on the graph of the parabola? Explain.
ERROR ANALYSIS In Exercises 33 and 34, describe and correct the error in analyzing the graph of y = 4x^2 + 24x − 7.
MODELING WITH MATHEMATICS In Exercises 35 and 36, x is the horizontal distance (in feet) and y is the vertical distance (in feet). Find and interpret the coordinates of the vertex.
The path of a basketball thrown at an angle of 45° can be modeled by y = -0.02x^2 + x + 6.
The path of a shot put released at an angle of 35° can be modeled by y = -0.01x^2 + 0.7x + 6.
ANALYZING EQUATIONS The graph of which function has the same axis of symmetry as the graph of y = x^2 + 2x + 2?
A. y = 2x^2 + 2x + 2
B. y = -3x^2 – 6x + 2
C. y = x^2 – 2x + 2
D. y = -5x^2 + 10x + 23
USING STRUCTURE Which function represents the widest parabola? Explain your reasoning.
A. y = 2(x + 3)^2
B. y = x^2 – 5
C. y = 0.5(x – 1)^2 + 1
D. y = -x^2 + 6
In Exercises 39–48, find the minimum or maximum value of the function. Describe the domain and range of the function, and where the function is increasing and decreasing.
y = 6x^2 – 1
y = 9x^2 + 7
y = -x^2 – 4x – 2
g(x) = -3x^2 – 6x + 5
f(x) = -2x^2 + 8x + 7
g(x) = 3x^2 + 18x – 5
h(x) = 2x^2 – 12x
h(x) = x^2 – 4x
f(x) = \(\frac{3}{2}\)x^2 + 6x + 4
PROBLEM SOLVING The path of a diver is modeled by the function f(x) = -9x^2 + 9x + 1, where f(x) is the height of the diver (in meters) above the water and x is the horizontal distance (in meters) from the end of the diving board.
a. What is the height of the diving board?
b. What is the maximum height of the diver?
c. Describe where the diver is ascending and where the diver is descending.
PROBLEM SOLVING The engine torque y (in foot-pounds) of one model of car is given by y = -3.75x^2 + 23.2x + 38.8, where x is the speed (in thousands of revolutions per minute) of the engine.
a. Find the engine speed that maximizes torque. What is the maximum torque?
b. Explain what happens to the engine torque as the speed of the engine increases.
MATHEMATICAL CONNECTIONS In Exercises 51 and 52, write an equation for the area of the figure. Then determine the maximum possible area of the figure.
In Exercises 53–60, graph the function. Label the x-intercept(s), vertex, and axis of symmetry.
y = (x + 3)(x – 3)
y = 3(x + 2)(x + 6)
f(x) = 2(x – 5)(x – 1)
g(x) = -x(x + 6)
y = -4x(x + 7)
f(x) = -2(x – 3)^2
y = 4(x – 7)^2
USING TOOLS In Exercises 61–64, identify the x-intercepts of the function and describe where the graph is increasing and decreasing. Use a graphing calculator to verify your answer.
f(x) = \(\frac{1}{2}\)(x – 2)(x + 6)
y = \(\frac{3}{4}\)(x + 1)(x – 3)
g(x) = -4(x – 4)(x – 2)
h(x) = -5(x + 5)(x + 1)
MODELING WITH MATHEMATICS A soccer player kicks a ball downfield. The height of the ball increases until it reaches a maximum height of 8 yards, 20 yards away from the player. A second kick is modeled by y = x(0.4 – 0.008x). Which kick travels farther before hitting the ground? Which kick travels higher?
MODELING WITH MATHEMATICS Although a football field appears to be flat, some are actually shaped like a parabola so that rain runs off to both sides. The cross section of a field can be modeled by y = -0.000234x(x – 160), where x and y are measured in feet. What is the width of the field? What is the maximum height of the surface of the field?
REASONING The points (2, 3) and (-4, 2) lie on the graph of a quadratic function. Determine whether you can use these points to find the axis of symmetry. If not, explain. If so, write the equation of the axis of symmetry.
OPEN-ENDED Write two different quadratic functions in intercept form whose graphs have the axis of symmetry x = 3.
PROBLEM SOLVING An online music store sells about 4000 songs each day when it charges $1 per song. For each $0.05 increase in price, about 80 fewer songs per day are sold. Use the verbal model and quadratic function to determine how much the store should charge per song to maximize daily revenue.
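A sketch of the setup for this problem (illustrative working, not the book's printed model): revenue = (price)(songs sold), so R(x) = (1 + 0.05x)(4000 – 80x) = 4000 + 120x – 4x^2, where x is the number of $0.05 increases. The vertex is at x = \(\frac{120}{8}\) = 15, giving a price of 1 + 0.05(15) = $1.75 per song and a maximum daily revenue of $4900.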
PROBLEM SOLVING An electronics store sells 70 digital cameras per month at a price of $320 each. For each $20 decrease in price, about 5 more cameras per month are sold. Use the verbal model and quadratic function to determine how much the store should charge per camera to maximize monthly revenue.
DRAWING CONCLUSIONS Compare the graphs of the three quadratic functions. What do you notice? Rewrite the functions f and g in standard form to justify your answer.
f(x) = (x + 3)(x + 1)
h(x) = x^2 + 4x + 3
USING STRUCTURE Write the quadratic function f(x) = x^2 + x – 12 in intercept form. Graph the function. Label the x-intercepts, y-intercept, vertex, and axis of symmetry.
PROBLEM SOLVING A woodland jumping mouse hops along a parabolic path given by y = -0.2x^2 + 1.3x, where x is the mouse's horizontal distance traveled (in feet) and y is the corresponding height (in feet). Can the mouse jump over a fence that is 3 feet high? Justify your answer.
HOW DO YOU SEE IT? Consider the graph of the function f(x) = a(x – p)(x – q).
a. What does f(\(\frac{p+q}{2}\)) represent in the graph?
b. If a < 0, how does your answer in part (a) change? Explain.
MODELING WITH MATHEMATICS The Gateshead Millennium Bridge spans the River Tyne. The arch of the bridge can be modeled by a parabola. The arch reaches a maximum height of 50 meters at a point roughly 63 meters across the river. Graph the curve of the arch. What are the domain and range? What do they represent in this situation?
You have 100 feet of fencing to enclose a rectangular garden. Draw three possible designs for the garden. Of these, which has the greatest area? Make a conjecture about the dimensions of the rectangular garden with the greatest possible area. Explain your reasoning.
MAKING AN ARGUMENT The point (1, 5) lies on the graph of a quadratic function with axis of symmetry x = -1. Your friend says the vertex could be the point (0, 5). Is your friend correct? Explain.
CRITICAL THINKING Find the y-intercept in terms of a, p, and q for the quadratic function f(x) = a(x – p)(x – q).
MODELING WITH MATHEMATICS A kernel of popcorn contains water that expands when the kernel is heated, causing it to pop. The equations below represent the "popping volume" y (in cubic centimeters per gram) of popcorn with moisture content x (as a percent of the popcorn's weight).
Hot-air popping: y = -0.761(x – 5.52)(x – 22.6)
Hot-oil popping: y = -0.652(x – 5.35)(x – 21.8)
a. For hot-air popping, what moisture content maximizes popping volume? What is the maximum volume?
b. For hot-oil popping, what moisture content maximizes popping volume? What is the maximum volume?
c. Use a graphing calculator to graph both functions in the same coordinate plane. What are the domain and range of each function in this situation? Explain.
ABSTRACT REASONING A function is written in intercept form with a > 0. What happens to the vertex of the graph as a increases? as a approaches 0?
Solve the equation. Check for extraneous solutions. (Skills Review Handbook)
3\(\sqrt{x}\) – 6 = 0
2\(\sqrt{x-4}\) – 2 = 2
\(\sqrt{5x}\) + 5 = 0
\(\sqrt{3x+8}\) = \(\sqrt{x+4}\)
Solve the proportion. (Skills Review Handbook)
\(\frac{1}{2}\) = \(\frac{x}{4}\)
\(\frac{-1}{4}\) = \(\frac{3}{x}\)
\(\frac{5}{2}\) = -\(\frac{20}{x}\)
Quadratic Functions Study Skills Using the Features of Your Textbook to Prepare for Quizzes and Tests
Mathematical Practices
Why does the height you found in Exercise 44 on page 53 make sense in the context of the situation?
How can you effectively communicate your preference in methods to others in Exercise 47 on page 54?
How can you use technology to deepen your understanding of the concepts in Exercise 79 on page 64?
Using the Features of Your Textbook to Prepare for Quizzes and Tests
Read and understand the core vocabulary and the contents of the Core Concept boxes.
Review the Examples and the Monitoring Progress questions. Use the tutorials at BigIdeasMath.com for additional help.
Review previously completed homework assignments.
Quadratic Functions 2.1 – 2.2 Quiz
2.1 – 2.2 Quiz
Describe the transformation of f(x) = x^2 represented by g. (Section 2.1)
Write a rule for g and identify the vertex. (Section 2.1)
Let g be a translation 2 units up followed by a reflection in the x-axis and a vertical stretch by a factor of 6 of the graph of f(x) = x^2.
Let g be a translation 1 unit left and 6 units down, followed by a vertical shrink by a factor of \(\frac{1}{2}\) of the graph of f(x) = 3(x + 2)^2.
Let g be a horizontal shrink by a factor of \(\frac{1}{4}\), followed by a translation 1 unit up and 3 units right of the graph of f(x) = (2x + 1)^2 – 11.
Graph the function. Label the vertex and axis of symmetry. (Section 2.2)
f(x) = 2(x – 1)^2 – 5
h(x) = 3x^2 + 6x – 2
f(x) = 7 – 8x – x^2
Find the x-intercepts of the graph of the function. Then describe where the function is increasing and decreasing.(Section 2.2)
g(x) = -3(x + 2)(x + 4)
g(x) = \(\frac{1}{2}\)(x – 5)(x + 1)
f(x) = 0.4x(x – 6)
A grasshopper can jump incredible distances, up to 20 times its length. The height (in inches) of the jump above the ground of a 1-inch-long grasshopper is given by h(x) = –\(\frac{1}{20}\)x^2 + x, where x is the horizontal distance (in inches) of the jump. When the grasshopper jumps off a rock, it lands on the ground 2 inches farther. Write a function that models the new path of the jump. (Section 2.1)
A passenger on a stranded lifeboat shoots a distress flare into the air. The height (in feet) of the flare above the water is given by f(t) = -16t(t – 8), where t is time (in seconds) since the flare was shot. The passenger shoots a second flare, whose path is modeled in the graph. Which flare travels higher? Which remains in the air longer? Justify your answer. (Section 2.2)
Lesson 2.3 Focus of a Parabola
What is the focus of a parabola?
Analyzing Satellite Dishes
Work with a partner. Vertical rays enter a satellite dish whose cross section is a parabola. When the rays hit the parabola, they reflect at the same angle at which they entered. (See Ray 1 in the figure.)
a. Draw the reflected rays so that they intersect the y-axis.
b. What do the reflected rays have in common?
c. The optimal location for the receiver of the satellite dish is at a point called the focus of the parabola. Determine the location of the focus. Explain why this makes sense in this situation.
Analyzing Spotlights
Work with a partner. Beams of light are coming from the bulb in a spotlight, located at the focus of the parabola. When the beams hit the parabola, they reflect at the same angle at which they hit. (See Beam 1 in the figure.) Draw the reflected beams. What do they have in common? Would you consider this to be the optimal result? Explain.
Describe some of the properties of the focus of a parabola.
Use the Distance Formula to write an equation of the parabola with focus F(0, -3) and directrix y = 3.
Identify the focus, directrix, and axis of symmetry of the parabola. Then graph the equation.
y = 0.5x^2
-y = x^2
y^2 = 6x
Write an equation of the parabola with vertex at (0, 0) and the given directrix or focus.
directrix: x = -3
focus: (-2, 0)
focus: (0, \(\frac{3}{2}\))
Write an equation of a parabola with vertex (-1, 4) and focus (-1, 2).
A parabolic microwave antenna is 16 feet in diameter. Write an equation that represents the cross section of the antenna with its vertex at (0, 0) and its focus 10 feet to the right of the vertex. What is the depth of the antenna?
Focus of a Parabola 2.3 Exercises
COMPLETE THE SENTENCE A parabola is the set of all points in a plane equidistant from a fixed point called the ______ and a fixed line called the __________ .
WRITING Explain how to find the coordinates of the focus of a parabola with vertex (0, 0) and directrix y = 5.
In Exercises 3–10, use the Distance Formula to write an equation of the parabola.
focus: (0, -2)
directrix: y = 2
vertex: (0, 0)
directrix: y = -6
focus: (0, 5)
focus: (0, -10)
ANALYZING RELATIONSHIPS Which of the given characteristics describe parabolas that open down? Explain your reasoning.
A. focus: (0, -6)
B. focus: (0, -2)
C. focus: (0, 6)
D. focus: (0, -1)
REASONING Which of the following are possible coordinates of the point P in the graph shown? Explain.
A. (-6, -1)
B. (3, –\(\frac{1}{4}\))
C. (4, –\(\frac{4}{9}\))
D. (1, –\(\frac{1}{36}\))
E. (6, -1)
F. (2, –\(\frac{1}{18}\))
In Exercises 13–20, identify the focus, directrix, and axis of symmetry of the parabola. Graph the equation.
y = \(\frac{1}{8}\)x^2
y = –\(\frac{1}{12}\)x^2
x = –\(\frac{1}{20}\)y^2
x = \(\frac{1}{24}\)y^2
y^2 = 16x
-x^2 = 48y
6x^2 + 3y = 0
8x^2 – y = 0
ERROR ANALYSIS In Exercises 21 and 22, describe and correct the error in graphing the parabola.
ANALYZING EQUATIONS The cross section (with units in inches) of a parabolic satellite dish can be modeled by the equation y = \(\frac{1}{38}\)x^2. How far is the receiver from the vertex of the cross section? Explain.
ANALYZING EQUATIONS The cross section (with units in inches) of a parabolic spotlight can be modeled by the equation x = \(\frac{1}{20}\)y^2. How far is the bulb from the vertex of the cross section? Explain.
In Exercises 25–28, write an equation of the parabola shown.
In Exercises 29–36, write an equation of the parabola with the given characteristics.
focus: (\(\frac{2}{3}\), 0)
directrix: x = –\(\frac{2}{3}\)
directrix: x = -10
directrix: y = \(\frac{8}{3}\)
focus: (-\(\frac{4}{5}\), 0)
In Exercises 41–46, identify the vertex, focus, directrix, and axis of symmetry of the parabola. Describe the transformations of the graph of the standard equation with p = 1 and vertex (0, 0).
x = \(\frac{1}{16}\)(y – 3)^2 + 1
x = -3(y + 4)^2 + 2
x = 4(y + 5)^2 – 1
MODELING WITH MATHEMATICS Scientists studying dolphin echolocation simulate the projection of a bottlenose dolphin's clicking sounds using computer models. The models originate the sounds at the focus of a parabolic reflector. The parabola in the graph shows the cross section of the reflector with focal length of 1.3 inches and aperture width of 8 inches. Write an equation to represent the cross section of the reflector. What is the depth of the reflector?
MODELING WITH MATHEMATICS Solar energy can be concentrated using long troughs that have a parabolic cross section as shown in the figure. Write an equation to represent the cross section of the trough. What are the domain and range in this situation? What do they represent?
ABSTRACT REASONING As |p| increases, how does the width of the graph of the equation y = \(\frac{1}{4p}\)x^2 change? Explain your reasoning.
HOW DO YOU SEE IT? The graph shows the path of a volleyball served from an initial height of 6 feet as it travels over a net.
a. Label the vertex, focus, and a point on the directrix.
b. An underhand serve follows the same parabolic path but is hit from a height of 3 feet. How does this affect the focus? the directrix?
CRITICAL THINKING The distance from point P to the directrix is 2 units. Write an equation of the parabola.
THOUGHT PROVOKING Two parabolas have the same focus (a, b) and focal length of 2 units. Write an equation of each parabola. Identify the directrix of each parabola.
REPEATED REASONING Use the Distance Formula to derive the equation of a parabola that opens to the right with vertex (0, 0), focus (p, 0), and directrix x = -p.
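A sketch of the derivation (taking p > 0): a point (x, y) on the parabola is equidistant from the focus and the directrix, so \(\sqrt{(x-p)^{2}+y^{2}}=x+p\). Squaring both sides gives \((x-p)^{2}+y^{2}=(x+p)^{2}\), which simplifies to \(y^{2}=4px\).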
PROBLEM SOLVING The latus rectum of a parabola is the line segment that is parallel to the directrix, passes through the focus, and has endpoints that lie on the parabola. Find the length of the latus rectum of the parabola shown.
Write an equation of the line that passes through the points. (Section 1.3)
(1, -4), (2, -1)
(-3, 12), (0, 6)
Use a graphing calculator to find an equation for the line of best fit.
Lesson 2.4 Modeling with Quadratic Functions
How can you use a quadratic function to model a real-life situation?
Modeling with a Quadratic Function
Work with a partner. The graph shows a quadratic function of the form
P(t) = at^2 + bt + c
which approximates the yearly profits for a company, where P(t) is the profit in year t.
a. Is the value of a positive, negative, or zero? Explain.
b. Write an expression in terms of a and b that represents the year t when the company made the least profit.
c. The company made the same yearly profits in 2004 and 2012. Estimate the year in which the company made the least profit.
d. Assume that the model is still valid today. Are the yearly profits currently increasing, decreasing, or constant? Explain.
Modeling with a Graphing Calculator
Work with a partner. The table shows the heights h (in feet) of a wrench t seconds after it has been dropped from a building under construction.
a. Use a graphing calculator to create a scatter plot of the data, as shown at the right. Explain why the data appear to fit a quadratic model.
b. Use the quadratic regression feature to find a quadratic model for the data.
c. Graph the quadratic function on the same screen as the scatter plot to verify that it fits the data.
d. When does the wrench hit the ground? Explain.
Use the Internet or some other reference to find examples of real-life situations that can be modeled by quadratic functions.
WHAT IF? The vertex of the parabola is (50, 37.5). What is the height of the net?
Write an equation of the parabola that passes through the point (-1, 2) and has vertex (4, -9).
WHAT IF? The y-intercept is 4.8. How does this change your answers in parts (a) and (b)?
Write an equation of the parabola that passes through the point (2, 5) and has x-intercepts -2 and 4.
Write an equation of the parabola that passes through the points (-1, 4), (0, 1), and (2, 7).
The table shows the estimated profits y (in dollars) for a concert when the charge is x dollars per ticket. Write and evaluate a function to determine what the charge per ticket should be to maximize the profit.
The table shows the results of an experiment testing the maximum weights y (in tons) supported by ice x inches thick. Write a function that models the data. How much weight can be supported by ice that is 22 inches thick?
Modeling with Quadratic Functions 2.4 Exercises
WRITING Explain when it is appropriate to use a quadratic model for a set of data.
DIFFERENT WORDS, SAME QUESTION
Which is different? Find "both" answers.
In Exercises 3–8, write an equation of the parabola in vertex form.
passes through (13, 8) and has vertex (3, 2)
passes through (-7, -15) and has vertex (-5, 9)
passes through (0, -24) and has vertex (-6, -12)
passes through (6, 35) and has vertex (-1, 14)
In Exercises 9–14, write an equation of the parabola in intercept form.
x-intercepts of 12 and -6; passes through (14, 4)
x-intercepts of 9 and 1; passes through (0, -18)
x-intercepts of -16 and -2; passes through (-18, 72)
x-intercepts of -7 and -3; passes through (-2, 0.05)
WRITING Explain when to use intercept form and when to use vertex form when writing an equation of a parabola.
ANALYZING EQUATIONS Which of the following equations represent the parabola?
A. y = 2(x – 2)(x + 1)
B. y = 2(x + 0.5)^2 – 4.5
C. y = 2(x – 0.5)^2 – 4.5
D. y = 2(x + 2)(x – 1)
In Exercises 17–20, write an equation of the parabola in vertex form or intercept form.
ERROR ANALYSIS Describe and correct the error in writing an equation of the parabola.
MATHEMATICAL CONNECTIONS The area of a rectangle is modeled by the graph where y is the area (in square meters) and x is the width (in meters). Write an equation of the parabola. Find the dimensions and corresponding area of one possible rectangle. What dimensions result in the maximum area?
MODELING WITH MATHEMATICS Every rope has a safe working load. A rope should not be used to lift a weight greater than its safe working load. The table shows the safe working loads S (in pounds) for ropes with circumference C (in inches). Write an equation for the safe working load for a rope. Find the safe working load for a rope that has a circumference of 10 inches.
MODELING WITH MATHEMATICS A baseball is thrown up in the air. The table shows the heights y (in feet) of the baseball after x seconds. Write an equation for the path of the baseball. Find the height of the baseball after 1.7 seconds.
COMPARING METHODS You use a system with three variables to find the equation of a parabola that passes through the points (−8, 0), (2, −20), and (1, 0). Your friend uses intercept form to find the equation. Whose method is easier? Justify your answer.
MODELING WITH MATHEMATICS The table shows the distances y a motorcyclist is from home after x hours.
a. Determine what type of function you can use to model the data. Explain your reasoning.
b. Write and evaluate a function to determine the distance the motorcyclist is from home after 6 hours.
USING TOOLS The table shows the heights h (in feet) of a sponge t seconds after it was dropped by a window cleaner on top of a skyscraper.
a. Use a graphing calculator to create a scatter plot. Which better represents the data, a line or a parabola? Explain.
b. Use the regression feature of your calculator to find the model that best fits the data.
c. Use the model in part (b) to predict when the sponge will hit the ground.
d. Identify and interpret the domain and range in this situation.
MAKING AN ARGUMENT Your friend states that quadratic functions with the same x-intercepts have the same equations, vertex, and axis of symmetry. Is your friend correct? Explain your reasoning.
In Exercises 29–32, analyze the differences in the outputs to determine whether the data are linear, quadratic, or neither. Explain. If linear or quadratic, write an equation that fits the data.
PROBLEM SOLVING The graph shows the number y of students absent from school due to the flu each day x.
a. Interpret the meaning of the vertex in this situation.
b. Write an equation for the parabola to predict the number of students absent on day 10.
c. Compare the average rates of change in the students with the flu from 0 to 6 days and 6 to 11 days.
THOUGHT PROVOKING Describe a real-life situation that can be modeled by a quadratic equation. Justify your answer.
PROBLEM SOLVING The table shows the heights y of a competitive water-skier x seconds after jumping off a ramp. Write a function that models the height of the water-skier over time. When is the water-skier 5 feet above the water? How long is the skier in the air?
HOW DO YOU SEE IT? Use the graph to determine whether the average rate of change over each interval is positive, negative, or zero.
a. 0 ≤ x ≤ 2
b. 2 ≤ x ≤ 5
c. 2 ≤ x ≤ 4
d. 0 ≤ x ≤ 4
REPEATED REASONING The table shows the number of tiles in each figure. Verify that the data show a quadratic relationship. Predict the number of tiles in the 12th figure.
Factor the trinomial. (Skills Review Handbook)
x^2 + 4x + 3
x^2 – 3x + 2
3x^2 – 15x + 12
5x^2 + 5x – 30
Quadratic Functions Performance Task: Accident Reconstruction
2.3–2.4 What Did You Learn?
focus, p. 68
directrix, p. 68
Standard Equations of a Parabola with Vertex at the Origin, p. 69
Standard Equations of a Parabola with Vertex at (h, k), p. 70
Writing Quadratic Equations, p. 76
Writing Quadratic Equations to Model Data, p. 78
Explain the solution pathway you used to solve Exercise 47 on page 73.
Explain how you used definitions to derive the equation in Exercise 53 on page 74.
Explain the shortcut you found to write the equation in Exercise 25 on page 81.
Describe how you were able to construct a viable argument in Exercise 28 on page 81.
Performance Task
Was the driver of a car speeding when the brakes were applied? What do skid marks at the scene of an accident reveal about the moments before the collision?
To explore the answers to these questions and more, go to BigIdeasMath.com.
Quadratic Functions Chapter Review
Let the graph of g be a horizontal shrink by a factor of \(\frac{2}{3}\), followed by a translation 5 units left and 2 units down of the graph of f(x) = x^2.
Let the graph of g be a translation 2 units left and 3 units up, followed by a reflection in the y-axis of the graph of f(x) = x^2 – 2x.
Graph the function. Label the vertex and axis of symmetry. Find the minimum or maximum value of f. Describe where the function is increasing and decreasing.
g(x) = -2x^2 + 16x + 3
h(x) = (x – 3)(x + 7)
You can make a solar hot-dog cooker by shaping foil-lined cardboard into a parabolic trough and passing a wire through the focus of each end piece. For the trough shown, how far from the bottom should the wire be placed?
Graph the equation 36y = x^2. Identify the focus, directrix, and axis of symmetry.
Write an equation of the parabola with the given characteristics.
directrix: x = 2
Write an equation for the parabola with the given characteristics.
passes through (1, 12) and has vertex (10, -4)
passes through (4, 3) and has x-intercepts of -1 and 5
passes through (-2, 7), (1, 10), and (2, 27)
The table shows the heights y of a dropped object after x seconds. Verify that the data show a quadratic relationship. Write a function that models the data. How long is the object in the air?
Quadratic Functions Chapter Test
A parabola has an axis of symmetry y = 3 and passes through the point (2, 1). Find another point that lies on the graph of the parabola. Explain your reasoning.
Let the graph of g be a translation 2 units left and 1 unit down, followed by a reflection in the y-axis of the graph of f(x) = (2x + 1)^2 – 4. Write a rule for g.
Identify the focus, directrix, and axis of symmetry of x = 2y^2. Graph the equation.
Explain why a quadratic function models the data. Then use a linear system to find the model.
Write an equation of the parabola. Justify your answer.
A surfboard shop sells 40 surfboards per month when it charges $500 per surfboard. Each time the shop decreases the price by $10, it sells 1 additional surfboard per month. How much should the shop charge per surfboard to maximize the amount of money earned? What is the maximum amount the shop can earn per month? Explain.
Graph f(x) = 8x^2 – 4x + 3. Label the vertex and axis of symmetry. Describe where the function is increasing and decreasing.
Sunfire is a machine with a parabolic cross section used to collect solar energy. The Sun's rays are reflected from the mirrors toward two boilers located at the focus of the parabola. The boilers produce steam that powers an alternator to produce electricity.
a. Write an equation that represents the cross section of the dish shown with its vertex at (0, 0).
b. What is the depth of Sunfire? Justify your answer.
In 2011, the price of gold reached an all-time high. The table shows the prices (in dollars per troy ounce) of gold each year since 2006 (t = 0 represents 2006). Find a quadratic function that best models the data. Use the model to predict the price of gold in the year 2016.
Quadratic Functions Cumulative Assessment
You and your friend are throwing a football. The parabola shows the path of your friend's throw, where x is the horizontal distance (in feet) and y is the corresponding height (in feet). The path of your throw can be modeled by h(x) = −16x^2 + 65x + 5. Choose the correct inequality symbol to indicate whose throw travels higher. Explain your reasoning.
The function g(x) = \(\frac{1}{2}\)|x − 4| + 4 is a combination of transformations of f(x) = |x|. Which combinations describe the transformation from the graph of f to the graph of g?
A. translation 4 units right and vertical shrink by a factor of \(\frac{1}{2}\), followed by a translation 4 units up
B. translation 4 units right and 4 units up, followed by a vertical shrink by a factor of \(\frac{1}{2}\)
C. vertical shrink by a factor of \(\frac{1}{2}\) , followed by a translation 4 units up and 4 units right
D. translation 4 units right and 8 units up, followed by a vertical shrink by a factor of \(\frac{1}{2}\)
Your school decides to sell tickets to a dance in the school cafeteria to raise money. There is no fee to use the cafeteria, but the DJ charges a fee of $750. The table shows the profits (in dollars) when x students attend the dance.
a. What is the cost of a ticket?
b. Your school expects 400 students to attend and finds another DJ who only charges $650. How much should your school charge per ticket to still make the same profit?
c. Your school decides to charge the amount in part (a) and use the less expensive DJ. How much more money will the school raise?
Order the following parabolas from widest to narrowest.
A. focus: (0, −3); directrix: y = 3
B. y = \(\frac{1}{16}\)x^2 + 4
C. x = \(\frac{1}{8}\)y^2
D. y = \(\frac{1}{4}\)(x − 2)^2 + 3
Your friend claims that for g(x) = b, where b is a real number, there is a transformation in the graph that is impossible to notice. Is your friend correct? Explain your reasoning.
Let the graph of g represent a vertical stretch and a reflection in the x-axis, followed by a translation left and down of the graph of f(x) = x^2. Use the tiles to write a rule for g.
Two balls are thrown in the air. The path of the first ball is represented in the graph. The second ball is released 1.5 feet higher than the first ball and after 3 seconds reaches its maximum height 5 feet lower than the first ball.
a. Write an equation for the path of the second ball.
b. Do the balls hit the ground at the same time? If so, how long are the balls in the air? If not, which ball hits the ground first? Explain your reasoning.
Let the graph of g be a translation 3 units right of the graph of f. The points (−1, 6), (3, 14), and (6, 41) lie on the graph of f. Which points lie on the graph of g?
A. (2, 6)
B. (2, 11)
C. (6, 14)
D. (6, 19)
E. (9, 41)
F. (9, 46)
Gym A charges $10 per month plus an initiation fee of $100. Gym B charges $30 per month, but due to a special promotion, is not currently charging an initiation fee.
a. Write an equation for each gym modeling the total cost y for a membership lasting x months.
b. When is it more economical for a person to choose Gym A over Gym B?
c. Gym A lowers its initiation fee to $25. Describe the transformation this change represents and how it affects your decision in part (b).
MitoCore: a curated constraint-based model for simulating human central metabolism
Anthony C. Smith, Filmon Eyassu, Jean-Pierre Mazat & Alan J. Robinson
The complexity of metabolic networks can make the origin and impact of changes in central metabolism occurring during diseases difficult to understand. Computer simulations can help unravel this complexity, and much progress has been made with genome-scale metabolic models. However, many models produce unrealistic results when challenged to simulate abnormal metabolism, as they include incorrect specification and localisation of reactions and transport steps, incorrect reaction parameters, and confounding of prosthetic groups and free metabolites in reactions. Other common drawbacks are due to their scale, making them difficult to parameterise and their simulation results hard to interpret. Therefore, it remains important to develop smaller, manually curated models.
We present MitoCore, a manually curated constraint-based computer model of human metabolism that incorporates the complexity of central metabolism and simulates this metabolism successfully under normal and abnormal physiological conditions, including hypoxia and mitochondrial diseases. MitoCore describes 324 metabolic reactions, 83 transport steps between mitochondrion and cytosol, and 74 metabolite inputs and outputs through the plasma membrane, to produce a model of manageable scale for easy interpretation of results. Its key innovations include a more accurate partitioning of metabolism between cytosol and mitochondrial matrix; better modelling of connecting transport steps; differentiation of prosthetic groups and free co-factors in reactions; and a new representation of the respiratory chain and the proton motive force. MitoCore's default parameters simulate normal cardiomyocyte metabolism, and to improve usability and allow comparison with other models and types of analysis, its reactions and metabolites have extensive annotation, and cross-reference identifiers from Virtual Metabolic Human database and KEGG. These innovations—including over 100 reactions absent or modified from Recon 2—are necessary to model central metabolism more accurately.
We anticipate that MitoCore will serve as a research tool for scientists, from experimentalists looking to interpret their data and test hypotheses, to experienced modellers predicting the consequences of disease or using computationally intensive methods that are infeasible with larger models, as well as a teaching tool for those new to modelling who need a small, manageable model on which to learn and experiment.
Human central metabolism is a large and complex system under sensitive homeostatic control, and its disturbance is causative or associated with many diseases and responses to toxins. However, it is often difficult to relate more than a handful of these changes to their underlying origin or their down-stream impact, due to the highly connected nature of the reactions of central metabolism. Computer models are widely accepted in many fields as a tool to incorporate complexity and simulate changes, allowing predictions to be made and providing a unifying framework to interpret empirical data, especially from large, noisy and incomplete data sets. Yet modelling is treated with scepticism by many biomedical researchers despite its potentially broad utility [1]. Simple models of enzyme kinetics (using the assumptions of Henri-Michaelis-Menten kinetics [2]) are familiar to biomedical scientists, but are impractical for simulations of central metabolism because every reaction needs parameterisation, alongside the computational expense of solving the large set of differential equations. However, constraint-based models of metabolism used in conjunction with methods such as flux balance analysis [3] are particularly useful for simulating metabolic changes in large metabolic networks, as they can incorporate flexibility, do not require kinetic parameters and are computationally inexpensive. Many genome-scale constraint-based models [4,5,6,7,8,9,10] have covered central metabolism and have been used successfully to model diseases [11, 12]. But these models do not simulate a realistic production rate of ATP (with the recent exception of Recon 2.2 [10]), a crucial element of modelling central metabolism. Furthermore, the interpretation of simulation results from thousands of reactions is difficult (especially for newcomers). In addition, attempts to simulate diseases can result in the prediction of physiologically improbable reaction fluxes due to erroneous "short-circuits" and energy-generating cycles [13]. These are caused by several common problems including: incorrect parameters for directionality constraints, the assignment of reactions to the wrong cellular compartments, or inaccurate representations of pathways, enzymes, transport steps, prosthetic groups and metabolites. These errors can introduce unrealistic bypasses and shuttles that appear to compensate for a disease state. Examples include proton-coupled mitochondrial transporters running in reverse and thus pumping protons that contribute to ATP generation by the mitochondrial ATP synthase, and the confounding of free co-factors with prosthetic groups, especially the flavin adenine dinucleotide of mitochondrial succinate dehydrogenase (SDH) and the electron-transferring flavoprotein (ETF), leading to incorrect electron transport between isolated complexes mediated by bound FAD/FADH. These problems are common in genome-scale models that include an initial auto-generation of the reaction network from databases that can include incomplete or incorrect annotation. These issues are particularly acute for modelling mitochondrial metabolism and metabolite transport, as all the current genome-scale models neglect the electrical gradient component (ΔΨ) of the proton motive force (PMF), and the correct proton cost of making ATP by the mitochondrial ATP synthase in animals [14].
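For reference, the flux balance analysis mentioned above computes a steady-state flux distribution v over the stoichiometric matrix S by solving a linear programme; this is the standard formulation of the method, not anything specific to MitoCore:

$$ \underset{v}{\max }\ {c}^{\mathrm{T}}v\kern1em \mathrm{subject}\ \mathrm{to}\kern1em Sv=0,\kern1em {v}_{\mathrm{min}}\le v\le {v}_{\mathrm{max}} $$

where the vector c selects the objective (for example, flux through an ATP-demand reaction) and the bounds v_min and v_max encode the constraints, such as maximal substrate uptake rates and reaction irreversibility.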
It is also sometimes questionable whether the enormous size and complexity of these genome-scale models benefits simulations where only subsystems of cellular metabolism are of interest, such as central metabolism. Furthermore, the scale of genome-scale models can make some techniques computationally infeasible, such as elementary mode analysis [15], and the longer runtimes for their simulations and the complexity of the results can hinder exploratory analyses and hypothesis testing.
Many problems of genome-scale models can be avoided by using smaller, curated models validated against data from normal and disease metabolism. A more focussed and carefully defined model allows the user to examine and be confident in each reaction, and more clearly elucidate the behaviour of the system, including any short-comings, as well as interpret the results more easily. These 'core' models have been shown to be useful in a range of areas [16, 17] and we previously applied this approach for our iAS253 model of the mitochondrion, which we used to simulate metabolic diseases of the tricarboxylic acid cycle [18]. This model was then used as a basis to simulate other disorders including hypoxia during cardiac ischemia [19], fumarate hydratase deficiency [20] and common diseases of the mitochondrial electron transport chain [21]. These simulation results were used to generate detailed mechanistic hypotheses for data interpretation and to design further experiments. However, we recognised that this model could be improved upon by constructing a new model encompassing more of central metabolism and explicitly modelling physiochemical features such as the mitochondrial proton motive force. In particular, ease-of-use could be improved by providing extensive annotation of reactions and their parameters.
Here we present MitoCore, a new constraint-based model of central metabolism that addresses these issues and comprehensively expands upon and refines our previous mitochondrial models. This model has been designed to be easy-to-use, includes extensive annotation, has default parameters to simulate human cardiomyocyte metabolism, and is encoded in the widely used SBML format [22]. We anticipate the model will be of great use to those wishing to interpret empirical data by comparing it to simulations of central metabolism and thus investigate predictive models of disease and toxicology.
Building the MitoCore model of human central metabolism
We designed MitoCore as a constraint-based model of central metabolism with two compartments; one representing the cytosol, outer mitochondrial membrane, inter-membrane space and cytosolic side of the inner mitochondrial membrane, and the other the mitochondrial side of the inner membrane and the mitochondrial matrix. The accuracy and utility of a metabolic model is dependent upon the reactions included and the correct partitioning of these reactions between compartments. Thus to create MitoCore, we built a list of candidate reactions to include by considering human reactions in the KEGG [23], HumanCyc [24] and BRENDA [25] databases that use any metabolites involved in central metabolism, and assigned each reaction to the appropriate cellular compartment(s) by assessing the localisation evidence collated in the MitoMiner database [26] for its catalysing protein. For reactions catalysed by enzymes with a large amount of evidence for mitochondrial localisation but lacking specific evidence for being in the mitochondrial matrix or matrix side of the inner membrane, we applied the principle of metabolite availability [18]. A summary of this localisation evidence is provided in the mitochondrial evidence section of the supplementary annotation file (Additional file 1, 'Reaction & Fluxes' worksheet) and consists of confidence scores from the MitoCarta 2 [27] inventory of genes that encode mitochondrial proteins. Reaction directionality was assigned by taking the consensus from annotation in metabolic databases, estimates of Gibbs free energy [28, 29], and general rules of irreversibility [30]. For each reaction extensive additional annotation was recorded including the original KEGG identifiers, EC number, description, gene mappings (both HUGO gene symbol and Ensembl identifiers), and evidence for the gene's expression in heart and the protein's mitochondrial localisation.
The partitioning of metabolism between the mitochondrion and the cytosol logically led us to consider how to model the proton motive force (PMF) and the role of protons crossing the inner mitochondrial membrane as part of oxidative phosphorylation, which produces the majority of cellular ATP. The PMF is generated by complexes I, III and IV pumping matrix protons across the mitochondrial inner membrane and into the intermembrane space. This proton-pumping creates a PMF across the membrane that has two components: a proton gradient (ΔpH) coupled with an electrical membrane potential (ΔΨ). The energy for proton-pumping comes from the transfer of electrons down the respiratory chain from NADH and ubiquinone to oxygen, to form water. Additional electrons are passed into the respiratory chain from the TCA cycle by complex II, and from the degradation of fatty acids and amino acids by the electron-transfer flavoprotein (ETF). Mitochondrial ATP synthase uses the PMF to power ATP synthesis from ADP and phosphate by channelling protons back across the inner mitochondrial membrane. It is thus necessary to distinguish, as Peter Mitchell did [31], the protons involved in chemical reactions taking place in an isolated compartment ("scalar protons") from the protons crossing between compartments ("vectorial protons"). Therefore, to represent the PMF and distinguish between scalar and vectorial protons, we modelled the PMF in MitoCore as a metabolite that is co-transported in steps that transport charged metabolites or protons across the inner mitochondrial membrane. We accounted for the relative contributions of both ΔΨ and ΔpH to the overall PMF by co-transporting 0.82 PMF metabolites for transport steps that affect ΔΨ, and 0.18 PMF metabolites for transport steps that affect ΔpH. These values are the average of published figures of the relative contributions of ΔpH and ΔΨ by several authors (see Additional file 2 for details and references). Therefore, the reactions that represent complexes I, III and IV of the respiratory chain move PMF metabolites that correspond to the number of protons they pump from the matrix to the cytosol, as a proton in this case affects both ΔΨ and ΔpH. For example, mitochondrial ATP synthase needs to transport 2.7 PMF metabolites back to the matrix to synthesise one molecule of ATP (as it uses 2.7 protons per molecule of ATP [14]). We modelled electrogenic and proton-coupled transport steps between the two compartments in the same way; for example, the mitochondrial ATP/ADP carrier 1 (SLC25A4) requires 0.82 PMF to be co-transported with each imported ADP3− and exported ATP4− nucleotide to reflect the charge difference, which affects only ΔΨ (Eq. 1), whereas the proton-coupled phosphate carrier (SLC25A3) imports 0.18 PMF as overall transport is electro-neutral and so affects only ΔpH (Eq. 2):
$$ {\mathrm{ATP}}_{\mathrm{mito}}+{\mathrm{ADP}}_{\mathrm{cytosolic}}+0.82\ {\mathrm{PMF}}_{\mathrm{cytosolic}}\rightarrow {\mathrm{ATP}}_{\mathrm{cytosolic}}+{\mathrm{ADP}}_{\mathrm{mito}}+0.82\ {\mathrm{PMF}}_{\mathrm{mito}} $$
$$ {{\mathrm{H}}^{+}}_{\mathrm{cytosolic}}+0.18\ {\mathrm{PMF}}_{\mathrm{cytosolic}}+{\mathrm{Pi}}_{\mathrm{cytosolic}}\rightarrow {{\mathrm{H}}^{+}}_{\mathrm{mito}}+0.18\ {\mathrm{PMF}}_{\mathrm{mito}}+{\mathrm{Pi}}_{\mathrm{mito}} $$
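As an illustration of how such PMF-coupled transport steps can be encoded, the following is a minimal sketch using the COBRApy toolbox. It is not the authors' code, and all identifiers (compartment labels, metabolite and reaction IDs) are illustrative placeholders rather than MitoCore's actual identifiers:

```python
# Minimal sketch (assumed identifiers, not MitoCore's) of a PMF-coupled
# transport step encoded with COBRApy.
from cobra import Model, Metabolite, Reaction

model = Model("pmf_demo")

# Metabolites in the mitochondrial matrix ("m") and cytosol ("c")
atp_m = Metabolite("atp_m", compartment="m")
atp_c = Metabolite("atp_c", compartment="c")
adp_m = Metabolite("adp_m", compartment="m")
adp_c = Metabolite("adp_c", compartment="c")
pmf_m = Metabolite("pmf_m", compartment="m")  # PMF pseudo-metabolite
pmf_c = Metabolite("pmf_c", compartment="c")

# ATP4-/ADP3- exchange is electrogenic, so it consumes 0.82 PMF
# (the delta-psi share of the proton motive force), as in Eq. 1 above.
ant = Reaction("ANT_demo")
ant.add_metabolites({
    atp_m: -1.0, adp_c: -1.0, pmf_c: -0.82,
    atp_c: 1.0, adp_m: 1.0, pmf_m: 0.82,
})
ant.bounds = (0.0, 1000.0)  # one direction only: ATP out, ADP in
model.add_reactions([ant])
print(ant.reaction)  # prints the stoichiometry of the encoded step
```

Because the PMF pseudo-metabolite must balance at steady state like any other metabolite, flux balance analysis automatically charges every electrogenic or proton-coupled transport step against the proton-pumping of the respiratory chain.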
We next modelled the remaining transport steps that connect the compartments, and the influx and efflux of metabolites to the cytoplasm. We defined four transport categories. The first was for transport steps based on the characterised mitochondrial transport proteins. Many carriers can transport a range of related substrates (although with different affinities), and we modelled each metabolite combination as a separate step, including counter-exchange and proton-coupling. In some cases, proton-coupled transporters were represented by two reactions to model the forward and reverse directions separately, with a proton (plus the co-transported PMF metabolite) used only for movement down the proton gradient. This prevented transporters being used in reverse to pump protons artificially (and so transferring PMF) and thereby unrealistically contributing to ATP production by the mitochondrial ATP synthase. The second category was for metabolites whose transporters are unidentified. We modelled these steps as uniporters that are not proton coupled. If the metabolite was charged and moving into the mitochondrial matrix, we assumed this would impact ΔΨ and co-transported 0.82 PMF metabolites accordingly. The third category was for metabolites that can diffuse across the inner mitochondrial membrane—including oxygen, carbon dioxide and water—and these were modelled as reversible uniport transport steps. Finally, we modelled the insertion of lipids into the mitochondrial inner membrane via the flippases, which was coupled to ATP hydrolysis.
To account for the generation of reactive oxygen species (ROS) from the respiratory chain in MitoCore, we defined their production as 0.001% of the flux through complex I, the primary site of ROS generation in the respiratory chain, which correspondingly reduces the efficiency of its proton-pumping [32, 33]. Subsequent reactions model ROS being converted to water at the expense of NADPH.
Various reactions defined in KEGG do not distinguish between free and prosthetic redox co-factors, e.g. FAD/FADH, which can lead to unrealistic electron transport being mediated between prosthetic redox co-factors. To prevent this in MitoCore, we removed prosthetic co-factors from reactions (such as those of complex II and the ETF) and instead coupled these reactions directly, e.g. to the reduction of ubiquinone to ubiquinol.
For convenience in setting up simulations, we created four 'pseudo' reactions that summarise aspects of MitoCore's biological activity and can be used by flux balance analysis as objective functions: ATP hydrolysis (representing cellular ATP demand), and the biosynthesis of heme, lipids and amino acids.
To enable comparison of results from MitoCore to those of genome-scale models such as Recon 2 [5], we re-used identifiers for metabolites and reactions present in the Virtual Metabolic Human database (https://vmh.uni.lu) for MitoCore where possible. However, it was necessary to create 105 new reactions for MitoCore (Additional file 3) that were either absent from the Virtual Metabolic Human database and Recon 2.2 [10] (such as transport steps or compartment-specific versions), inaccurately described (such as specifying prosthetic FAD as a free co-factor), or needed to represent new features (such as the proton motive force). New reaction identifiers were appended with the suffix 'MitoCore'. Pathways represented in MitoCore include glycolysis, the pentose phosphate pathway, the TCA cycle, the electron transport chain, synthesis and oxidation of fatty acids, and ketone body and amino acid degradation, and cover all parts of central metabolism involved directly or indirectly in ATP production. Finally, MitoCore was extensively tested to ensure: it contained no erroneous energy-generating cycles; its simulation results were consistent with disorders we investigated previously, such as ischemia and mitochondrial diseases [18,19,20,21]; and each reaction was capable of carrying a flux (depending on the constraints placed on the cytosolic boundary transport steps), and so contained no reaction dead-ends.
To enable others to follow our reasoning for assigning reactions to specific compartments and their directionality, we recorded the provenance for reactions (Additional file 1). For example, the directionality evidence section describes why a constraint has been set: a manually evaluated consensus of information from the KEGG [23], HumanCyc [24] and BRENDA [25] databases, general rules of irreversibility [30], large ΔG values from eQuilibrator [29] or estimated using a group contribution method [28], and information from the literature. In cases for which reaction directionality was unclear, the reaction was kept reversible. To record why a reaction has been included in our cardiomyocyte model, we included a heart expression section consisting of RNAseq and immunochemistry expression levels of genes taken from the Human Protein Atlas (version 14) [9]. The spreadsheet also includes gene mappings, identifiers from Recon 2 and KEGG, mitochondrial localisation evidence (as described above) and baseline reaction fluxes when the objective function was maximum ATP production under normal conditions. For distribution, we encoded MitoCore in SBML [22] (Additional file 4) and produced a companion annotation Excel spreadsheet (Additional file 1).
Simulating cardiomyocyte metabolism using MitoCore and flux balance analysis
MitoCore's default reactions and parameters are optimised for cardiomyocytes and use the metabolites available to healthy hearts: glucose, fatty acids, ketone bodies and amino acids (references listed in Additional file 1). To demonstrate that MitoCore produces physiologically relevant results with these parameters, we simulated cardiomyocyte metabolism by using flux balance analysis (FBA) [3]. To reflect the primary role of central metabolism in cardiomyocytes, we set the simulation's objective as maximum ATP production and calculated the optimum reaction fluxes through central metabolism. The resultant reaction fluxes simulated core metabolism correctly, with activity of all respiratory complexes, the TCA cycle and the malate-aspartate shuttle (Fig. 1, Additional file 1). As metabolic fuels were provided in slight excess, the availability of oxygen limited the overall fluxes. Simulated ATP production was 100.9 μmol/min/g of dry weight. Sources of acetyl-CoA for the TCA cycle were fatty acid degradation (55.0%), glucose oxidation (26.4%), lactate oxidation (8.4%), ketone body degradation (6.1%), amino acid degradation (3.8%) and glycerol oxidation (0.3%). The amino acids degraded and used to produce ATP were histidine, isoleucine, leucine, lysine, threonine, valine, arginine, aspartate, cysteine, glycine, proline, serine, asparagine, and alanine. Ammonia, produced as a by-product of amino acid degradation, was exported from the system. To determine the robustness of the FBA simulation results with these parameters, we performed Flux Variability Analysis (FVA) at 100% and 98% of the optimal solution (see Methods and Additional file 1).
Summary of the major active pathways of central metabolism in the flux balance analysis simulation of the MitoCore model with default parameters and the objective function of maximum ATP production. Values of all fluxes are reported in Additional file 1.
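A simulation of this kind can be sketched in a few COBRA Toolbox calls, shown below; 'OF_ATP_MitoCore' is a placeholder for the actual identifier of the ATP-hydrolysis pseudo reaction, which is an assumption rather than a confirmed ID.

```matlab
% Sketch: maximum ATP production under default cardiomyocyte parameters,
% followed by FVA at 98% of the optimum to assess robustness.
model = readCbModel('MitoCore.xml');
model = changeObjective(model, 'OF_ATP_MitoCore');  % placeholder identifier

sol = optimizeCbModel(model, 'max');                % flux balance analysis
fprintf('Max ATP production: %.1f umol/min/g dry weight\n', sol.f);

[minFlux, maxFlux] = fluxVariability(model, 98);    % flux ranges at 98% optimum
tight = (maxFlux - minFlux) < 1e-6;                 % uniquely determined fluxes
```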
Metabolite degradation and ATP yields with MitoCore and Recon 2.2
To demonstrate that different metabolites are degraded appropriately in the MitoCore model, and the effect on ATP production of implementing the proton motive force, we performed a series of simulations using particular 'fuel' metabolites in isolation. We simulated the oxidation and degradation of glucose, lactate, hexadecanoic acid, hydroxybutanoate, acetoacetate and 20 different amino acids in separate simulations (Additional file 5). The simulations showed that in each case the degradation routes used were biologically plausible. Unsurprisingly, fatty acids were the most energy-rich fuels (ATP production of 112 μmol/min/g of dry weight), followed by the amino acids tryptophan (43 μmol/min/g of dry weight), isoleucine (38 μmol/min/g of dry weight), leucine (37 μmol/min/g of dry weight) and phenylalanine (36 μmol/min/g of dry weight), and then glucose (33 μmol/min/g of dry weight).
To compare the simulations by MitoCore with those from a genome-scale model, we performed the same metabolite degradation and ATP yield simulations with the human genome-scale model Recon 2.2 [10] (Table 1). The Recon 2.2 model used biologically plausible routes in the degradation of many metabolites, such as histidine, isoleucine, leucine, lysine, phenylalanine, threonine, tryptophan, valine, arginine, cysteine, glutamate, glutamine, glycine, and tyrosine. However, non-canonical pathways were used with several other metabolites. For example, in glucose metabolism, Recon 2.2 produced lactate from pyruvate, transported this lactate into the mitochondrion and then converted it back to pyruvate, bypassing both the pyruvate carrier and the malate-aspartate shuttle. Lactate degradation followed a similar route. For fatty acid degradation, the Recon 2.2 simulation used summary reactions for β-oxidation (instead of the individual degradation steps that are also present in the model) that have FADH as a cofactor. The electrons from this FADH then incorrectly entered the electron transport chain via the prosthetic FADH of one of the reactions representing mitochondrial succinate dehydrogenase (SDH), which also resulted in the two reactions representing the activity of the SDH complex carrying different fluxes. In addition, the carnitine shuttle was inactive in Recon 2.2 simulations. In some cases, the degradation of particular amino acids used reactions that are unlikely to be present in humans, such as 3-hydroxypropionate dehydrogenase (EC 1.1.1.59, Recon 2.2 reaction ID: r0365) used in methionine degradation, or used a reaction in a direction that is unlikely in vivo, such as for pyrroline-5-carboxylate reductase 1 (EC 1.5.1.2, Recon 2.2 reaction ID: PRO1xm) in proline degradation.
Table 1 Comparison of maximum ATP yields from different carbon source 'fuel' metabolites
Simulating the mitochondrial disease fumarase deficiency with MitoCore and Recon 2.2
To compare the ability of MitoCore and Recon 2.2 to model diseases of central metabolism, we performed preliminary simulations of the mitochondrial disease fumarase deficiency (OMIM:606812). Fumarase deficiency is a mitochondrial disease resulting from a defect in the fumarate hydratase gene (ENSG00000091483), which encodes both the cytosolic and mitochondrial isoforms of the enzyme that converts fumarate to malate—a key step in the TCA cycle. This defect can leave patients with almost no residual enzyme activity and causes developmental delay, severe mental retardation, seizures and dysmorphic facial features [34,35,36,37]. A key diagnostic marker for the disease is the presence of fumarate in the cerebrospinal fluid.
To simulate the disease in each model, we used the default cardiomyocyte uptake parameters of MitoCore (replicating them in Recon 2.2) and inactivated both the cytosolic and mitochondrial reactions that represent the enzyme. The MitoCore model showed a 72% reduction in ATP production upon deletion of fumarate hydratase compared to the default model (Fig. 2, Additional file 6). Flux through the TCA cycle continued at a lower level until reaching fumarate, which was effluxed. To bridge the break in the TCA cycle due to the inactive fumarate hydratase and allow the TCA cycle to complete, oxaloacetate was produced from pyruvate (by using mitochondrial pyruvate carboxylase) and via the malate-aspartate shuttle, and fed into the TCA cycle. Amino acid degradation further bolstered flux through the TCA cycle. All these compensatory reactions seemed biologically plausible. In contrast, Recon 2.2 showed a much smaller drop in ATP production of 17% upon inactivation of fumarate hydratase activity. This was largely due to compensation by a reaction that represents citrate (pro-3S)-lyase (Recon 2.2 reaction ID: CITL, EC: 4.1.3.6), which allowed the degradation of fatty acids and ketone bodies to continue at a higher rate than was possible in the MitoCore model by converting citrate to oxaloacetate and acetate, which was effluxed. However, citrate (pro-3S)-lyase is absent from humans, and the gene assigned to this reaction in the Recon 2.2 model has been characterised to convert glyoxylate and acetyl-CoA to malate [38].
Summary of the compensatory pathways used by the MitoCore and Recon 2.2 models when simulating fumarase deficiency with the objective function of maximum ATP. (Red arrows represent reactions active in both models; purple arrows are only active in the MitoCore model; and orange arrows are only active in the Recon 2.2 model)
The effect of proton leak on the electron transport chain in MitoCore
To show that the MitoCore model can simulate experimentally induced conditions that are directly affected by the proton motive force, we performed a series of simulations representing an increasing proton leak across the mitochondrial inner membrane. We introduced a flux through the transport step that represents the leak of matrix protons to the cytosol through the uncoupling protein 2 (UCP2) in cardiomyocytes [39]. The optimum reaction fluxes for maximum ATP production (Additional file 7) were identical to those under default constraints, with the exception of the steps involved in mitochondrial ATP synthesis. Maximum ATP production fell as the proton leak increased, owing to the progressively reduced flux through mitochondrial ATP synthase, until the synthase reversed under very high leak (Fig. 3) and consumed ATP synthesized in the TCA cycle. At the highest level of proton leak, the flux also reversed through the mitochondrial ATP/ADP carrier and the phosphate carrier to import additional ATP from glycolysis.
The effect of increasing proton leak through UCP2 during simulations of the MitoCore model on: maximum ATP production, flux through mitochondrial ATP synthase, flux through the mitochondrial ATP/ADP transporter, and flux through the mitochondrial phosphate transporter. The flux through ATP synthase is distinct from all the other fluxes that are superimposed, because there is also synthesis of mitochondrial ATP from the TCA cycle
Here we present MitoCore, a curated, constraint-based model of human central metabolism, designed as a predictive model of metabolism in disease and toxicology, and for use by a wide range of researchers. It covers all major pathways involved in central metabolism using 407 reactions and mitochondrial transport steps, and 74 transport steps over the plasma membrane. To increase the metabolic flexibility of MitoCore, we included a large number of reactions that were not assigned to classical metabolic pathways, but could have potentially important roles in supporting central metabolism. MitoCore was parameterised and annotated for cardiomyocyte metabolism, which is useful for many types of analyses as the cardiomyocyte can metabolise a wide range of substrates and has reactions common to many other cell types, as well as representing the metabolism of an organ of utmost importance in human health, disease and toxicology. This allows the simulation results to be generalisable, without having features that are particularly cell specific, such as those found only in hepatocytes. However, we have included reactions that are inactivated in the default cardiac model, but can be activated to represent the metabolic capabilities of other cell types, e.g. gluconeogenesis, ketogenesis, β-alanine synthesis and folate degradation. Thus, the model allows biologically relevant flux distributions to be generated 'out of the box' without altering the model, while allowing for easy modification to represent metabolism in other cell types.
MitoCore has several unique features. The first is the more accurate partitioning of metabolism between the mitochondrion and the cytosol by using extensive localisation data and annotation. This partitioning has an important impact on model behaviour as the limited transport steps into the mitochondrial matrix result in dramatic differences in metabolite availability in the matrix compared to the cytosol. Therefore, it is important that reactions are assigned to the correct compartment. We achieved this by manually evaluating the subcellular localisation of each reaction's catalysing protein by using the mitochondrial localisation evidence in the MitoMiner database [26].
To model oxidative phosphorylation in MitoCore, we devised a new representation of the PMF and mitochondrial respiratory chain—the second unique feature of MitoCore. MitoCore's representation of the respiratory chain differs in many key aspects from other metabolic models, owing to how it models vectorial protons and accounts for both components of the PMF. MitoCore represents the PMF as a metabolite that is co-transported in steps that transport charged metabolites or protons across the inner mitochondrial membrane, such as the reactions of the respiratory complexes, and in proton-coupled and electrogenic transport steps. This separate modelling of vectorial protons using an additional new PMF metabolite enabled the impact of electrogenic transporters on the PMF to be accounted for in flux balance analysis simulations for the first time, and prevented simulation artefacts where (scalar) protons generated or removed by other parts of metabolism allow unrealistically high ATP production.
The third unique feature of MitoCore is the physicochemical modelling of the transport steps that connect the cytosolic and mitochondrial compartments, including their impact on the PMF, and of the influx and efflux of metabolites over the plasma membrane. Eighty-three transport steps connect the two compartments, of which 30 are modelled on the known transport mechanisms of characterised transport proteins of the inner mitochondrial membrane, whereas the other 53 represent known transport capabilities of the membrane, such as diffusion of small metabolites. A further 74 transport steps at the cytosolic boundary represent the import and export of metabolites across the plasma membrane, such as oxygen, carbon dioxide, glucose, fatty acids and amino acids.
MitoCore improves over other models by distinguishing between free and prosthetic redox co-factors. This keeps separate the electrons entering the respiratory chain from different sources, which can otherwise become connected via, e.g., a shared flavin adenine dinucleotide metabolite; this is particularly relevant under perturbed conditions, where the erroneous connection of free and prosthetic flavin adenine dinucleotides can cause unrealistic electron transport bypasses to occur. This problem is endemic in large-scale models that auto-generate reaction networks directly from metabolic databases without manual curation of reactions.
Other improvements to MitoCore included modelling protein complexes as one reaction rather than a series of linked reactions, as can often be found in metabolic pathway databases and other models. This is particularly important for simulations of gene knockouts, as a gene deficiency will disrupt the activity of the whole protein complex rather than just one of its subunit's reactions. Further, to facilitate gene-based analyses (such as knock out studies) we provide reaction-to-gene mappings with both gene symbols and Ensembl identifiers. Finally, to summarise different aspects of central metabolism during simulations with MitoCore, we defined four 'pseudo' reactions for biosynthesis of amino acids, lipids and heme that are required by cells for maintenance and growth, and ATP hydrolysis to reflect the energy demand of the cell. These pseudo reactions are designed to be used as objective functions during flux balance analysis and can determine if particular functions of central metabolism have become impaired during simulations of disease.
A weakness when it comes to interpreting many metabolic models is the lack of provenance for their components: for example, why reactions have been included, why directionality constraints have been set, or the origin of reaction parameters. Thus for MitoCore we created a supplementary annotation spreadsheet (Additional file 1) to record this provenance. The spreadsheet also serves as a useful template onto which to map flux distributions, as reaction fluxes can be grouped in an intuitive way directly against useful supplementary information. Combined with the small size of the model, this means simulations can be generated quickly and then easily interpreted.
To demonstrate that MitoCore produces physiologically realistic results, we simulated cardiomyocyte metabolism using the default parameters with flux balance analysis [3] and the objective of maximum ATP production. The reaction fluxes showed central metabolism was modelled correctly, with the largest fluxes through the TCA cycle and respiratory chain. Numerous fuel sources (fatty acids, glucose, lactate, ketone bodies and amino acids) were imported, degraded and entered the TCA cycle at several different points (Fig. 1). The combination of different fuel metabolites used in the simulations showed similar trends to experimental ranges in well-perfused heart (although the reported ranges are large, reflecting the wide range of metabolites that the heart can potentially use). For example, experimental measurements of the sources of acetyl-CoA for the TCA cycle [40] report that the majority of acetyl-CoA (60–90%) derives from fatty acids, which is close to the 55% in the MitoCore simulation, whereas glycolysis (including lactate oxidation) accounts for 10–40% of acetyl-CoA compared to 35% in the MitoCore simulation, with the remainder of acetyl-CoA derived from amino acids. Finally, to show the robustness of the simulation results, we performed Flux Variability Analysis (FVA). The ranges of reaction fluxes calculated (Additional file 1) showed a realistic range of values and a lack of flux loops (apart from interrelated mitochondrial transport steps): most of the fluxes were uniquely determined or had a limited range, and the large non-fixed fluxes corresponded to isoenzymes or to transport steps whose fluxes exchange while maintaining a fixed total flux.
To show the importance of explicitly modelling the PMF, we simulated the maximum ATP production achievable using 1 μmol/min/g of dry weight of common metabolic fuels in isolation (effectively calculating their ATP yields) and compared the results to those generated by the genome-scale model Recon 2.2 [10] and from theoretical calculations [41] (Table 1 and Additional file 5). Recon 2.2 is an update of the widely used Recon 2 model [5], which also separates protons used by the electron transport chain and the aspartate-glutamate carrier from the rest of metabolism to allow it to predict ATP yields, unlike its predecessor, Recon 2, which produces infinite ATP under these conditions [10]. However, Recon 2.2 does not directly capture the energetic cost on ATP production of other proton-coupled mitochondrial transport steps or the ΔΨ part of the PMF. In MitoCore each glucose produced 33 ATP, in comparison to 32 calculated theoretically [41] and 32 in Recon 2.2 [10]. For the fatty acid hexadecanoic acid, 112 ATP were produced compared to 108 theoretically [41] and 107 in Recon 2.2. A significant difference between the models is the number of protons required by mitochondrial ATP synthase to produce one molecule of ATP: MitoCore uses 2.7 (based on the structure of the bovine mitochondrial ATP synthase [14]) whereas the theoretical calculations use 2.5 and Recon 2.2 uses 4.0 (presumably to account indirectly for the electroneutral proton-coupled phosphate carrier and electrogenic charge-coupled ATP/ADP exchange). Because MitoCore also considers additional factors that affect ATP production—including the impact and bioenergetic cost on the PMF of all the transport steps, as well as ROS production and removal—we believe our figure is likely to be more accurate than those of both Recon 2.2 and the theoretical calculations. When comparing the flux distributions between MitoCore and Recon 2.2, it appears that despite having similar ATP yields for many metabolites, the pathways used by MitoCore are more biologically reasonable for some metabolites (i.e. using canonical degradation pathways) and lacked some of the unlikely elements found in Recon 2.2, such as using a lactate shuttle instead of the pyruvate carrier during glucose oxidation. In some Recon 2.2 simulations the malate-aspartate shuttle was inactive, presumably because its use has a direct penalty on ATP production, the only transport step defined in Recon 2.2 to do so. MitoCore avoids this problem of unrealistic bypasses by using PMF penalties on all the relevant mitochondrial transport steps.
Further differences could be seen when comparing the results of preliminary simulations of fumarase deficiency (Fig. 2), which showed MitoCore replicating the efflux of fumarate seen in patients while also using plausible compensatory mechanisms; this would provide a promising starting point for further simulations and analysis (such as determining the effect of varying parameters). Recon 2.2 did not replicate the efflux of fumarate, and ATP production was maintained by using a reaction (CITL, a non-ATP-dependent citrate lyase) that instead effluxed acetate. This reaction is unlikely to be present in humans, which suggests the Recon 2.2 model would require further investigation and changes to simulate fumarase deficiency realistically.
An important aspect of modelling is exploring different hypotheses by running multiple simulations with different parameters. This is especially true when simulating mechanisms of disease, which may reveal compensatory mechanisms and the effect of supplying different metabolites to compensate for metabolite deficiencies. Therefore, it is beneficial if simulation runtimes are comparatively short. Even when using computationally expensive types of flux balance analysis, such as geometric FBA, MitoCore takes ~10 s per run, compared to ~3 h per run for Recon 2.2. The short runtime of MitoCore simulations makes the exploration of varying parameters easier and more routine, makes computationally expensive methods feasible, and allows rapid testing of new ideas and hypotheses.
When comparing maximum ATP production using default parameters to that of the previous iAS253 model with the same parameters [18], ATP production was notably lower (101 vs 140 μmol/min/g of dry weight) due to the introduction of PMF metabolites in the electron transport chain and transport steps, demonstrating that this representation has an important effect. The results from the proton leak simulations (Additional file 7) show that the use of metabolites to represent the PMF enables the model to replicate experimental observations under perturbed conditions that would otherwise be impossible, such as the reversal of mitochondrial ATP synthase, the ATP/ADP carrier and the phosphate carrier. Such behaviour is well known in the absence of respiratory chain activity, for instance in ρ0 cells, to maintain a proton motive force in mitochondria [42, 43].
Taken together, these simulation results show the MitoCore model is capable of producing realistic results using a wide range of metabolites while avoiding some of the common problems of genome-scale models, including unlikely shuttling between compartments, reactions that are not compartmentalised correctly, incorrect directionality constraints and the inclusion of reactions that are unlikely to be present in humans. These problems, while understandable given the scale of these models, nevertheless make producing biologically relevant flux distributions difficult, especially under perturbed conditions, and show the continued need for carefully curated 'core' models for some types of simulation. Further, these comparisons illustrate an additional role for 'core' models: the ability to identify and diagnose problem areas for improvement in genome-scale models.
MitoCore is a constraint-based model of human central metabolism, provided with default parameters that provide physiologically realistic reaction fluxes for cardiomyocytes in FBA simulations. The model has several innovations including a new representation of the respiratory chain and proton motive force, and partitioning of reactions to subcellular compartments based on the latest localisation evidence. To achieve an accurate depiction of central metabolism, each of MitoCore's 407 reactions and transport steps was manually evaluated for directionality, expression of its gene in heart, and subcellular localisation of its protein. To allow MitoCore to be easily used and to make the results comparable with other models and compatible with other types of analyses, we used identifiers from the Virtual Metabolic Human database for both reactions and metabolites where possible, and recorded KEGG identifiers in the annotation. To help ease of use, we also provide an annotation spreadsheet that provides gene mappings, localisation and heart expression evidence, and notes on parameter choice. MitoCore is provided in SBML format to be compatible with a wide range of software. We hope MitoCore will be of use as a research tool to a wide range of biomedical scientists and students—from experienced modellers interested in central metabolism or using computationally intensive methods that are infeasible on a genome-scale model, to those new to modelling who would like to begin by using a small manageable model, with application as a predictive model of disease.
Identifying reactions of central metabolism to include in MitoCore
The starting point for the MitoCore model was an updated version of the iAS253 mitochondrial model [21]. The model was expanded by searching for human reactions in KEGG [23], HumanCyc [24] and BRENDA [25] that were missing from the iAS253 model and could impact central metabolism. Each reaction was reassessed for subcellular localisation and directionality (see below), and, to ensure the reactions occur in most tissues including cardiac tissue, the expression of their genes and proteins was verified by using the Human Protein Atlas [9].
Partitioning reactions between the cytosol and mitochondrion in MitoCore
To partition the reactions into either the cytosol or mitochondrion, each enzyme was manually evaluated by using the mitochondrial localisation evidence in the MitoMiner database [26]. MitoMiner collates GFP tagging, large-scale mass-spectrometry mitochondrial proteomics studies and mitochondrial targeting sequence predictions, with detailed annotation from the Gene Ontology and metabolic pathway data from KEGG. MitoMiner also contains homology information, allowing localisation evidence to be shared amongst species. All available experimental localisation evidence, mitochondrial targeting sequence predictions and annotation were considered, including from homologs from mouse, rat and yeasts. For reactions with strong evidence for mitochondrial localisation, but where matrix localisation is unclear, the principle of metabolite availability was applied: a reaction can only be present in a compartment if all its substrates are available and its products can be used by reactions within the same compartment [18]. Reactions residing in the mitochondrial matrix or on the matrix side of the mitochondrial inner membrane were assigned to the mitochondrial compartment. Transport steps were created to connect the cytosol and matrix based on the transport properties of the membrane (active transport, diffusion, etc.). Each reaction was cross-referenced with the Virtual Metabolic Human database (https://vmh.uni.lu), and its identifiers were used for reactions and metabolites where possible.
Assigning reaction directionality in MitoCore
Reaction directionality was manually evaluated for each reaction in MitoCore. The KEGG [23], HumanCyc [24] and BRENDA [25] databases were consulted, and general rules of irreversibility were taken into account (for example, most reactions consume ATP rather than produce it, and carbon dioxide is normally produced rather than consumed) [30]. The ΔG values for reactions were also considered, both calculated by eQuilibrator [29] and estimated by using the group contribution method [28], and large changes noted. If reaction directionality was conflicted or unclear, the literature was consulted. If support for irreversibility was poor or a consensus could not be found, the reaction was assigned as reversible. The information used to make each assessment was recorded in the directionality evidence column of the annotation spreadsheet (Additional file 1). Further refinement of reaction directionality was used to eliminate loops that could produce metabolites such as ATP and NADH for 'free', and to prevent the interconversion of NADH and NADPH unless it had been experimentally verified.
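A sketch of how such thermodynamic evidence might be applied programmatically is given below; the ΔG° vector and the -30 kJ/mol cut-off are illustrative assumptions, not the thresholds used in the actual curation, which was manual.

```matlab
% Sketch: flag reactions with large negative estimated dG0 (kJ/mol) as
% candidates for an irreversible (forward-only) constraint, pending manual
% review. dG0 is assumed pre-computed, e.g. via eQuilibrator.
cutoff = -30;                        % illustrative threshold, kJ/mol
for i = 1:numel(model.rxns)
    if dG0(i) < cutoff
        model.lb(i) = 0;             % forward only
    end                              % otherwise leave reversible
end
```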
Defining and updating reactions in MitoCore
To improve the modelling of the proton motive force (PMF) across the mitochondrial inner membrane, a pseudo-metabolite was introduced to model the effect on the PMF of proton-coupled and electrogenic transport steps across the membrane. In total, 43 reactions representing respiratory complexes as well as proton-coupled transport steps were rewritten to use this new species. Next, to prevent unrealistic bypasses between free and complex-bound prosthetic flavin adenine dinucleotides, FADH and FAD were removed from all reactions and replaced with ubiquinone and ubiquinol. Finally, to better represent the current understanding of central metabolism pathways, some reactions were rewritten, such as the generation of ROS from complex I and the stoichiometry of the proton pumping of the respiratory complexes. In some cases a reaction's subcellular localisation was changed by defining mitochondrial-specific metabolite species, or new reactions were written using existing metabolite species.
To highlight where there are differences between a MitoCore reaction and the corresponding reaction in the Virtual Metabolic Human database, the MitoCore identifier used the Virtual Metabolic Human database identifier with the suffix 'MitoCore'. (N.B. Recon 2 also uses The Virtual Metabolic Human database identifiers.)
Testing the MitoCore model
The model was extensively tested. First, each reaction in the model was set as the objective function in a series of simulations to ensure all reactions were capable of carrying flux; where this was not the case, directionality constraints were re-evaluated for both the reaction and other members of the same pathway. Second, erroneous energy-generating cycles were manually identified and removed by running a succession of simulations in which ATP production was maximised and a wide range of different 'fuel' metabolites was provided. Finally, the model was checked to ensure it used physiologically characterised pathways under normal conditions and plausible mechanisms under perturbed conditions.
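The per-reaction flux test can be sketched as below; for reversible reactions the minimisation direction would be checked as well, as noted in the comments.

```matlab
% Sketch: check every reaction can carry flux when set as the objective.
deadEnds = {};
for i = 1:numel(model.rxns)
    m = changeObjective(model, model.rxns{i});
    s = optimizeCbModel(m, 'max');       % for reversible reactions, a
                                         % 'min' pass should be run too
    if isempty(s.f) || abs(s.f) < 1e-9
        deadEnds{end+1} = model.rxns{i}; %#ok<SAGROW> cannot carry flux
    end
end
```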
The MitoCore model was encoded in SBML v2.1 and its validity checked with the SBML online validator [44].
Simulating metabolism
Metabolism was simulated by using flux balance analysis (FBA), which can be summarised as calculating the reaction turnover, or fluxes (flows) of metabolites through a network of biochemical reactions assuming a pseudo steady state [3]. The fluxes through the network are constrained by the stoichiometry and directionality of the reactions, as well as flux capacity and cytosol boundary uptake ranges. Cytosol boundary transport steps model the import and export of metabolites to the cell, but the overall rate of production and consumption of metabolites is assumed to be zero, hence a pseudo steady state. A metabolic objective function is chosen for each simulation, and FBA used to calculate an optimal set of reaction fluxes that maximise this function. For simulations here, maximum ATP production (by maximising flux through the pseudo reaction of ATP hydrolysis) was used as the objective function because energy generation is a major purpose of central metabolism, particularly in cardiomyocytes. To determine the robustness of the model solutions and identify the most important reactions contributing to the optimal solution, flux variability analysis (FVA) was used. FVA calculates the flux ranges of each reaction that still give the same (maximal) value of the objective function as the optimal solution, or a fraction of it. FVA was applied at 100% and 98% of the optimal ATP production.
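In standard notation, with stoichiometric matrix $S$, flux vector $v$ and objective weights $c$, the linear programmes solved by FBA and FVA are:

$$\text{FBA:}\quad \max_{v}\ c^{\top}v \quad \text{s.t.}\quad S\,v = 0,\ \ v_{\min} \le v \le v_{\max}$$

$$\text{FVA:}\quad \min_{v}/\max_{v}\ v_{j} \quad \text{s.t.}\quad S\,v = 0,\ \ v_{\min} \le v \le v_{\max},\ \ c^{\top}v \ge \gamma\, Z^{*}$$

where $Z^{*}$ is the FBA optimum and $\gamma$ was set to 1.0 or 0.98, corresponding to the 100% and 98% analyses described above.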
For simulations of ATP yield, all cytosol boundary uptake fluxes for metabolites that can be degraded to produce ATP were set to zero, oxygen was increased to 50 μmol/min/g of dry weight (so that limited oxygen availability did not affect results), while other cytosol boundary conditions were unaltered. The uptake flux of each metabolite of interest was then increased to 1 μmol/min/g of dry weight. Geometric FBA simulations were then performed with maximum ATP production as the objective function. (When Recon 2.2 simulations did not converge, the 'epsilon' and 'flexRel' parameters were relaxed from their defaults.)
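This protocol can be sketched as follows, using the usual convention that uptake is a negative lower bound on an exchange reaction; the exchange-reaction identifiers are placeholders rather than MitoCore's actual ones.

```matlab
% Sketch: ATP yield of glucose in isolation. 'fuelRxns' lists the uptake
% reactions of all degradable fuels; identifiers are placeholders.
model = changeRxnBounds(model, fuelRxns, 0, 'l');           % no fuel uptake
model = changeRxnBounds(model, 'EX_o2_MitoCore', -50, 'l'); % excess oxygen
model = changeRxnBounds(model, 'EX_glc_MitoCore', -1, 'l'); % 1 umol/min/g
flux  = geometricFBA(model);   % geometric FBA: unique, central solution;
                               % objective assumed set to ATP hydrolysis
```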
For simulations of fumarase deficiency, the reactions corresponding to mitochondrial and cytosolic fumarate hydratase were inactivated in each model, whilst the uptake values were left at their defaults. Simulations were performed with geometric FBA and an objective function of maximum ATP production. To enable geometric FBA of the Recon 2.2 model to converge to a solution, the efflux of acetone (Recon 2.2 reaction ID: EX_acetone), tetradecanoate (Recon 2.2 reaction ID: EX_ttdca), 3-(4-hydroxyphenyl)pyruvate (Recon 2.2 reaction ID: EX_34hpp_), 4-methyl-2-oxopentanoate (Recon 2.2 reaction ID: EX_4mop) and bicarbonate (Recon 2.2 reaction ID: EX_hco3) was prevented.
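A sketch of the knockout is shown below; the two fumarate hydratase reaction identifiers are placeholders for the models' actual ones.

```matlab
% Sketch: fumarase deficiency. Both isoforms are closed ('b' sets the
% lower and upper bounds to zero) and maximum ATP production recomputed.
model = changeRxnBounds(model, {'FUMc_sketch','FUMm_sketch'}, 0, 'b');
flux  = geometricFBA(model);   % objective: ATP hydrolysis pseudo reaction
```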
For simulations of the effect of proton leak on maximum ATP production, the lower bound of the reaction representing the gene UCP2 (MitoCore reaction ID: HtmB_MitoCore) was increased over a series of simulations, thus forcing a minimum flux, representing proton leak, through the reaction.
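The leak series can be sketched as below; HtmB_MitoCore is the reaction identifier given above, while the range of leak values is illustrative.

```matlab
% Sketch: sweep the minimum proton leak through UCP2 (HtmB_MitoCore) and
% record maximum ATP production at each level.
leaks = 0:5:100;                            % illustrative leak fluxes
atp   = zeros(size(leaks));
for k = 1:numel(leaks)
    m      = changeRxnBounds(model, 'HtmB_MitoCore', leaks(k), 'l');
    s      = optimizeCbModel(m, 'max');
    atp(k) = s.f;
end
plot(leaks, atp); xlabel('proton leak'); ylabel('max ATP production');
```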
FBA simulations were performed by using MATLAB (Math Works, Inc., Natick, MA) and the COBRA Toolbox [45], with the linear programming solver GLPK (http://www.gnu.org/software/glpk).
Kirk PDW, Babtie AC, Stumpf MPH. Systems biology (un)certainties. Science. 2015;350:386–8.
Cornish-Bowden A, Mazat J-P, Nicolas S. Victor Henri: 111 years of his equation. Biochimie. 2014;107 Pt B:161–6.
Orth JD, Thiele I, Palsson BO. What is flux balance analysis? Nat Biotechnol. 2010;28:245–8.
Duarte NC, Becker SA, Jamshidi N, Thiele I, Mo ML, Vo TD, et al. Global reconstruction of the human metabolic network based on genomic and bibliomic data. Proc Natl Acad Sci U S A. 2007;104:1777–82.
Thiele I, Swainston N, Fleming RMT, Hoppe A, Sahoo S, Aurich MK, et al. A community-driven global reconstruction of human metabolism. Nat Biotechnol. 2013;31:419–25.
Ma H, Sorokin A, Mazein A, Selkov A, Selkov E, Demin O, et al. The Edinburgh human metabolic network reconstruction and its functional analysis. Mol Syst Biol. 2007;3:135.
Mardinoglu A, Agren R, Kampf C, Asplund A, Nookaew I, Jacobson P, et al. Integration of clinical data with a genome-scale metabolic model of the human adipocyte. Mol Syst Biol. 2013;9:649.
Mardinoglu A, Agren R, Kampf C, Asplund A, Uhlén M, Nielsen J. Genome-scale metabolic modelling of hepatocytes reveals serine deficiency in patients with non-alcoholic fatty liver disease. Nat Comms. 2014;5:3083.
Uhlén M, Fagerberg L, Hallström BM, Lindskog C, Oksvold P, Mardinoglu A, et al. Tissue-based map of the human proteome. Science. 2015;347:1260419.
Swainston N, Smallbone K, Hefzi H, Dobson PD, Brewer J, Hanscho M, et al. Recon 2.2: from reconstruction to model of human metabolism. Metabolomics. 2016;12:109.
Yizhak K, Chaneton B, Gottlieb E, Ruppin E. Modeling cancer metabolism on a genome scale. Mol Syst Biol. 2015;11:817.
Bordbar A, Monk JM, King ZA, Palsson BO. Constraint-based models predict metabolic and associated cellular functions. Nat Rev Genet. 2014;15:107–20.
Fritzemeier CJ, Hartleb D, Szappanos B, Papp B, Lercher MJ. Erroneous energy-generating cycles in published genome scale metabolic networks: identification and removal. PLoS Comput Biol. 2017;13:e1005494.
Watt IN, Montgomery MG, Runswick MJ, Leslie AGW, Walker JE. Bioenergetic cost of making an adenosine triphosphate molecule in animal mitochondria. Proc Natl Acad Sci U S A. 2010;107:16823–7.
Trinh CT, Wlaschin A, Srienc F. Elementary mode analysis: a useful metabolic pathway analysis tool for characterizing cellular metabolism. Appl Microbiol Biotechnol. 2009;81:813–26.
Zielinski DC, Jamshidi N, Corbett AJ, Bordbar A, Thomas A, Palsson BO. Systems biology analysis of drivers underlying hallmarks of cancer cell metabolism. Sci Rep. 2017;7:41241.
Di Filippo M, Colombo R, Damiani C, Pescini D, Gaglio D, Vanoni M, et al. Zooming-in on cancer metabolic rewiring with tissue specific constraint-based models. Comput Biol Chem. 2016;62:60–9.
Smith AC, Robinson AJ. A metabolic model of the mitochondrion and its use in modelling diseases of the tricarboxylic acid cycle. BMC Syst Biol. 2011;5:102.
Chouchani ET, Pell VR, Gaude E, Aksentijević D, Sundier SY, Robb EL, et al. Ischaemic accumulation of succinate controls reperfusion injury through mitochondrial ROS. Nature. 2014;515:431–5.
Ashrafian H, Czibik G, Bellahcene M, Aksentijević D, Smith AC, Mitchell SJ, et al. Fumarate is cardioprotective via activation of the Nrf2 antioxidant pathway. Cell Metab. 2012;15:361–71.
Zieliński ŁP, Smith AC, Smith AG, Robinson AJ. Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling. Mitochondrion. 2016;31:45–55.
Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC, Kitano H, et al. The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics. 2003;19:524–31.
Kanehisa M, Sato Y, Kawashima M, Furumichi M, Tanabe M. KEGG as a reference resource for gene and protein annotation. Nucleic Acids Res. 2016;44:D457–62.
Romero P, Wagg J, Green ML, Kaiser D, Krummenacker M, Karp PD. Computational prediction of human metabolic pathways from the complete human genome. Genome Biol. 2005;6:R2.
Chang A, Schomburg I, Placzek S, Jeske L, Ulbrich M, Xiao M, et al. BRENDA in 2015: exciting developments in its 25th year of existence. Nucleic Acids Res. 2015;43:D439–46.
Smith AC, Robinson AJ. MitoMiner v3.1, an update on the mitochondrial proteomics database. Nucleic Acids Res. 2016;44:D1258–61.
Calvo SE, Clauser KR, Mootha VK. MitoCarta2.0: an updated inventory of mammalian mitochondrial proteins. Nucleic Acids Res. 2016;44:D1251–7.
Jankowski MD, Henry CS, Broadbelt LJ, Hatzimanikatis V. Group contribution method for thermodynamic analysis of complex metabolic networks. Biophys J. 2008;95:1487–99.
Flamholz A, Noor E, Bar-Even A, Milo R. eQuilibrator—the biochemical thermodynamics calculator. Nucleic Acids Res. 2012;40:D770–5.
Ma H, Zeng A-P. Reconstruction of metabolic networks from genome data and analysis of their global structure for various organisms. Bioinformatics. 2003;19:270–7.
Mitchell P. Coupling of phosphorylation to electron and hydrogen transfer by a chemi-osmotic type of mechanism. Nature. 1961;191:144–8.
Cochemé HM, Murphy MP. Complex I is the major site of mitochondrial superoxide production by paraquat. J Biol Chem. 2008;283:1786–98.
Murphy MP. How mitochondria produce reactive oxygen species. Biochem J. 2009;417:1–13.
Phillips TM, Gibson JB, Ellison DA. Fumarate hydratase deficiency in monozygotic twins. Pediatr Neurol. 2006;35:150–3.
Whelan DT, Hill RE, McClorry S. Fumaric aciduria: a new organic aciduria, associated with mental retardation and speech impairment. Clin Chim Acta. 1983;132:301–8.
Kerrigan JF, Aleck KA, Tarby TJ, Bird CR, Heidenreich RA. Fumaric aciduria: clinical and imaging features. Ann Neurol. 2000;47:583–8.
Bourgeron T, Chretien D, Poggi-Bach J, Doonan S, Rabier D, Letouzé P, et al. Mutation of the fumarase gene in two siblings with progressive encephalopathy and fumarase deficiency. J Clin Invest. 1994;93:2514–8.
Strittmatter L, Li Y, Nakatsuka NJ, Calvo SE, Grabarek Z, Mootha VK. CLYBL is a polymorphic human enzyme with malate synthase and β-methylmalate synthase activity. Hum Mol Genet. 2014;23:2313–23.
Laskowski KR, Russell RR. Uncoupling proteins in heart failure. Curr Heart Fail Rep. 2008;5:75–9.
Stanley WC, Recchia FA, Lopaschuk GD. Myocardial substrate metabolism in the normal and failing heart. Physiol Rev. 2005;85:1093–129.
Nelson DL, Cox MM, Lehninger AL. Lehninger principles of biochemistry. New York: W. H. Freeman; 2013.
Dupont CH, Mazat JP, Guerin B. The role of adenine nucleotide translocation in the energization of the inner membrane of mitochondria isolated from ϱ+ and ϱo strains of Saccharomyces cerevisiae. Biochem Biophys Res Commun. 1985;132:1116–23.
Buchet K, Godinot C. Functional F1-ATPase essential in maintaining growth and membrane potential of human mitochondrial DNA-depleted ρ° cells. J Biol Chem. 1998;273:22983–9.
Bornstein BJ, Keating SM, Jouraku A, Hucka M. LibSBML: an API library for SBML. Bioinformatics. 2008;24:880–1.
Schellenberger J, Que R, Fleming RMT, Thiele I, Orth JD, Feist AM, et al. Quantitative prediction of cellular metabolism with constraint-based models: the COBRA toolbox v2.0. Nat Protoc. 2011;6:1290–307.
We wish to thank Lukasz Zielinski and Alexander Smith for their input and testing of the model, and Edmund Kunji for discussions on the transport of metabolites across the mitochondrial membrane.
ACS, FE and AJR were supported by the Medical Research Council, UK. JPM was supported by the Plan cancer 2014–2019 No BIO 2014 06 and the French Association against Myopathies.
All data generated or analysed during this study are included in this published article and its supplementary information files. In addition, the model and annotation file will be available at the MRC Mitochondrial Biology Unit website (http://www.mrc-mbu.cam.ac.uk/mitocore/).
Medical Research Council Mitochondrial Biology Unit, University of Cambridge, Cambridge Biomedical Campus, Hills Road, Cambridge, CB2 0XY, UK
Anthony C. Smith, Filmon Eyassu & Alan J. Robinson
Institute of Biochemistry and Genetics of the Cell, CNRS-UMR5095, 1 Rue Camille Saint Saëns, 33077, Bordeaux cedex, France
Jean-Pierre Mazat
University Bordeaux Segalen, 146 rue Léo Saignat, 33076, Bordeaux cedex, France
ACS and AJR conceived the study. ACS and JPM devised the methodology. ACS created the model. ACS, FE, JPM, and AJR participated in the curation and testing of the model. All authors contributed to, read and approved the final manuscript.
Correspondence to Alan J. Robinson.
Additional file 1: Companion annotation spreadsheet recording evidence and provenance for reactions and parameters used in MitoCore, and reaction fluxes when simulating maximum ATP production using default parameters for cardiomyocytes. (XLSX 289 kb)
Additional file 2: Experimental measurements of the relative contributions of ΔpH and ΔΨ to the proton motive force. (XLSX 11 kb)
Additional file 3: 105 new or altered reactions in the MitoCore model that differ from those in the Virtual Metabolic Human database (and used by the Recon 2.2 model). (XLSX 82 kb)
Additional file 4: MitoCore model encoded in the SBML format. (XML 776 kb)
Additional file 5: Flux distributions from the MitoCore and Recon 2.2 models for maximum ATP production with different metabolic fuels. (XLSX 445 kb)
Additional file 6: Flux distributions from the MitoCore and Recon 2.2 models when simulating fumarase deficiency and maximum ATP production. (XLSX 444 kb)
Additional file 7: Flux distributions from the MitoCore model when simulating maximum ATP production with varying proton leak through the reaction representing UCP2. (XLSX 245 kb)
Smith, A.C., Eyassu, F., Mazat, JP. et al. MitoCore: a curated constraint-based model for simulating human central metabolism. BMC Syst Biol 11, 114 (2017). https://doi.org/10.1186/s12918-017-0500-7
Keywords: Constraint-based model; Metabolic network; Flux balance analysis; Mitochondrial metabolism; Methods, software and technology
Models of tactile perception and development
Goren Gordon (2015), Scholarpedia, 10(3):32641. doi:10.4249/scholarpedia.32641, revision #150244
Dr. Goren Gordon, Personal Robots Group, Media Lab, MIT, Cambridge, MA, United States
Models of tactile perceptions are mathematical constructs that attempt to explain the process with which the tactile sense accumulates information about objects and agents in the environment. Since touch is an active sense, i.e., the sensor organ is moved during the process of sensation, these models often describe the motion strategies that optimize the perceptual outcome.
Models of tactile development attempt to explain the emergence of perception and the accompanying motor strategies from more basic principles. These models often involve learning of exploration strategies and are aimed at explaining ontogenetic development of behavior.
These models are used in two complementary ways. The first is in an attempt to explain and predict animal and human behaviors. For this purpose, the vibrissae system of rodents is often used, as it is a well-studied system in neuroscience. Vibrissae behaviors, i.e., movement strategies of rodents' facial hairs, during different perceptual tasks are modeled in an attempt to uncover the underlying common principles, as well as the neuronal mechanisms of tactile perception and development. The same models are also used in artificial constructs, e.g., robots, in an attempt both to validate the emergence of tactile sensorimotor strategies and to optimize tactile perception in novel robotic platforms.
Tactile perception refers to the information gathered about tactile objects in the environment. This information can be the position, shape, material or surface texture of the object. Models of tactile perception are thus aimed at explaining how this information is accumulated, integrated and used in tactile tasks, such as discrimination and localization.
Touch is an active sense, i.e., the sensory organ is usually moved in order to perceive the environment. Hence modeling tactile perception involves modeling the sensorimotor strategy that results in the accumulation of tactile information. In other words, these models describe the behavior, or motion, of the sensory organ as it interacts with the tactile object. Models attempt to either describe observed tactile-oriented behaviors in animals and humans or derive optimal perceptual strategies and then compare them to observed behaviors.
Figure 1: Active tactile perception models architecture.
Since touch, as opposed to vision, audition and smell, is a proximal sense, i.e., the sensory organ must be in contact with the object in order to perceive, locomotion is often part of the description of the tactile strategy. In nocturnal animals, such as many rodents, the vibrissae system, an array of moveable facial hairs, is used to perceive the environment in darkness. Navigation and object recognition are hence performed mainly through the tactile sense. Several models of tactile-guided locomotion have been developed to address this cross-modality integration.
As with any other sense, tactile perception changes during ontogenetic development, based on the agent's experience and interaction with the environment. Part of this change is the emergence of sensorimotor tactile strategies that explore tactile objects. For example, pups' vibrissae have been shown to move in different ways as they mature to adulthood (Grant et al., 2012). Developmental models try to describe this emergence of exploration behavior using basic principles of sensory-guided motor learning and intrinsic-motivation exploration.
Tactile perception modeling is usually composed of two main components, namely, perception and action. The perception component attempts to describe the integration of tactile information into a cohesive percept. The action component attempts to describe the motor strategies used in order to move the sensory organ so that it can acquire this information.
Tactile perception is usually modeled by either artificial neural networks or Bayesian inference. Artificial neural networks (ANN) are used in order to describe the learning process during the perceptual task. They are more closely related to the biological neural system and there are many computationally efficient tools to implement them. ANN are usually used in a supervised learning fashion, where the aim is to learn either tactile discrimination via labeled training sets, or continuous-variable forward models that capture the entire sensorimotor agent-environment interaction. Bayesian inference models capture the optimal integration of new observed information into a single framework of perceptual updates. Each new evidence from the possibly noisy environment is used in an optimal way to update the tactile perception in the current task. These models have fewer free parameters to tune and have been shown in recent years to describe many perceptual tasks in humans and animals very well.
Motor strategies for tactile perception are usually modeled by either optimal control theory or reinforcement learning. Optimal control theory is a mathematical formalism wherein one defines a cost function and then uses known mathematical techniques to find the optimal trajectory or policy that minimizes the cost. In tactile perceptual tasks, the cost function is usually a combination of perceptual errors, e.g., discrimination ambiguity, and energy costs of moving the sensory organ. Thus an optimal control solution can give the policy, or the optimal behavior, that maximizes perception while minimizing energy costs. Reinforcement learning is a computational paradigm that attempts to find the policy or behavior that maximizes future accumulated rewards. This is a gradual learning process in which repeated interactions with the environment result in convergence to an optimal policy. In tactile perceptual tasks, the reward is the completion of the task, and the model results in a converged sensorimotor tactile strategy. A major difference between optimal control and reinforcement learning is that the former is solved "off-line", while the latter is a learning algorithm that takes the interaction with the environment into account. While both result in an optimal strategy or policy, their formalisms and mathematical techniques are different.
Tactile perception and developmental models can be used in several ways. The first is to describe, explain and predict animal and human tactile behavior. In each tactile task, the observed behavior is recorded and analyzed. Models are then constructed to re-capture the same behavior and to produce predictions of behavior in novel tasks. The models are then validated in these newly predicted tasks.
The second application of tactile models is the understanding of the underlying neuronal mechanisms. For example, the vibrissae system of rodents has been studied for decades and has produced a deep understanding of the underlying neuronal network that results in tactile perception. Linking model components that describe tactile perception to specific brain areas or functions can increase understanding of these areas and may help explain abnormal behavior in both model and neurological terms.
Another application of tactile models is their implementation in artificial agents such as robots. Robotic platforms with tactile senses are inspired by new understanding from biological tactile perception models. Integrating motors into the sensory organ, e.g., artificial whisker robots or tactile-sensor-covered robotic fingers, enables new capabilities of object perception. However, controlling these robotic platforms becomes non-trivial, as known motor-oriented control strategies fail in these perception-oriented domains. Implementing biologically-inspired sensorimotor models results in better-performing robots.
Active sensing
Biological application
In an attempt to properly understand the tactile sensorimotor strategy rodents employ during a well-known perceptual task called pole localization, humans were used as models for rodents (Saig et al., 2012). Subjects were equipped with artificial whiskers at their fingertips and were asked to localize a vertical pole, i.e., determine which pole was more posterior, using only the information they got from their whiskers, since their vision and audition were blocked. Force and position sensors were placed on the finger-whisker connection, which enabled full access to the information entering the "system", i.e., the human subject. It was shown that humans spontaneously employed strategies similar to rodents', i.e., they "whisked" with their artificial whiskers by moving their hands synchronously and perceiving temporal differences according to pole location. In other words, they determined which pole was more posterior by moving their hands together and detecting which hand touched a pole first. While there were other possible non-active strategies to solve the task, e.g., by positioning their hands over the pole and sensing the angular difference between the hands, participants chose to employ an active sensing strategy.
In order to model this behavior, a Bayesian inference approach was selected for the tactile perception, whereas an optimal control theory approach was selected for the motor strategy analysis. The task was described as a simple binary discrimination task, i.e., which pole is more posterior, and a Bayes update rule was modeled by integrating perceived temporal differences between the two hands. A Gaussian noise model was assumed for the perceived temporal difference, introducing a parameter for the temporal noise, i.e., how close in time two stimuli can be and still be perceived as distinct. Another important parameter introduced in the Bayesian inference model was the confidence probability above which subjects decided to report their perceived answer. In other words, after repeated contacts with the poles, the probability of one pole being more posterior increases; above which threshold does the subject stop the interaction and report the perceived result?
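A minimal sketch of such an evidence-accumulation process is given below; the effect size mu and the simulation loop are illustrative, while the noise and confidence values echo the two model parameters described in the text (the actual fitted values are reported below).

```matlab
% Sketch: Bayesian accumulation of temporal-difference evidence for a
% binary pole-localization decision. Parameter values are illustrative.
mu = 0.05; sigma = 0.312; pConf = 0.84; % effect size, temporal noise (s),
                                        % confidence threshold
g  = @(x, m) exp(-(x - m).^2 / (2*sigma^2)); % Gaussian likelihood
                                             % (normalisation cancels)
p = 0.5; nContacts = 0;                 % flat prior over the two hypotheses
while p < pConf && (1 - p) < pConf
    dt = mu + sigma*randn;              % simulated noisy contact, H1 true
    p  = p*g(dt, mu) / (p*g(dt, mu) + (1 - p)*g(dt, -mu)); % Bayes update
    nContacts = nContacts + 1;          % grows as the task gets harder
end
```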
The selected Bayesian inference model of this tactile perception task resulted in only two parameters, temporal noise and confidence probability, and allowed their estimation based on fitting to experimental results. The number of contacts prior to reporting was shown to increase with task difficulty, measured by decreased distance between the poles, as was predicted by the Bayesian model. Fitting the model prediction to the experimental results enabled estimation of the parameters: the temporal noise was assessed to be $312ms$ and the confidence probability $84\%$. The temporal noise was somewhat higher than previously reported purely tactile temporal discrimination thresholds, due to the fact that this experimental setup was an active sensing setup which introduced also motor noise. The confidence probability was comparable to many other psychological experiments, within which subjects had to report their perceived result after accumulating information. Hence, the Bayesian inference model of tactile perception eloquently described the accumulation and integration of tactile information.
The motor strategies employed by the subjects were also structured: they initially exhibited longer, larger-amplitude movements, followed by progressively shorter, smaller-amplitude ones. To model this behavior, an optimal control theory approach was taken, in which a cost function was defined and optimization techniques were then used to derive the optimal policy that minimized costs. The cost function had three components: a perceptual error term representing the task; an energy cost term representing a penalty for laborious actions; and a perceptual cost term, symmetric to the energy term, representing a cost for acquiring too much information. The model captured the behavior exhibited by the subjects and resulted in a simple principle governing it, namely, maintaining a constant information flow. In other words, the optimal control model "distilled" the complex tactile-perceptual driven behavior into a single guiding principle.
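Schematically, such a cost function can be written as below; the weights and exact functional form are illustrative, not the ones fitted in the study:

$$ J = \int_0^T \left[\, \alpha\, e_{\mathrm{perc}}^2(t) + \beta\, u^2(t) + \gamma\, \dot{I}^2(t) \,\right] dt $$

where $e_{\mathrm{perc}}$ is the perceptual error, $u$ the motor command (energy) and $\dot{I}$ the information flow. With a quadratic penalty on $\dot{I}$ and a fixed total amount of information to acquire, the integral is minimized by a constant rate of information acquisition, consistent with the constant-information-flow principle described above.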
Robotic application
Inspired by the rodent vibrissae system, a robotic platform was constructed with fully controlled moving artificial whiskers (Sullivan et al., 2012). The robot was used in tasks similar to those studied in rodents, namely surface distance and texture estimation. In other words, the robot moved its whiskers in biologically-inspired motor strategies and collected information about the surfaces via sensors located at the base of the whiskers. The robot employed models of both tactile perception and motor strategies designed based on understanding of the biological vibrissae system.
Tactile perception was modeled using a naive Bayes approach: during training, the robot collected sensory information on each type of surface and each distance from the surface, constructing labeled probability distributions for each. Then, during validation, the robot whisked on a surface, collected information, and classified the texture and distance according to the most probable class, based on the trained distributions.
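A minimal sketch of this train-then-classify loop is shown below; the feature vectors, class structure and Gaussian assumption are all invented for illustration and differ from the actual whisker-base sensor signals.

```python
# Naive Bayes sketch: fit per-class Gaussian feature models during training,
# then classify a new whisk by the maximum posterior class (flat prior).
import numpy as np

rng = np.random.default_rng(1)
classes = [("rough", "near"), ("rough", "far"), ("smooth", "near"), ("smooth", "far")]
true_means = {c: rng.uniform(0, 5, size=2) for c in classes}   # hidden "physics"

params = {}
for c in classes:                        # training: 100 whisks per class
    X = rng.normal(true_means[c], 0.5, size=(100, 2))
    params[c] = (X.mean(axis=0), X.std(axis=0))

def classify(x):
    def log_post(c):
        mu, sd = params[c]               # independent Gaussians per feature
        return -0.5 * np.sum(((x - mu) / sd) ** 2 + np.log(2 * np.pi * sd**2))
    return max(classes, key=log_post)

x_new = rng.normal(true_means[("rough", "near")], 0.5)   # validation whisk
print("classified as:", classify(x_new))
```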
The motor strategy employed an observed rodent behavior, namely, Rapid Cessation of Protraction (RCP), in which rodents whisk with smaller amplitude after an initial contact with an object. This strategy results in a "light touch" with the surface on the second whisk and onward. The same behavior was modeled and executed in the robotic rodent, whereby the whisking amplitude was decreased after an initial perceived contact with the surface. The goal of the task and the specific models was to ascertain the potential benefits rodents might gain from employing such a strategy.
The results of the study showed that the robot classified both the texture and the distance of the surface much more efficiently and accurately when employing the Rapid Cessation of Protraction (RCP) strategy, compared to unmodulated whisking. Further analysis of the results showed that using RCP resulted in less noisy sensory information, which in turn resulted in improved classification. This model thus suggests that rodents employ the RCP strategy not only to keep their whiskers intact, but also to improve signal-to-noise ratio and tactile perception. It also enables the development of more robust and more accurate artificial agents with moving tactile sensors.
Tactile navigation
Since touch is a proximal sense, direct contact with objects in the environment is mandatory for tactile perception (Gordon et al., 2014b; Gordon et al., 2014c). In order to understand the exploration behavior of rodents, a model was constructed that attempted to capture the complexity and structure of their exploration patterns. When rodents are allowed to explore a new round dark arena on their own, they move around the arena and sense its walls using their whiskers. They exhibit a complex exploration pattern: they first explore the entrance to the arena, only then walk along the circumference walls of the arena, and only then explore the open space in the center of the arena. Their exploration is composed of excursions made up of an outbound exploratory part and a fast retreat part in which they return to their home cage.
This tactile-driven exploration strategy was modeled using a novelty-based approach which combined a tactile-perceptual representation of the arena and a motor strategy that balances between exploration motor primitives and retreats. For tactile perception of the arena, a Bayesian inference approach was taken to represent the forward model of locomotion. In other words, the arena was represented as the prediction of the sensory information in a given location and orientation, e.g., a wall is represented as "in location $x$ and orientation $o$, the left whisker is predicted to experience touch". This representation is updated, using Bayes rule and assuming sensory noise, whenever the animal registers a new tactile sensation at any location, i.e., the perceived tactile sensation is not necessarily the correct one.
The exploration motor strategy was taken to consist of a balance between exploration motor primitives and retreats, where novelty was used as the thresholding factor. Exploration motor primitives are policies that determine the locomotive behavior of the rodent based on its perceived tactile sensation; e.g., the wall-following primitive is the policy "if the left whisker senses a wall, go forward", whereas the wall-avoidance primitive is the policy "if the right whisker senses a wall, turn left". Three motor primitives were modeled, namely, circle-in-place, wall-following and wall-avoidance. Another "retreat primitive" was modeled as taking the shortest path, given the current estimate of the arena, from the current location to the home cage.
Figure 2: Novelty management model architecture for tactile-driven navigation (Gordon et al., 2014c).
The balance between these motor primitives was dictated by novelty, measured as the information gain at each time step in which the arena model was updated. In other words, whenever the tactile forward model of the arena was updated, the number of bits updated, quantified by the Kullback-Leibler divergence between the prior and posterior distributions, represented the novelty. Whenever novelty was higher than a certain threshold, the retreat primitive was employed. Whenever novelty stayed lower than a certain threshold for a certain amount of time, the next exploration motor primitive was employed. This generative model captured many of the observed behaviors of tactile-driven exploring rodents and showed that the basic principle of novelty management can model complex and structured exploration behaviors.
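The sketch below illustrates one such novelty-managed update for a single cell of the forward model. The sensor reliability and both thresholds are invented, and the published requirement that novelty stay low for some time before switching primitives is collapsed here to a single-step check.

```python
# Bernoulli forward-model update with KL-divergence novelty per observation.
import numpy as np

P_HIT = 0.9                        # assumed sensor reliability (illustrative)
RETREAT_TH, BORED_TH = 0.15, 0.01  # hypothetical novelty thresholds, in bits

def kl_bernoulli(q, p):
    """KL(posterior q || prior p) in bits, for Bernoulli distributions."""
    eps = 1e-12
    return (q * np.log2((q + eps) / (p + eps))
            + (1 - q) * np.log2((1 - q + eps) / (1 - p + eps)))

belief = 0.5                       # prior P(touch expected) at the current cell
for t, touched in enumerate([True, True, False, True]):
    like = P_HIT if touched else 1 - P_HIT       # P(obs | touch expected)
    miss = 1 - P_HIT if touched else P_HIT       # P(obs | no touch expected)
    posterior = like * belief / (like * belief + miss * (1 - belief))
    novelty = kl_bernoulli(posterior, belief)    # information gain of update
    action = ("retreat" if novelty > RETREAT_TH
              else "next primitive" if novelty < BORED_TH else "continue")
    print(f"t={t}: belief {belief:.2f} -> {posterior:.2f}, "
          f"novelty {novelty:.3f} bits -> {action}")
    belief = posterior
```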
A robotic platform with actuated artificial whiskers was used to study a tactile-based Simultaneous Localization And Mapping (tSLAM) model (Pearson et al., 2013). In this setup, the perceptual task is dual, i.e., the robot needs to both localize itself in space and also map out the objects in the environment. As opposed to many other SLAM models, this model used only odometry and tactile sensation from the whisker-array as its input, i.e., it had no vision.
The tactile-driven exploration of the environment consisted of an occupancy-map, particle filter-based model of tactile perception and an attention-based "orient" motor strategy. The tactile perception model was composed of an occupancy map in which each cell in the modeled environmental grid had a probability of being occupied by an object. Each whisk of the artificial whisker on the robot updated this occupancy map at the estimated location of the robot, i.e., if a whisker made contact with an object, the probability of occupancy in that cell was increased. To optimize the simultaneous estimation of location and mapping, a particle filter algorithm was used, where each particle had its own occupancy map that was updated according to the "flow of information" from the whiskers. For estimation, the particle with the highest posterior probability was taken.
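A minimal particle-filter sketch in this spirit is given below; the motion model, grid size, update rates and resampling rule are all invented for illustration and are not taken from the cited paper.

```python
# Each particle carries a pose hypothesis and its own occupancy grid; whisker
# contacts reweight particles by how well the contact matches their own map.
import numpy as np

rng = np.random.default_rng(2)
N, GRID = 50, (10, 10)
poses = rng.uniform(0, 10, size=(N, 2))      # particle pose hypotheses
maps = np.full((N, *GRID), 0.5)              # per-particle occupancy priors
weights = np.full(N, 1.0 / N)

def step(contact, odometry):
    global poses, maps, weights
    poses += odometry + rng.normal(0, 0.1, poses.shape)   # noisy motion model
    cells = np.clip(poses.astype(int), 0, 9)
    occ = maps[np.arange(N), cells[:, 0], cells[:, 1]]
    like = occ if contact else 1 - occ       # P(observation | particle's map)
    weights = weights * (0.05 + like)        # small floor avoids degeneracy
    weights /= weights.sum()
    # Push the touched cell's occupancy toward the observation
    maps[np.arange(N), cells[:, 0], cells[:, 1]] += 0.3 * (contact - occ)
    if 1.0 / np.sum(weights**2) < N / 2:     # resample when ESS collapses
        idx = rng.choice(N, size=N, p=weights)
        poses, maps, weights = poses[idx], maps[idx], np.full(N, 1.0 / N)

for contact in (True, False, True):
    step(contact, odometry=np.array([0.3, 0.0]))
best = np.argmax(weights)                    # highest-posterior particle
print("best pose estimate:", poses[best].round(2))
```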
The motor strategy governed the motion of the movable whisker array and was based on an attention model that executed an orienting behavior. In other words, a salience-based attention map was constructed from the perceived whisker information, resulting in an orienting behavior of the entire "head" of the robot towards the salient tactile object. Thus, once contact was made with an object in the environment, the robot explored that object in greater detail. This increased the information collection required by the tSLAM algorithm.
The results of the study showed that the robot, which made several exploratory bouts in an arena with several geometric shapes, performed simultaneous localization and mapping of its environment, in impressive agreement with the ground truth as measured by an overhead camera. This model shows how known and well-established models from other senses can be adapted to the unique properties of the tactile domain and inform possible perceptual characteristics of exploring rodents, as well as improve the performance of tactile-based robotic platforms.
Development of tactile perception
Developmental models attempt to explain the emergence of tactile perception and its accompanying motor strategies from more basic principles (Gordon and Ahissar, 2012). These models assume repeated interaction between the agent and its environment, through which statistical representations of the underlying mechanisms of sensory perception accumulate. Furthermore, in these developmental models the optimal sensorimotor strategies that maximize perceptual confidence are learned, not assumed or pre-designed.
Figure 3: Intrinsic reward reinforcement learning model architecture (Gordon et al., 2014c).
One framework for developmental models is artificial curiosity, wherein a reinforcement learning paradigm is used to learn the optimal policy, yet the reward function is intrinsic and proportional to the learning progress of sensory perception. In one instantiation of this framework in the tactile domain, an artificial neural network was used as the tactile forward model, i.e., the network predicted the next sensory state based on the current state and the action performed. More specifically, the framework was implemented on the vibrissae system, where the sensory states were composed of whisker angle and binary contact information, and the action was protraction (increasing whisker angle) or retraction (decreasing whisker angle). Thus, the ANN learned to map objects in the whisker field, e.g., given the current whisker angle and no contact, whether protraction will induce contact (an object is present) or not. By moving the whisker, the tactile perceptual model learned about the environment.
The question the developmental model tries to answer is: what is the optimal way to move the whisker so as to maximize the efficiency of mapping the environment? For this purpose, intrinsic reward reinforcement learning was used, where the reward was proportional to the prediction error of the perceptual ANN. Thus, the more prediction errors were made, the higher the reward, exemplifying the principle that "you learn by making mistakes". The policy converged to moving the whisker toward the more unknown places.
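A minimal sketch of this curiosity loop is given below, with a tabular predictor standing in for the ANN; the one-whisker world, discretization, learning rates and object position are all assumptions.

```python
# Curiosity-driven whisking: a forward model predicts contact from
# (angle, action); the agent's reward is its own prediction error.
import numpy as np

rng = np.random.default_rng(3)
ANGLES, ACTIONS = 20, (+1, -1)      # discretized angle; protract / retract
OBJECT_AT = 14                      # contact occurs at angles >= this (made up)

pred = np.full((ANGLES, 2), 0.5)    # forward model: P(contact | angle, action)
q = np.zeros((ANGLES, 2))           # action values driven by intrinsic reward

angle = 0
for step in range(2000):
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(q[angle]))
    new_angle = int(np.clip(angle + ACTIONS[a], 0, ANGLES - 1))
    contact = float(new_angle >= OBJECT_AT)
    reward = abs(contact - pred[angle, a])              # intrinsic: prediction error
    pred[angle, a] += 0.2 * (contact - pred[angle, a])  # forward-model learning
    q[angle, a] += 0.1 * (reward + 0.9 * q[new_angle].max() - q[angle, a])
    angle = new_angle

# Prediction error (hence reward) ends up concentrated at the object boundary,
# pulling the whisker toward the least-known region of its field.
print("learned contact map (protraction):", pred[:, 0].round(2))
```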
The results of this developmental model showed the convergence of whisking behaviors, starting from random motion and ending up with behaviors observed in adult rodents, e.g., periodic whisking for learning free-space and touch-induced pumps (Deutsch et al., 2012) for localizing tactile objects in the whisker field. The model suggests that these behaviors are learned during development and are not innate in the rodent brain. Furthermore, the model suggests developmental-specific brain connectivity, between the perceptual-learning brain areas, e.g., barrel cortex, and the reward system, e.g., basal ganglia, such that the former supplied the reward signal to the latter.
A study of artificial curiosity principles on a robotic finger platform with tactile sensors was also performed (Pape et al., 2012). The goal was to study the emergence of tactile-oriented finger movements that optimize tactile perception of surface textures. The robotic platform was a finger with two tendon-based actuators and a $2 \times 2$ array of 3D Micro-Electro-Mechanical System (MEMS) tactile sensors at its tip. The finger was able to flex in order to touch a surface with changing textures.
For tactile perception, a clustering algorithm was used to distinguish between the frequency spectra of $0.33\ \textrm{s}$ windows of the MEMS recordings. This unsupervised learning model represented the abstraction of tactile sensory information into discrete tactile perceptions. However, the clustering was performed only on recent observations and was thus dependent on the movement of the finger, e.g., free movements without contact resulted in different spectra than tapping on the surface. The question asked in this study was "Which skills will be learned by intrinsically motivating the robotic finger to learn about different tactile perceptions?".
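The sketch below illustrates this unsupervised abstraction step, from windowed signals to spectra to clusters; the synthetic signal models, sampling rate and cluster count are invented for illustration.

```python
# Cluster short-window frequency spectra into discrete tactile "perceptions".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
fs, win = 300, 100                    # ~0.33 s windows at 300 Hz (assumed)

def window(kind):
    t = np.arange(win) / fs
    if kind == "free":  return rng.normal(0, 0.05, win)             # no contact
    if kind == "tap":   return np.sin(2*np.pi*20*t) * np.exp(-30*t) # transient
    return np.sin(2*np.pi*80*t) + 0.3*rng.normal(size=win)          # sliding

X = np.array([np.abs(np.fft.rfft(window(k)))
              for k in ["free", "tap", "slide"] * 30])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))   # three discrete perceptions
```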
For this purpose, a rewarding mechanism was developed such that intrinsic rewards were given for various aspects of exploration: reward was high for unexplored states of the finger position, encouraging exploration; reward was given for ending up in a tactile perceptual state, thus driving sensation towards a specific tactile perception, embodying active sensing principles; and reward was given for skills that were still changing, thus focusing on the stabilization of skills. This composite reward mechanism ensured the appearance of several intrinsically motivated, stabilized skills that were aimed at reaching specific tactile perceptions. Each developed skill thus resulted in a unique perception in a repeatable manner.
The study resulted in the emergence of several specific intrinsically motivated skills:
free movements that avoided the surface, resulting in a free-air tactile perception;
tapping movements that resulted in unique spectra of the surface; and
sliding movements that resulted in texture-specific spectra.
These well-known and documented tactile strategies of human finger-driven tactile perception emerged from intrinsic motivation and were not pre-designed. Thus, the developmental model resulted in learned tactile skills that were associated with unique tactile perceptions.
Deutsch, D; Pietr, M; Knutsen, P M; Ahissar, E and Schneidman, E (2012). Fast feedback in active sensing: Touch-induced changes to whisker-object interaction. PLoS One 7(9): e44272.
Gordon, G and Ahissar, E (2012). Hierarchical curiosity loops and active sensing. Neural Networks 32: 119-129.
Gordon, G; Fonio, E and Ahissar, E (2014a). Emergent exploration via novelty management. The Journal of Neuroscience 34(38): 12646-12661.
Gordon, G; Fonio, E and Ahissar, E (2014b). Learning and control of exploration primitives. Journal of Computational Neuroscience 37(2): 259-280.
Grant, R A; Mitchinson, B and Prescott, T J (2012). The development of whisker control in rats in relation to locomotion. Developmental Psychobiology 54(2): 151-168.
Pape, L et al. (2012). Learning tactile skills through curious exploration. Frontiers in Neurorobotics 6: 6.
Pearson, M J et al. (2013). Simultaneous localisation and mapping on a multi-degree of freedom biomimetic whiskered robot. In: 2013 IEEE International Conference on Robotics and Automation (ICRA) (pp. 586-592).
Saig*, A; Gordon*, G; Assa, E; Arieli, A and Ahissar E (2012). Motor-sensory confluence in tactile perception. The Journal of Neuroscience 32(40): 14022-14032.
Sullivan, J C et al. (2012). Tactile discrimination using active whisker sensors. IEEE Sensors Journal 12(2): 350-362.
Prescott, T J; Mitchinson, B and Grant, R A (2011). Vibrissal behavior and function. Scholarpedia 6(10): 6642. http://www.scholarpedia.org/article/Vibrissal_behavior_and_function. (see also pages mmm-nnn of this book)
Schultz, W (2007). Reward signals. Scholarpedia 2(6): 2184. http://www.scholarpedia.org/article/Reward_signals.
Sporns, O (2007). Brain connectivity. Scholarpedia 2(10): 4695. http://www.scholarpedia.org/article/Brain_connectivity.
Woergoetter, F and Porr, B (2008). Reinforcement learning. Scholarpedia 3(3): 1448. http://www.scholarpedia.org/article/Reinforcement_learning.
Johnson, D H (2006). Signal-to-noise ratio. Scholarpedia 1(12): 2088. http://www.scholarpedia.org/article/Signal-to-noise_ratio.
Sponsored by: Dr. Jonathan R. Williford, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, MD, USA
Reviewed by: Dr. Martin J. Pearson, Bristol Robotics Laboratory
Reviewed by: Dr. David S Deutsch, Princeton University, Princeton, NJ, USA
"Models of tactile perception and development" by Goren Gordon is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Permissions beyond the scope of this license are described in the Terms of Use | CommonCrawl |
3.6 E: Rates of Change Exercises
[ "article:topic", "calcplot:yes", "license:ccbyncsa", "showtoc:no", "transcluded:yes", "source-https://math.libretexts.org-undefined" ]
https://math.libretexts.org/@app/auth/3/login?returnto=https%3A%2F%2Fmath.libretexts.org%2FCourses%2FRemixer_University%2FUsername%253A_hdagnew%40ucdavis.edu%2FCourses%252F%252FRemixer_University%252F%252FUsername%253A_hdagnew%40ucdavis.edu%252F%252FMonroe2%2FCourses%252F%252FRemixer_University%252F%252FUsername%253A_hdagnew%40ucdavis.edu%252F%252FMonroe2%252F%252FChapter_3%253A_Derivatives%2FCourses%252F%252FRemixer_University%252F%252FUsername%253A_hdagnew%40ucdavis.edu%252F%252FMonroe2%252F%252FChapter_3%253A_Derivatives%252F%252F3.6_E%253A_Rates_of_Change_Exercises
MTH 210 Calculus I
Chapter 3: Derivatives
3.6: Derivatives as Rates of Change
144) A car driving along a freeway with traffic has traveled \(s(t)=t^3−6t^2+9t\) meters in \(t\) seconds.
a. Determine the time in seconds when the velocity of the car is 0.
b. Determine the acceleration of the car when the velocity is 0.
145) A herring swimming along a straight line has traveled \(s(t)=\frac{t^2}{t^2+2}\) feet in \(t\) seconds. Determine the velocity of the herring when it has traveled 3 seconds.
Solution: \(\frac{12}{121}\) or 0.0992 ft/s
146) The population in millions of arctic flounder in the Atlantic Ocean is modeled by the function \(P(t)=\frac{8t+3}{0.2t^2+1}\), where \(t\) is measured in years.
a. Determine the initial flounder population.
b. Determine \(P′(10)\) and briefly interpret the result.
147) [T] The concentration of antibiotic in the bloodstream \(t\) hours after being injected is given by the function \(C(t)=\frac{2t^2+t}{t^3+50}\), where \(C\) is measured in milligrams per liter of blood.
a. Find the rate of change of \(C(t).\)
b. Determine the rate of change for \(t=8,12,24\),and \(36\).
c. Briefly describe what seems to be occurring as the number of hours increases.
Solution: \(a. \frac{−2t^4−2t^3+200t+50}{(t^3+50)^2}\) \(b. −0.02395\) mg/L-hr, −0.01344 mg/L-hr, −0.003566 mg/L-hr, −0.001579 mg/L-hr c. The rate at which the concentration of drug in the bloodstream decreases is slowing to 0 as time increases.
148) A book publisher has a cost function given by \(C(x)=\frac{x^3+2x+3}{x^2}\), where x is the number of copies of a book in thousands and C is the cost, per book, measured in dollars. Evaluate \(C′(2)\)and explain its meaning.
For the following exercises, the given functions represent the position of a particle traveling along a horizontal line.
a. Find the velocity and acceleration functions.
b. Determine the time intervals when the object is slowing down or speeding up.
150) \(s(t)=2t^3−3t^2−12t+8\)
151) \(s(t)=2t^3−15t^2+36t−10\)
Solution: a. \(v(t)=6t^2−30t+36,a(t)=12t−30\); b. speeds up \((2,2.5)∪(3,∞)\), slows down \((0,2)∪(2.5,3)\)
152) \(s(t)=\frac{t}{1+t^2}\)
153) A rocket is fired vertically upward from the ground. The distance s in feet that the rocket travels from the ground after t seconds is given by \(s(t)=−16t^2+560t\).
a. Find the velocity of the rocket 3 seconds after being fired.
b. Find the acceleration of the rocket 3 seconds after being fired.
Solution: \(a. 464 ft/s\) \(b. −32ft/s^2\)
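For completeness, the differentiation behind these values:

\[v(t)=s'(t)=-32t+560, \qquad v(3)=464\ \text{ft/s}; \qquad a(t)=v'(t)=-32\ \text{ft/s}^2.\]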
154) A ball is thrown downward with a speed of 8 ft/s from the top of a 64-foot-tall building. After t seconds, its height above the ground is given by \(s(t)=−16t^2−8t+64.\)
a. Determine how long it takes for the ball to hit the ground.
b. Determine the velocity of the ball when it hits the ground.
155) The position function \(s(t)=t^2−3t−4\) represents the position of the back of a car backing out of a driveway and then driving in a straight line, where s is in feet and t is in seconds. In this case, \(s(t)=0\) represents the time at which the back of the car is at the garage door, so \(s(0)=−4\) is the starting position of the car, 4 feet inside the garage.
a. Determine the velocity of the car when \(s(t)=0\).
b. Determine the velocity of the car when \(s(t)=14\).
Solution: \(a. 5 ft/s\) \(b. 9 ft/s\)
156) The position of a hummingbird flying along a straight line in t seconds is given by \(s(t)=3t^3−7t\) meters.
a. Determine the velocity of the bird at \(t=1\) sec.
b. Determine the acceleration of the bird at \(t=1\) sec.
c. Determine the acceleration of the bird when the velocity equals 0.
157) A potato is launched vertically upward with an initial velocity of 100 ft/s from a potato gun at the top of an 85-foot-tall building. The distance in feet that the potato travels from the ground after \(t\) seconds is given by \(s(t)=−16t^2+100t+85\).
a. Find the velocity of the potato after \(0.5s\) and \(5.75s\).
b. Find the speed of the potato at 0.5 s and 5.75 s.
c. Determine when the potato reaches its maximum height.
d. Find the acceleration of the potato at 0.5 s and 1.5 s.
e. Determine how long the potato is in the air.
f. Determine the velocity of the potato upon hitting the ground.
Solution: a. 84 ft/s, −84 ft/s b. 84 ft/s c. \(\frac{25}{8}s\) d. \(−32ft/s^2\) in both cases e. \(\frac{1}{8}(25+\sqrt{965})s\) \(f. −4\sqrt{965}ft/s\)
158) The position function \(s(t)=t^3−8t\) gives the position in miles of a freight train where east is the positive direction and \(t\) is measured in hours.
a. Determine the direction the train is traveling when \(s(t)=0\).
b. Determine the direction the train is traveling when \(a(t)=0\).
c. Determine the time intervals when the train is slowing down or speeding up.
159) The following graph shows the position \(y=s(t)\) of an object moving along a straight line.
a. Use the graph of the position function to determine the time intervals when the velocity is positive, negative, or zero.
b. Sketch the graph of the velocity function.
c. Use the graph of the velocity function to determine the time intervals when the acceleration is positive, negative, or zero.
d. Determine the time intervals when the object is speeding up or slowing down.
Solution: a. Velocity is positive on \((0,1.5)∪(6,7)\), negative on \((1.5,2)∪(5,6)\), and zero on \((2,5)\).
c. Acceleration is positive on \((5,7)\), negative on \((0,2)\), and zero on \((2,5)\). d. The object is speeding up on \((6,7)∪(1.5,2)\) and slowing down on \((0,1.5)∪(5,6)\).
160) The cost function, in dollars, of a company that manufactures food processors is given by \(C(x)=200+\frac{7}{x}+\frac{x}{27}\), where \(x\) is the number of food processors manufactured.
a. Find the marginal cost function.
b. Find the marginal cost of manufacturing 12 food processors.
c. Find the actual cost of manufacturing the thirteenth food processor.
161) The price p (in dollars) and the demand x for a certain digital clock radio is given by the price–demand function \(p=10−0.001x\).
a. Find the revenue function \(R(x)\)
b. Find the marginal revenue function.
c. Find the marginal revenue at \(x=2000\) and \(5000\).
Solution: a. \(R(x)=10x−0.001x^2\) b.\( R′(x)=10−0.002x\) c. $6 per item, $0 per item
162) [T] A profit is earned when revenue exceeds cost. Suppose the profit function for a skateboard manufacturer is given by \(P(x)=30x−0.3x^2−250\), where \(x\) is the number of skateboards sold.
a. Find the exact profit from the sale of the thirtieth skateboard.
b. Find the marginal profit function and use it to estimate the profit from the sale of the thirtieth skateboard.
163) [T] In general, the profit function is the difference between the revenue and cost functions: \(P(x)=R(x)−C(x)\).
Suppose the price-demand and cost functions for the production of cordless drills is given respectively by \(p=143−0.03x\) and \(C(x)=75,000+65x\), where \(x\) is the number of cordless drills that are sold at a price of \(p\) dollars per drill and \(C(x)\) is the cost of producing \(x\) cordless drills.
b. Find the revenue and marginal revenue functions.
c. Find \(R′(1000)\) and \(R′(4000)\). Interpret the results.
d. Find the profit and marginal profit functions.
e. Find \(P′(1000)\) and \(P′(4000)\). Interpret the results.
Solution: a. \(C′(x)=65\) b. \(R(x)=143x−0.03x^2\),\(R′(x)=143−0.06x\) c. \(83,−97\). At a production level of 1000 cordless drills, revenue is increasing at a rate of $83 per drill; at a production level of 4000 cordless drills, revenue is decreasing at a rate of $97 per drill. d. \(P(x)=−0.03x^2+78x−75000,P′(x)=−0.06x+78\) e. 18,−162. At a production level of 1000 cordless drills, profit is increasing at a rate of $18 per drill; at a production level of 4000 cordless drills, profit is decreasing at a rate of $162 per drill.
164) A small town in Ohio commissioned an actuarial firm to conduct a study that modeled the rate of change of the town's population. The study found that the town's population (measured in thousands of people) can be modeled by the function \(P(t)=−\frac{1}{3}t^3+64t+3000\), where \(t\) is measured in years.
a. Find the rate of change function \(P′(t)\) of the population function.
b. Find \(P′(1),P′(2),P′(3)\), and \(P′(4)\). Interpret what the results mean for the town.
c. Find \(P''(1),P''(2),P''(3)\), and \(P''(4)\). Interpret what the results mean for the town's population.
165) [T] A culture of bacteria grows in number according to the function \(N(t)=3000(1+\frac{4t}{t^2+100})\), where \(t\) is measured in hours.
a. Find the rate of change of the number of bacteria.
b. Find \(N′(0),N′(10),N′(20)\), and \(N′(30)\).
c. Interpret the results in (b).
d. Find \(N''(0),N''(10),N''(20),\) and \(N''(30)\). Interpret what the answers imply about the bacteria population growth.
Solution: a. \(N′(t)=3000\frac{−4t^2+400}{(t^2+100)^2}\) b. \(120,0,−14.4,−9.6\) c. The bacteria population increases from time 0 to 10 hours; afterwards, the bacteria population decreases. d. \(0,−6,0.384,0.432\). The rate at which the bacteria population is increasing is decreasing during the first 10 hours. Afterwards, the bacteria population is decreasing at a decreasing rate.
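The rate function in part (a) is a direct quotient-rule computation:

\[N'(t)=3000\cdot\frac{4(t^{2}+100)-4t\cdot 2t}{(t^{2}+100)^{2}}=3000\cdot\frac{-4t^{2}+400}{(t^{2}+100)^{2}}.\]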
166) The centripetal force of an object of mass m is given by \(F(r)=\frac{mv^2}{r}\), where \(v\) is the speed of rotation and \(r\) is the distance from the center of rotation.
a. Find the rate of change of centripetal force with respect to the distance from the center of rotation.
b. Find the rate of change of centripetal force of an object with mass 1000 kilograms, velocity of 13.89 m/s, and a distance from the center of rotation of 200 meters.
The following questions concern the population (in millions) of London by decade in the 19th century, which is listed in the following table.
Year since 1800 | Population (millions)
1 | 0.8975

Population of London. Source: http://en.wikipedia.org/wiki/Demographics_of_London
167) [T]
a. Using a calculator or a computer program, find the best-fit linear function to measure the population.
b. Find the derivative of the equation in a. and explain its physical meaning.
c. Find the second derivative of the equation and explain its physical meaning.
Solution: a. \(P(t)=0.03983t+0.4280\) b. \(P′(t)=0.03983\). The population is increasing. c. \(P''(t)=0\). The rate at which the population is increasing is constant.
168) [T]

a. Using a calculator or a computer program, find the best-fit quadratic curve through the data.
b. Find the derivative of the equation and explain its physical meaning.
Improvement of phylogenetic method to analyze compositional heterogeneity
Volume 11 Supplement 4
Selected papers from the 10th International Conference on Systems Biology (ISB 2016)
Zehua Zhang1,
Kecheng Guo1,
Gaofeng Pan1,
Jijun Tang1,2 &
Fei Guo1
BMC Systems Biology volume 11, Article number: 79 (2017)
Phylogenetic analysis is a key way to understand current research in biological processes and to test theories of evolution by natural selection. The evolutionary relationship between species is generally reflected in the form of phylogenetic trees. Many methods for constructing phylogenetic trees are based on optimization criteria. We extract features from the biological data via modeling, and then compare these characteristics to study the evolution between species.
Here, we use maximum likelihood and Bayesian inference to establish phylogenetic trees; a multi-chain Markov chain Monte Carlo sampling method is used to select the optimal phylogenetic tree, resolving the local-optimum problem. The correlation model of phylogenetic analysis assumes that phylogenetic trees are built on homogeneous data, but a large deviation arises in the presence of heterogeneous data. We use conscious detection to address compositional heterogeneity. Our method is evaluated on two sets of experimental data: a group of bacterial 16S ribosomal RNA gene data, and a group of genetic data from five homologous species.
Our method obtains accurate phylogenetic trees on the homologous data, and also detects the compositional heterogeneity of the experimental data. We provide an efficient method to enhance the accuracy of the generated phylogenetic trees.
Phylogenetic analysis plays an important role in understanding current research in biological processes and in testing theories of evolution by natural selection. We extract features from the biological data via modeling, and then compare these characteristics to study the evolution between species. The evolutionary relationship between species is generally reflected in the form of phylogenetic trees. Phylogenetic analysis can help to understand the evolutionary history of biological processes, and has become an important data source for the development of large-scale genomic data [1].
Many methods for constructing phylogenetic trees are based on optimization criteria, such as maximum parsimony, maximum likelihood and minimum evolution. The maximum parsimony (MP) approach [2, 3] examines all possible topologies, or a certain number of topologies, and chooses the real or approximate phylogenetic tree with the fewest evolutionary changes. The maximum likelihood (ML) approach [4, 5] tries to estimate trees by formulating a probabilistic model of evolution and applying known statistical methods. It selects the phylogenetic tree that yields the highest probability of the evolutionary relationship. The minimum evolution (ME) approach [6] searches for the phylogenetic tree that minimizes the total branch length. It is based on the assumption that the phylogenetic tree with the smallest total branch length is most likely to be the true one.
The correlation model of phylogenetic analysis assumes that phylogenetic trees are built on homogeneous data [7–10]. However, there exists a large deviation in the presence of heterogeneous data. As early as twenty years ago, a first computational method [11] was proposed to detect the heterogeneity problem, which made people doubt the credibility of phylogenetic analysis. Later, a Markov model [12] of DNA sequences was applied in phylogenetics. The Jukes-Cantor model [13] has been improved to take into account unequal nucleotide compositions, different rates of change from one nucleotide to another, variation in the form of invariant sites, and discrete gamma-distributed rates of variable sites. At the same time, researchers realized that the process of evolution can differ between evolutionary trees. It is obvious that different global rates are often observed in fast- and slow-evolving species.
In this paper, we use maximum likelihood and Bayesian inference to establish phylogenetic trees; a multi-chain Markov chain Monte Carlo sampling method is used to select the optimal phylogenetic tree, resolving the local-optimum problem. We use two different instantaneous rate matrices, whose exchangeability components are symmetric, implying time-reversibility. We allow more than one composition vector to model compositional heterogeneity, so that the overall model is tree-heterogeneous. The analysis is then not reversible, and the likelihood depends on the position of the root. Compared to bootstrapping, Markov chain Monte Carlo yields a much larger sample of trees in the same computational time.
The correlation model of phylogenetic analysis assumes that phylogenetic trees are built on homogeneous data, but a large deviation arises in the presence of heterogeneous data. The sample of trees produced by Markov chain Monte Carlo is highly auto-correlated, whereas many fewer bootstrapping replicates are sufficient. We apply conscious detection to the phylogenetic trees produced by multi-chain Markov chain Monte Carlo sampling, analyzing multiple samplings and comparing the different samples obtained from the estimated values. We use conscious detection to address compositional heterogeneity. Our method is evaluated on two sets of experimental data: a group of bacterial 16S ribosomal RNA gene data, and a group of genetic data from five homologous species. Our method obtains accurate phylogenetic trees on the homologous data, and also detects the compositional heterogeneity of the experimental data. We provide an efficient method to enhance the accuracy of the generated phylogenetic trees.
We construct a phylogenetic tree for a set of DNA sequences. Our method generally contains the following steps: aligning the sequences [14–16], building phylogenetic trees, and selecting the phylogenetic tree.
Aligning sequence
The storage of genetic information differs between species, for example in the length and carrier of the genetic information. These differences will affect our subsequent analysis. Therefore, we should arrange all possibly similar sites in the same position, via a progressive algorithm for multiple sequence alignment. We adopt the representative evolutionary multiple sequence alignment algorithm ClustalW [17–19]. It reports the alignment score, in the form of identities, similarities and differences, and a guide tree of the evolutionary relationship between the aligned sequences.
Building phylogenetic trees
The phylogenetic tree consists of many nodes and branches, where a node represents a taxon, namely a species or sequence, and a branch represents the evolutionary relationship between species [20, 21]. All nodes are divided into external nodes and internal nodes. In general, an external node represents an actually observed taxon, while an internal node represents the location of an evolutionary event.
Phylogeny model
Given the genetic information, we need a specific phylogeny model to predict the evolutionary tree. First, we use a substitution model expressed in terms of conversion rates. In general, the instantaneous conversion matrix is expressed as follows.

$$\begin{aligned} Q = \left(\begin{array}{cccc} -\mu(a\pi_{C} + b\pi_{G} + c\pi_{T}) & \mu a\pi_{C} & \mu b\pi_{G} & \mu c\pi_{T}\\ \mu g\pi_{A} & -\mu(g\pi_{A} + d\pi_{G} + e\pi_{T}) & \mu d\pi_{G} & \mu e\pi_{T}\\ \mu h\pi_{A} & \mu j\pi_{C} & -\mu(h\pi_{A} + j\pi_{C} + f\pi_{T}) & \mu f\pi_{T}\\ \mu i\pi_{A} & \mu k\pi_{C} & \mu l\pi_{G} & -\mu(i\pi_{A} + k\pi_{C} + l\pi_{G}) \end{array}\right) \end{aligned} $$

where this matrix specifies the rate of change from the nucleotide in row i to the nucleotide in column j. The nucleotides are in the order A, C, G, T. The stationary frequencies of the nucleotides ($\pi_{A}$, $\pi_{C}$, $\pi_{G}$, $\pi_{T}$) are obtained by letting the substitution process run for a very long time.
The instantaneous conversion rate matrix describes the rate of substitutions over an infinitesimally short time, but we need to calculate the probabilities of change over a finite period of time. The probability matrix can then be calculated as follows.
$$ P(t) = e^{Qt} $$
where Q is the instantaneous rate matrix, t is the branch length.
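The matrix exponential above is easy to compute numerically; a minimal sketch follows. The exchangeabilities and frequencies are arbitrary placeholders, and the symmetric GTR-style parameterization shown is one common choice rather than necessarily the paper's exact model.

```python
# Build a time-reversible rate matrix Q and compute P(t) = expm(Q * t).
import numpy as np
from scipy.linalg import expm

pi = np.array([0.3, 0.2, 0.2, 0.3])          # stationary freqs (A, C, G, T)
rates = {"AC": 1.0, "AG": 2.0, "AT": 1.0,    # symmetric exchangeabilities
         "CG": 1.0, "CT": 2.0, "GT": 1.0}

Q = np.zeros((4, 4))
pairs = [(0,1,"AC"), (0,2,"AG"), (0,3,"AT"), (1,2,"CG"), (1,3,"CT"), (2,3,"GT")]
for i, j, key in pairs:
    Q[i, j] = rates[key] * pi[j]             # rate i -> j
    Q[j, i] = rates[key] * pi[i]             # rate j -> i
np.fill_diagonal(Q, -Q.sum(axis=1))          # rows must sum to zero

P = expm(Q * 0.5)                            # branch length t = 0.5
assert np.allclose(P.sum(axis=1), 1.0)       # each row is a distribution
print(P.round(3))
```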
For a variety of evolutionary trees, we can calculate the likelihood of each phylogenetic tree. We need to consider transitions between an external node and an internal node, as well as between two internal nodes. For a specific site, we can calculate the likelihood of a phylogenetic tree as follows.
$$L = \sum_{y}\pi_{y_{2s-1}}\prod_{k=1}^{s}p_{y_{\sigma(k)},x_{k}}(v_{k})\prod_{k=s+1}^{2s-2}p_{y_{\sigma(k)},y_{k}}(v_{k}) $$
where x and y are the external and internal nodes, respectively, σ(k) is the index of the parent node of k, and v k is the branch length between y σ(k) and x k /y k . The external nodes are the s input sequences, i.e., s species; by graph theory, a rooted binary tree over s leaves then has s−1 internal nodes, for a total of 2s−1 nodes.
Log-likelihood
We assume that all sites are independent of each other. We can calculate the likelihood of each site [22] and then multiply these together to get the final likelihood of the phylogenetic tree.

We enumerate all possible assignments of states to the internal nodes and sum their probabilities. For a specific site j, this sum is the site likelihood, denoted by L j . The log-likelihood over all sites can then be calculated as follows.
$$\ln L = \sum_{j=1}^{N}\ln L_{j} $$
where N is the sequence length, i.e., the total number of sites.
Bayesian inference
We can use Bayesian inference [23] to produce the posterior probability of the i-th phylogenetic tree, τ i , as follows.
$$f(\tau_{i}|X) = \frac{f(X|\tau_{i})f(\tau_{i})}{\sum_{j=1}^{B(s)}f(X|\tau_{j})f(\tau_{j})} $$
where f(τ i |X) is the posterior probability of τ i , f(X|τ i ) is the likelihood of τ i , and f(τ i ) is the prior probability of τ i . B(s) is the number of all possible trees.
Selecting phylogenetic tree
Typically, the posterior probability of phylogenies cannot be calculated analytically, but it can be approximated by sampling phylogenetic trees from the posterior probability distribution.
Markov chain Monte Carlo (MCMC) [24] can be used to sample phylogenies according to their posterior probabilities. The Metropolis-Hastings-Green (MHG) algorithm is an MCMC method that has been used successfully to approximate posterior probabilities of trees. The MHG algorithm constructs a Markov chain whose stationary distribution is the posterior probability. The current state is denoted as τ, and a new state is proposed as $\tau'$. The new state is accepted with the following probability.
$$\begin{aligned} R & = min\left(1, \frac{f(\tau^{'}|X)}{f(\tau|X)} \times \frac{f(\tau|\tau^{'})}{f(\tau^{'}|\tau)} \right) \\ & = min\left(1, \frac{f(X|\tau^{'})f(\tau^{'})/f(X)}{f(X|\tau)f(\tau)/f(X)} \times \frac{f(\tau|\tau^{'})}{f(\tau^{'}|\tau)}\right) \\ & = min\left(1, \frac{f(X|\tau^{'})}{f(X|\tau)} \times \frac{f(\tau^{'})}{f(\tau)} \times \frac{f(\tau|\tau^{'})}{f(\tau^{'}|\tau)}\right) \end{aligned} $$
One important problem with the MCMC method is that it may return only a locally optimal result rather than the global optimum. As shown in Fig. 1, if the current state is at the peak of T 1, then, because of the jump decision, the probability of the next state must be less than that of the current state; the MCMC method may therefore settle on T 1 but miss the better T 2.
The Markov chain Monte Carlo method may return only a locally optimal result, in the range of T 1 or T 3
Multi-chain Markov chain Monte Carlo
When the distribution is flattened, Multi-Chain Markov Chain Monte Carlo (MCMCMC) can more easily get down from the peak of a local optimum and then explore more states. We run one cold chain, and the remaining heated chains are obtained through heat values. The heat value is obtained as follows.
$$ \beta_{i} = \frac{1}{1 + c(i-1)} $$
where c is the heat coefficient, chosen according to the specific experimental data, and i is the chain number. The state value of chain i is calculated as $f_{i}(s) = f_{1}(s)^{\beta_{i}}$. Since $\beta_{i} \le 1$, the heated distributions are flatter, as shown in Fig. 2.
Cold chain and heated chains for obtaining the global optimum: red points mark heated states and blue points mark cold states
Exchange occurs between two selected chains, and the exchange rate is determined as follows.
$$ R = \frac{f_{i}(s_{j})f_{j}(s_{i})}{f_{i}(s_{i})f_{j}(s_{j})} $$
where $s_{i}$ is the state of chain $i$, and $f_{i}(s)$ is the value of state $s$ under chain $i$'s heated distribution. When R is greater than or equal to 1, the exchange is always made; when R is less than 1, the exchange is made with probability R.
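A minimal sketch of the heating and swap rules just described is given below, with a toy one-dimensional target standing in for the tree posterior; the heat coefficient and chain count are arbitrary.

```python
# Heated chains evaluate the target raised to beta_i; two chains swap states
# with the acceptance ratio R defined above.
import numpy as np

rng = np.random.default_rng(5)
c, n_chains = 0.5, 4
betas = 1.0 / (1.0 + c * np.arange(n_chains))   # first chain (beta = 1) is cold

def f(s, beta):                 # heated target: f_i(s) = f_1(s) ** beta_i
    return np.exp(-0.5 * s**2) ** beta

states = rng.normal(0, 3, n_chains)
i, j = rng.choice(n_chains, size=2, replace=False)   # chains picked to swap
R = (f(states[j], betas[i]) * f(states[i], betas[j])) / \
    (f(states[i], betas[i]) * f(states[j], betas[j]))
if R >= 1 or rng.random() < R:
    states[i], states[j] = states[j], states[i]      # exchange the states
print("betas:", betas.round(3), "| swap ratio R =", round(float(R), 3))
```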
Conscious detection
The correlation model of phylogenetic analysis [9] assumes that the phylogenetic tree is built on homogeneous data; therefore, a large deviation arises in the presence of heterogeneous data. We use conscious detection to address compositional heterogeneity: we analyze multiple re-samplings of the data and compare the trees obtained from the estimated values. We extract partial data from the original data to form new data sets. Hundreds of such data sets are used to generate phylogenetic trees, from which we obtain the support rate of each branch of the phylogenetic tree generated from the actual data.
For an m×n data matrix, we select a random number from 1 to n, and take the column corresponding to this random number as the first column of the re-sampled data; we then repeat this step to obtain the second column, and so on. After n such selections, we get a final data set of the same length as the original. For each obtained data set, we infer the phylogenetic tree according to the phylogenetic analysis above. Finally, we get N phylogenetic trees and their posterior probabilities, and analyze the genetic information.
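A minimal sketch of this column re-sampling step follows; the alignment is placeholder data, and the downstream tree building is only indicated by a comment.

```python
# Draw n alignment columns with replacement to build each pseudo-replicate.
import numpy as np

rng = np.random.default_rng(6)
alignment = rng.choice(list("ACGT"), size=(5, 12))   # m taxa x n sites

def bootstrap_replicate(aln):
    cols = rng.integers(0, aln.shape[1], size=aln.shape[1])
    return aln[:, cols]       # same length as the original data set

replicates = [bootstrap_replicate(alignment) for _ in range(100)]
# Each replicate is then run through the phylogenetic analysis; the support
# of a branch is the fraction of the replicate trees that contain it.
print(replicates[0][:, :5])
```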
Our method is evaluated on two sets of experimental data, a group of bacterial 16S ribosomal RNA gene data, and a group of genetic data with five homologous species.
Experimental environment
We use a ThinkStation S30 workstation; all programs are run on the Ubuntu 14.04 64-bit operating system, with an Intel Xeon E5-2620 6-core, 12-thread processor and 32 GB of DDR3 1333 MHz memory. We also use software such as CLUSTALX 2.0 [25, 26] for multiple sequence alignment and JMODELTEST 2.17 for simulation tests. The experimental data are from the National Center for Biotechnology Information (NCBI) database.
Compositional heterogeneity in bacterial 16S genes
Our development system is applied to a problematic data set of bacterial 16S genes [27]: Deinococcus, Thermus, Bacillus, Thermotoga, and Aquifex. Specific information is shown in Table 1.
Table 1 Bacterial 16S genes: Deinococcus, Thermus, Bacillus, Thermotoga, and Aquifex
Our method produces a phylogenetic tree for the 16S genes. We obtain the predicted tree (Deinococcus, (Aquifex, Thermotoga), (Thermus), Bacillus), as shown in Fig. 3. As we can see, Thermotoga and Aquifex are connected together, and Bacillus and Deinococcus are connected together.
Predicted evolutionary tree on Bacterial 16S Genes
However, other biological evidence on the actual evolutionary relationship supports the phylogenetic tree ((Aquifex, Thermotoga), (Deinococcus, Thermus), Bacillus), as shown in Fig. 4.
Actual evolutionary tree on Bacterial 16S Genes
Here, we re-sample 100 data sets and construct one phylogenetic tree for each. The trees from 67 of the data sets agree with the actual evolutionary relationship, as shown in Table 2. Based on conscious detection, we can correct the experimental data in order to get the actual phylogenetic tree.
Table 2 Experiment results of our method with conscious detection on bacterial 16S genes
Homologous experiment
We adopt homologous gene sequences to construct the evolutionary tree and find the evolutionary relationship. We use the albumin and c-myc mRNA genes of five species [28]: fish (Actinopterygii, Salmo salar), frog (Amphibia, Xenopus laevis), bird (Aves, Gallus gallus), rodent (Rodentia, Rattus norvegicus) and human (Primates, Homo sapiens), as listed in Table 3.
Table 3 Homologous data of albumin genes and c-myc mRNA genes
Our method produces similar results on the albumin and c-myc mRNA genes. We obtain the tree (frog, (human, rodent), (bird), fish), as shown in Fig. 5. As we can see, human and rodent are connected together, and frog and fish are connected together. The results on the albumin and c-myc mRNA genes agree with the actual evolutionary relationship.
Evolutionary tree on albumin and c-myc mRNA genes
Xanthine dehydrogenase from drosophila
We analyze the rooting of the Drosophila saltans and Drosophila willistoni groups by outgroup rooting with the Xdh gene [29]. Based on morphology, as well as on the deletion of an intron in the willistoni group-specific Adh gene, the most credible root is at position r 1 in Fig. 6. The outgroup consists of D. virilis, D. pseudoobscura and D. melanogaster. When only the ingroup is used, an acceptable phylogeny can be generated, consistent with the known relationships derived from morphological characters. When outgroup taxa are included in the analysis, the root position of the ingroup becomes unstable, depending on the model or method. This situation results from compositional differences, especially those between the ingroup and outgroup taxa.
Rooting Drosophila saltans and willistoni groups
Four different roots, indicated by positions r 1–r 4 in Fig. 6, i.e., the points where the outgroup attaches to the ingroup, are found by various methods. Here, the overall root of the entire analysis and the outgroup root position can be distinguished from each other, numbered as in Fig. 6. When accommodating the heterogeneous composition, our model recovers the outgroup root position r 1. A distance-based analysis can also overcome compositional heterogeneity, finding the preferred root position r 1. We performed model selection on these data using the tree rooted at position r 1, with the expectation that the choice of model is independent of the root. A search under the GTR+SS model using PAUP finds a tree rooted at position r 2, and a Bayesian analysis using MrBayes also finds a tree rooted at position r 2.
In this paper, maximum likelihood, Bayesian inference and multi-chain Markov chain Monte Carlo sampling are used to build and select the globally optimal phylogenetic tree. In addition, the compositional heterogeneity problem is addressed by using conscious detection. When evaluated on two sets of experimental data, our method is efficient and accurate in generating phylogenetic trees and detecting compositional heterogeneity.
DNA: Deoxyribonucleic acid
mRNA: Messenger RNA
MP: Maximum parsimony
ML: Maximum likelihood
ME: Minimum evolution
MCMCMC: Multi-chain Markov chain Monte Carlo
MCMC: Markov chain Monte Carlo
MHG: Metropolis-Hastings-Green
Baxevanis AD, Ouellette BF. Bioinformatics: a practical guide to the analysis of genes and proteins: John Wiley & Sons; 2004.
Eck RV, Dayhoff MO. Atlas of protein sequence and structure. Washington: National Biomedical Research Foundation; 1966.
Fitch WM. Toward defining the course of evolution: Minimum change for a specific tree topology. Syst Biol. 1971; 20(4):406–16.
Felsenstein J. Evolutionary trees from dna sequences: A maximum likelihood approach. J Mol Evol. 1981; 17(6):368–76.
Yang Z. Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods. J Mol Evol. 1994; 39(3):306–14.
Edwards AWF, Cavalli-Sforza LL. Reconstruction of evolution. Heredity. 1963; 18:553.
Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol Biol Evol. 1987; 4(4):406–25.
Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. Mega5 : molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 2011; 28(10):2731–9.
Ronquist F, Huelsenbeck JP. Mrbayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics. 2003; 19(12):1572–4.
Stamatakis A. Raxml-vi-hpc: maximum likelihood-based phylogenetic analyses with thousands of taxa and mixed models. Bioinformatics. 2006; 22(21):2688–90.
Lockhart P, Steel M, Hendy M, Penny D. Recovering evolutionary trees under a more realistic model of sequence evolution. Mol Biol Evol. 1994; 11:605–12.
Larget B, Simon DL. Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees. Mol Biol Evol. 1999; 16:750–9.
Jukes TH, Cantor CR, Munro HN. Evolution of protein molecules. Mammal Protein Metab. 1969; 3(21):132.
Gibbs AJ, Mcintyre GA. The diagram, a method for comparing sequences. Eur J Biochem. 1970; 16(1):1–11.
Smith TF, Waterman MS. Identification of common molecular subsequences. J Mol Biol. 1981; 147(1):195–7.
Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970; 48(1):443–53.
Higgins DG, Sharp PM. Clustal: a package for performing multiple sequence alignment on a microcomputer. Gene. 1988; 73(1):237–44.
Higgins DG, Bleasby AJ, Fuchs R. Clustal v: improved software for multiple sequence alignment. Comput Appl Biosci CABIOS. 1992; 8(2):189–91.
Larkin MA, Blackshields G, Brown NP, Chenna R, McGettigan PA, McWilliam H, Valentin F, Wallace IM, Wilm A, Lopez R, Thompson JD, Gibson TJ, Higgins DG. Clustal w and clustal x version 2.0. Bioinformatics. 2007; 23(21):2947–8.
Ranwez V, Gascuel O. Improvement of distance-based phylogenetic methods by a local maximum likelihood approach using triplets. Mol Biol Evol. 2002; 19(11):1952–63.
Fitch WM, Margoliash E. Construction of phylogenetic trees. Science. 1967; 155(3760):279–84.
Lewis PO. A likelihood approach to estimating phylogeny from discrete morphological character data. Syst Biol. 2001; 50(6):913–25.
Swofford DL. PAUP (version 3.0): phylogenetic analysis using parsimony. Ill Nat Hist Surv Champaign, Ill. 1989;9.
Larget B, Simon D. Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees. Mol Biol Evol. 1999; 16(6):750.
Jeanmougin F, Thompson JD, Gouy M, Higgins DG, Gibson TJ. Multiple sequence alignment with clustal x. Trends Biochem Sci. 1998; 23(10):403–5.
Thompson JD, Gibson TJ, Plewniak F, Jeanmougin F, Higgins DG. The clustal_x windows interface: Flexible strategies for multiple sequence alignment aided by quality analysis tools. Nucleic Acids Res. 1997; 25(24):4876–82.
Foster PG. Modeling compositional heterogeneity. Syst Biol. 2004; 53(3):485–95.
Huelsenbeck JP, Ronquist F. Mrbayes: Bayesian inference of phylogenetic trees. Bioinformatics. 2001; 17(8):754–5. https://academic.oup.com/bioinformatics/article/17/8/754/235132/MRBAYES-Bayesian-inference-of-phylogenetic-trees.
Tarrio R, Rodriguez-Trelles F, Ayala FJ. Tree rooting with outgroups when they differ in their nucleotide composition from the ingroup: The drosophila saltans and willistoni groups, a case study. Mol Phylogenet Evol. 2000; 16(3):344–9. doi:10.1006/mpev.2000.0813.
This research and this article's publication costs are supported by a grant from the National Science Foundation of China (NSFC 61402326), Peiyang Scholar Program of Tianjin University (no. 2016XRG-0009), and the Tianjin Research Program of Application Foundation and Advanced Technology (16JCQNJC00200).
Availability of data and material
About this supplement
This article has been published as part of BMC Systems Biology Volume 11 Supplement 4, 2017: Selected papers from the 10th International Conference on Systems Biology (ISB 2016). The full contents of the supplement are available online at https://bmcsystbiol.biomedcentral.com/articles/supplements/volume-11-supplement-4.
School of Computer Science and Technology, Tianjin University, 92 Weijin Road, Nankai District, Tianjin, People's Republic of China
Zehua Zhang, Kecheng Guo, Gaofeng Pan, Jijun Tang & Fei Guo
Department of Computer Science and Engineering, University of South Carolina, Columbia, USA
Jijun Tang
Zehua Zhang
Kecheng Guo
Gaofeng Pan
Fei Guo
ZZ, KG and FG conceived the study. KG and FG performed the experiments and analyzed the data. ZZ, GP and FG drafted the manuscript. All authors read and approved the manuscript.
Correspondence to Fei Guo.
Zhang, Z., Guo, K., Pan, G. et al. Improvement of phylogenetic method to analyze compositional heterogeneity. BMC Syst Biol 11 (Suppl 4), 79 (2017). https://doi.org/10.1186/s12918-017-0453-x
Phylogenetic analysis
Compositional heterogeneity
Could we possibly discover new extraterrestrial elements and minerals?
So, after dissolving a good portion of my brain by watching a ton of sci-fi, I started thinking about all of the strange new materials that we will supposedly have in the future. For example, the ever-present "di-lithium" in Star Trek. But, as I started thinking about it, I thought it was a little ridiculous (I know science-FICTION) that all these exotic elements could exist. No matter where a species comes from in the galaxy or universe, they would be working with the exact same Table of Elements as us. Furthermore, so long as they are similar to human-life, i.e. they breathe oxygen, need water, etc., and therefore live on similar planets, wouldn't the compounds be the same as well?
Let's say we are 250 million years into the future and have discovered superluminal space-flight at 100,000,000c. Say we cruise around to inhabitable planets throughout the galaxy and universe. Upon landing on these new worlds, the elements and compounds we find would be the same that occur on Earth, right? Would we still find diamonds, rubies, uranium, iron, etc.? Or is it possible that there are combinations of the elements yet unknown? Is it for certain that we have catalogued every element and isotope that occurs naturally? I understand that new compounds could be formed, but as far as the base elements and the naturally occurring minerals, have we found them all?
chemistry-in-fiction astrochemistry
Jimmy G.
I think you'll find Stack Exchange World Building pretty fascinating. – Vatsal Manot

Actually we flew 4 times faster than you say, but it was quite an achievement ;) – Mithoron

Could someone comment on different "elements" by means of metastable nuclear isomers? Obviously, with insanely long half-lives (I know nothing about nuclear physics; are those possible?). Could this exist somewhere out there? While the chemistry would probably not be that different, the spectroscopy could be. – TAR86

@TAR86 Better late than never! Nuclear isomers are just nuclei excited above their ground state, and much like atoms/molecules in excited electronic states, they are almost always very unstable. The key word is almost, however. I think you'll be fascinated to learn about tantalum-180m, the only metastable nucleus with significant natural occurrence. The table of radionuclides still has many gaps (especially metastable states), but it is unlikely any gaps hide very surprising species. – Nicolau Saker Neto
One thing which can be said with absolute certainty is that no species in the Universe (or even all their combined efforts) will even come close to exhausting the daunting enormity of chemical space. When you drag combinatorics into a problem, you can easily stumble on massive numbers, and this is the case here; it has been roughly calculated that there are on the order of $10^{60}$ "chemically reasonable" distinct molecules below a molar mass of $\mathrm{500\ g\ mol^{-1}}$! This means that almost all chemical substances which can exist, never will exist, be it from natural or artificial production.
No matter how far chemistry develops, we will always be interested in investigating the chemical composition of new samples, and will always find many new compounds doing so. Most of the time we'll find the same of what we do on Earth; there's no escaping the fact that $\ce{N_2}$ and $\ce{Al2O3}$ are very stable substances, for example, and can exist in a wide range of conditions. But there will always be something new for us waiting out there. Organic substances are tremendously varied thanks to the concatenability of carbon atoms. One might think inorganic substances are far less variable, but just the class of silicate minerals is enormous, and many polymorphs of the same substances can exist.
Having said that, at least one area of chemistry seems far closer to being exhausted; the elements themselves. There is no possibility of us having "missed" any element among the first 112, and we're still filling in the gaps at higher atomic numbers, but each heavier element discovered involves ever more complicated procedures and lower yields. Much of the experimentally determined properties of elements above $\mathrm{Z=104}$ come from the analysis of a tiny amount of atoms, as little as tens of them. Each new element is significantly more difficult to synthesize, and is less stable, which is hampering progress. We don't yet know very well how to get past $\mathrm{Z=122}$ or so even in principle, and are unsure how accessible the ultraheavy elements will prove to be. There is still great academic interest in several aspects of the elements and their isotopes, but current research deals with the production of very shortlived species, which have limited applicability.
In all, we will always find something new to explore, but whether we will always find something strange depends on what you mean by the word. There are still many, many substances to find which will display impressive biochemical properties, for example, and many lethal or crippling diseases today might still become curable in the future. But there is zero chance of finding a substance which makes magic real, which unlocks faster-than-light travel, which has endochronic properties or which can be used to make overunity engines and perpetual motion devices. Ultimately, Chemistry is a form of applied quantum electrodynamics, a subsection of Physics, and quantum electrodynamics is a very developed theory with incredible precision. Thus, there is little room for surprises, and they likely will not come from a chemical system unless it is very specially tailored, rather than randomly found in nature.
Nicolau Saker Neto
It is worth mentioning the hypothetical 'island of stability' explicitly (https://en.m.wikipedia.org/wiki/Island_of_stability). The prediction of stable, ultra-heavy, yet-to-be-discovered elements warrants this great question.
Optimization of a simple, accurate and low cost method for starch quantification in green microalgae
Tze Ching Yong1,
Chia-Sheng Chiu1 &
Ching-Nen Nathan Chen ORCID: orcid.org/0000-0002-9729-349X1
Botanical Studies volume 60, Article number: 25 (2019)
Lipids and starch are important feedstocks for bioenergy production. Genetic studies on the biosyntheses of lipids and starch in green microalgae have drawn significant attention recently. In these studies, quantifications of lipids and starch are required to clarify the causal effects. While lipids are assayed with similar procedures worldwide, starch in green microalgae has been measured using various methods with deficiencies in accuracy or high cost.
A simple, accurate and low cost procedure for routine quantification of starch in green microalgae was developed. This procedure consists of quick-freezing of the cells, solvent extraction of the pigments, 134 °C autoclaving and glucoamylase double digestions of starch, followed by a glucose assay using the dinitrosalicylic acid reagent. This procedure was optimized to quantify starch in small volumes of green microalgal culture. The accuracy of starch quantification using this procedure was 102.3 ± 2.5% (mean ± SD, n = 6), as indicated by using cornstarch as internal controls. The working protocol is available at http://dx.doi.org/10.17504/protocols.io.2mhgc36.
This quantification approach overcomes the current problems in starch quantification of green microalgae, such as inaccuracy and high cost. It provides an opportunity to compare the effects of genetic, physiological or cultivation manipulations on starch productivity in green microalgae elucidated in different labs, which is essential for studies aiming to enhance lipid productivity in microalgae.
Biodiesel is superior to bioethanol in terms of production considerations, and their raw materials are lipids and starch, respectively (Chisti 2008). Although some oleaginous green microalgae accumulate high levels of lipids in their cells, a significant drawback is that their energy reserves also include starch, which is less desirable as a feedstock for biofuel production. In the production of bioethanol, raw materials containing starch have to be hydrolyzed into glucose first, followed by anaerobic fermentation, centrifugation and distillation to produce and concentrate the bioethanol. In this process, one-third of the glucose carbon is lost and significant energy input is required. To produce biodiesel, in contrast, the storage lipid triacylglycerol simply undergoes transesterification, yielding glycerol and biodiesel.
It has been speculated that starch biosynthesis must be suppressed in order to enhance lipid productivity in green microalgae (Siaut et al. 2011). The rationale behind this thought is that the biosyntheses of the two kinds of molecules compete for the same precursor, 3-phosphoglycerate (3-PG). Genetic modification approaches have recently been taken to redirect metabolite flux in green microalgae. To verify whether these approaches reduce starch biosynthesis while enhancing lipid productivity, a simple, accurate and low cost method for starch quantification is required for routine measurements. While methods for lipid extraction and measurement have reached a consensus worldwide (based on organic solvent extraction followed by transesterification and GC analysis) (Bligh and Dyer 1959; Folch et al. 1957; Pan et al. 2011), starch measurement is still practiced in various ways in different labs. In 1991, Rose et al. compared six starch quantification methods that employed either perchloric acid extraction or starch-digesting enzymes, and demonstrated that the variation among these methods could reach 20 to 40% (Rose et al. 1991). Steps described in those methods are still adopted for microalgal starch assays today. In the recent microalgal literature, methods for starch quantification include acid hydrolysis of starch followed by color formation using anthrone or HPLC analysis of glucose (Branyikova et al. 2011; Kato et al. 2017), amylase/amyloglucosidase digestion of starch followed by a glucose oxidase reaction and spectrophotometry (Dragone et al. 2011), and assay kits from Sigma-Aldrich, USA (Cat. # SA20; USD 153 for 20 assays sold in the US; USD 284 in Taiwan) and Megazyme, Ireland (Cat. # K-TSHK, Euro 263 for 100 assays), respectively (Juergens et al. 2016; Soh et al. 2014). These different methods yield different accuracies. Starch quantification using acid hydrolysis at high temperature can overestimate the actual starch level in the cells, because the reaction also hydrolyzes other glucose-containing polysaccharides and glycoproteins. The approach using amylase/amyloglucosidase digestion followed by glucose oxidase actually requires a third enzyme, peroxidase, plus the chemical o-dianisidine to produce color for the spectrophotometric measurement; the three enzyme reactions compromise the simplicity and accuracy of this approach, in addition to raising its cost. The most costly methods involve commercial assay kits, which are unlikely to be adopted for routine assays.
To overcome these barriers, a simple, accurate and low cost procedure was developed for quantification of starch in green microalgae. In this procedure, a thermo-tolerant glucoamylase (EC 3.2.1.3, Tokyo Chemical Industry; more information available in the BRENDA database) and the chemical dinitrosalicylic acid are adopted (Miller 1959; Saqib and Whitney 2011; Wang et al. 1997). This procedure requires only a small amount of cell culture, which fits lab-scale microalgal cultivation well. The procedure and the verification of its accuracy are presented here.
Working protocol webpage in protocols.io
The working protocol is available at http://dx.doi.org/10.17504/protocols.io.2mhgc36.
Microalga and cultivation
Chlamydomonas reinhardtii UTEX 90, a wildtype strain, was purchased from the Culture Collection of Algae at the University of Texas at Austin, USA. The cells were propagated under continuous 150 μmol photon/m2/s white light at 25 °C in a modified Bold 3 N medium which contains 1.1 mM NaNO3, 0.05 mM K2HPO4, 0.16 mM KH2PO4, 0.17 mM CaCl2, 0.3 mM MgSO4, 0.43 mM NaCl and minerals including 6.56 µM FeCl3, 0.25 µM ZnSO4, 2.42 µM MnSO4, 5.69 nM CoSO4, 6.1 nM Na2MoO4, 1 nM Na2SeO3, 6.3 nM NiCl2, described in Table 2 of Berges et al. (2001).
Chemicals and enzyme
Corn starch (S5296), glucose (G5146), dinitrosalicylic acid (D0550), NaOH (S8045), and NaH2PO4 (S0751) were purchased from Sigma-Aldrich, USA. Glucoamylase (a.k.a. amyloglucosidase, EC 3.2.1.3) was purchased from Tokyo Chemical Industry, Japan (Cat. # M0035, from Rhizopus sp., about 6000 units/g, 25 g sold for USD 165 in Taiwan). This enzyme completely hydrolyzes soluble starch, amylose, and amylopectin (see in BRENDA database). Potassium sodium tartrate (131,729.1210) was purchased from PanReac AppliChem, Spain.
Glucose assay and calculation
Glucose was dehumidified and weighed using a high accuracy analytical balance (METTLER AT21, Columbus, OH, USA; readability to 5 μg). A 10 mM solution was prepared and stored at 4 °C. Serially twofold diluted glucose solutions, 0.5 mL each, were mixed with 2 mL DNS reagent (44 mM dinitrosalicylic acid, 1 M potassium sodium tartrate, and 0.5 M NaOH) separately and then heated in boiling water for 5 min. After cooling in tap water, the optical density at 540 nm (OD540) of each mix was measured. A standard curve was built from the glucose quantity in each mix against its OD540 (Fig. 1). Samples of 0.5 mL were treated and measured using the same procedure in parallel with the standards in each batch of measurements. The glucose quantity in each sample was calculated from the regression equation of the standard curve. The number of experimental replicates is indicated in each table. The results were analyzed using a t test with a 95% confidence interval.
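To make the calculation concrete, the following is a minimal sketch of the standard-curve regression and sample calculation described above, assuming a linear fit of OD540 against glucose quantity; the OD540 readings and variable names are illustrative, not values from this study:

```python
import numpy as np

# Serial twofold dilutions of the 10 mM stock: micromoles of glucose per 0.5 mL standard
std_glucose = np.array([5.0, 2.5, 1.25, 0.625, 0.3125])
std_od540 = np.array([1.62, 0.83, 0.41, 0.21, 0.10])  # hypothetical readings

# Fit the linear standard curve: OD540 = slope * glucose + intercept
slope, intercept = np.polyfit(std_glucose, std_od540, 1)

def glucose_from_od(od540):
    """Invert the regression to estimate the glucose quantity in a 0.5 mL sample."""
    return (od540 - intercept) / slope

sample_glucose = glucose_from_od(0.55)
# The net starch weight is then the glucose weight multiplied by 0.9
# (the 162/180 anhydroglucose ratio described in a later section).
```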
Fig. 1 A standard curve of the glucose assay generated using the dinitrosalicylic acid method
Complete disintegration of starch granules, enzymatic digestion and calculation
Each corn starch sample, weighed using the high accuracy analytical balance aforementioned in the quantities specified in Table 1, was mixed with 5 mL sodium phosphate buffer (50 mM, pH 5.0). The samples were disintegrated by autoclaving at 134 °C for 1 h. Glucoamylase powder was dissolved in the same buffer to make 100 units per mL. Two units of the glucoamylase (in 20 μL) were mixed with 0.5 mL of the autoclaved starch solution plus 0.5 mL of the phosphate buffer, and the enzymatic digestion was carried out at 50 °C overnight. A second digestion was carried out under the same conditions for 7 h by adding the same amount of enzyme to the mix. During method development (the pilot tests), each reaction was mixed with 20 μL KI-I2 reagent in a time-course series to monitor the remaining starch by spectrophotometry at OD590. The enzymatic reaction is shown below.
$$ \left(\text{Glucose in polymer}\right)_n + \left(n-1\right)\underset{M.W.=18}{\mathrm{H_2O}}\ \xrightarrow{\mathrm{Glucoamylase}\ \mathrm{at}\ 50\;^{\circ}\mathrm{C}}\ n\ \underset{M.W.=180}{\mathrm{Glucose}} $$
Table 1 Effects of autoclaving temperature and enzymatic digestion duration on the accuracy of starch measurement using this method
The glucose product was measured using the dinitrosalicylic acid (DNS) method described in the previous "Glucose assay and calculation" section. The net weight of the corn starch (glucose in polymer) was determined by multiplying the quantity of the glucose product by 0.9, the mass ratio of an anhydroglucose unit in the polymer (162) to free glucose (180).
Microalgal sample preparation—pigment extraction by using methanol-tetrahydrofuran
Ten milliliters of the day 5 microalgal culture (OD682 around 1.0) were harvested by swing bucket centrifugation (2600g, 3 min) at room temperature. The supernatant was discarded and the cells were rapidly transferred to a 2-mL screw cap microtube. After a brief high-speed centrifugation, the supernatant in the microtube was removed with a pipette and the cells were quickly frozen at − 15 °C in a mix of ice and crude sea salt. One milliliter of cold methanol/tetrahydrofuran (v/v = 1/3) was added to the frozen cells. The cells were soaked in the solvent for 30 min in the ice-salt mix and agitated occasionally to extract the pigments. After centrifugation at 16,000g at 4 °C for 10 min, the solvent was discarded and the pellet was dried at 65 °C for 1 h. The dried pellet was washed out repeatedly with the sodium phosphate buffer (50 mM, pH 5.0) and the final volume was adjusted to 5 mL. A small amount of corn starch, weighed using the high accuracy analytical balance aforementioned and listed in Table 2, was added to each suspension to serve as the internal control. This mix was autoclaved at 134 °C for 1 h. One mL of the autoclaved sample was smashed using a mini-beadbeater (BioSpec Products Inc., USA) to fully release the starch from the cells, and 0.5 mL of the smashed sample was mixed with 0.5 mL of the sodium phosphate buffer and then subjected to the double digestions aforementioned. After the double digestions, the sample was centrifuged again to precipitate any cell debris. Five hundred microliters of the supernatant were used for glucose measurement as described in the previous "Glucose assay and calculation" section.
Table 2 Accuracy of measuring the internal control cornstarch mixed with algal samples using this method
Estimate of the net weight of glucose polymers in the corn starch
As in crop grains, corn starch contains certain levels of water, protein and cellulosic fiber that affect the measurement of the net weight of starch (glucose polymer). Corn starch samples stored in the lab refrigerator, each weighing more than 0.3 g, were dried at 105 °C for 2 h in a ceramic crucible. The weight difference of each sample before and after drying was measured using a high accuracy analytical balance. The water content of the starch was determined to be 13.5 ± 1.3% (mean ± SD, n = 3). The contents of protein and fiber in corn starch were measured by the U.S. Department of Agriculture (https://ndb.nal.usda.gov/ndb/foods/show/305228), which together comprised 1.16% of the raw weight of corn starch. The amounts of other components such as lipids and minerals were negligible, as shown in our results and in the USDA database, and were thus ignored in the net weight calculation. Therefore, the net weight of starch (glucose polymer) was 85.3% of the raw weight of the corn starch used in this study (100% − 13.5% water − 1.16% protein and fiber ≈ 85.3%).
Effects of autoclaving temperature and duration of enzymatic digestion on starch degradation
Two parameters, autoclaving temperature and duration of enzymatic digestion, can affect the completeness of starch digestion by glucoamylase. Starch granules are formed by compacted glucose polymers in the cells of green algae and plants. The compact structure hinders enzymatic reactions and thus it has to be disintegrated before the enzyme can completely digest the glucose polymers. The effect of autoclaving temperature on the disintegration of starch granules was examined. In addition, the duration of the enzymatic digestion of dissolved glucose polymers is an important factor that determines the accuracy of starch measurement. This factor was also investigated in this study.
As shown in Table 1, the highest accuracy of the starch measurement using this procedure was achieved by autoclaving at 134 °C in conjunction with the glucoamylase double digestions (98.9 ± 0.9%, n = 5), followed by autoclaving at 134 °C with a single digestion (94.8 ± 2.2%, n = 5) and autoclaving at 121 °C with double digestions (93.5 ± 1.7%, n = 4; the difference between these two sets of measurements was not statistically significant), and lastly autoclaving at 121 °C with a single digestion (90.7 ± 1.7%, n = 4).
Estimate of the accuracy of measuring endogenous starch in algal cells
Unlike processed corn starch, microalgal cells contain pigments that could impede the starch measurement which is based on spectrophotometry at 540 nm absorption. Pigment extraction seems to be the best option to avoid this problem. The solvent methanol/tetrahydrofuran (v/v = 1/3) was employed to extract the pigments, and 30 min of extraction gave satisfactory results with the cells of the day 5 culture. After autoclaving at 134 °C and glucoamylase double digestions, the algal cells with or without the additional corn starch (the internal controls) were smashed and centrifuged to collect clear supernatant for the glucose assay. As shown in Table 2, the measurement accuracies of the additional corn starch (the internal controls) were close to 100% in the six tests using this procedure. The results suggest this procedure can achieve a high level of accuracy for the measurement of endogenous starch in green microalgal cells.
This simple and accurate procedure, validated with corn starch internal controls for microalgal starch quantification, is also low cost owing to the use of the thermo-tolerant glucoamylase. This enzyme is able to hydrolyze the α-1,6-glucosidic bonds in starch in addition to the α-1,4-glucosidic bonds (see the comments of the International Union of Biochemistry and Molecular Biology, IUBMB, on this group of enzymes in the BRENDA database, and the product information for this enzyme issued by Tokyo Chemical Industry Company). The cost of the enzyme per assay (4 units/assay) in this study was less than half a US cent, a dramatic difference compared to USD 14.2 using the Sigma-Aldrich starch assay kit and Euro 2.6 using the Megazyme kit purchased in Taiwan. The mass of the internal controls was measured using a high accuracy analytical balance, which gave readings very close to their true values. This provides the possibility of gauging the accuracy of the results obtained using this quantification method.
In the study of Rose et al. (1991), the "accuracy" of six starch quantification methods was compared, and up to 40% variation was found in the results obtained using those methods. As aforementioned, steps used in those six methods are still adopted today (Branyikova et al. 2011; Dragone et al. 2011). The accuracy of starch quantification thus remains a great concern. The six methods did not include internal controls or standards of purified starch in their assays. Therefore, the comparisons were actually about the precision and variation of the results measured using those methods; the true values of the starch in their samples were not known. It is therefore not possible to determine which of those six methods was more accurate than the others. The two commercial starch assay kits produced by the Sigma-Aldrich and Megazyme companies adopt the same principle, which is (1) starch degradation to glucose using amylase/amyloglucosidase; (2) glucose conversion to glucose-6-phosphate using hexokinase; (3) glucose-6-phosphate conversion to 6-phosphogluconate using glucose-6-phosphate dehydrogenase; and (4) measurement of the resulting NADH levels at OD340. The two assays do include starch standards, but not internal controls, to calibrate the results from measuring unknown samples. However, the interference from water content and other non-glucose components in the starch standards is ignored in the protocols of the two kits. Besides, the three enzyme reactions are also a concern: in general, more enzymatic reaction steps introduce more bias into the quantification.
This quantification procedure advances the starch assay one step further than other methods by using internal controls. In addition, this method has high sensitivity. The sensitivity of the DNS method described in this procedure is 28 μg of glucose per reaction at the low end of the standard curve (Fig. 1). One milliliter of green microalgal culture in this study contained starch that could generate much more than this quantity of glucose. Therefore, this procedure is well suited to lab-scale microalgal cultivation and starch measurement.
We found in this study that quick-freezing the cells at − 15 °C right after harvest had a positive influence on the accuracy of starch quantification. This might be due to starch degradation by endogenous microalgal enzymes at room temperature when the cells were agitated. The measurement accuracy of the internal controls being slightly over 100% on average in Table 2 was most likely due to pigments remaining in the samples. Therefore, a second solvent extraction might be necessary if the cell pellets of interest are still green after the first pigment extraction. Complete disintegration of starch granules and thorough digestion of the glucose polymers are crucial to achieving accurate measurement of starch. High temperature autoclaving at 134 °C and glucoamylase double digestions at 50 °C for more than 20 h yielded satisfactory results. High-speed centrifugation (16,000g at room temperature for 10 min) to precipitate the cell debris after the double digestions of the microalgal samples was required to obtain high accuracy of starch measurement.
Although it takes 2 days to quantify microalgal starch using this procedure, the work is not labor intensive, since most of the time in the 2 days is used for autoclaving (about 4 h from beginning to end) and enzyme digestion (one night plus 7 h on the second day). This procedure will be a great tool for biochemical and genetic engineering studies that involve starch biosynthesis or degradation in green microalgae.
All data generated or analyzed during the current study are included in this published article.
Berges JA, Franklin DJ, Harrison PJ (2001) Evolution of an artificial seawater medium: improvements in enriched seawater, artificial water over the last two decades. J Phycol 37:1138–1145
Bligh EG, Dyer WJ (1959) A rapid method of total lipid extraction and purification. Can J Biochem Physiol 37:911–917
Branyikova I, Marsalkova B, Doucha J, Branyik T, Bisova K, Zachleder V, Vitova M (2011) Microalgae—novel highly efficient starch producers. Biotechnol Bioeng 108:766–776
Chisti Y (2008) Biodiesel from microalgae beats bioethanol. Trends Biotechnol 26:126–131
Dragone G, Fernandes BD, Abreu AP, Vicente AA, Teixeira JA (2011) Nutrient limitation as a strategy for increasing starch accumulation in microalgae. Appl Energy 88:3331–3335
Folch J, Lees M, Sloane-Stanley GH (1957) A simple method for the isolation and purification of total lipids from animal tissues. J Biol Chem 226:497–509
Juergens MT, Disbrow B, Shachar-Hill Y (2016) The relationship of triacylglycerol and starch accumulation to carbon and energy flows during nutrient deprivation in Chlamydomonas reinhardtii. Plant Physiol 171:2445–2457
Kato Y, Ho SH, Vavricka CJ, Chang JS, Hasunuma T, Kondo A (2017) Evolutionary engineering of salt resistance Chlamydomonas sp. strains reveals salinity stress-activated starch-to-lipid biosynthesis switching. Bioresour Technol 245:1484–1490
Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31:426–428
Pan YY, Wang ST, Chuang LT, Chang YW, Chen CNN (2011) Isolation of thermo-tolerant and high lipid content green microalgae: oil accumulation is predominantly controlled by photosynthesis efficiency during stress treatments in Desmodesmus. Bioresour Technol 102:10510–10517
Rose R, Rose CL, Omi SK, Forry KR, Durall DM, Bigg WL (1991) Starch determination by perchloric acid vs enzymes: evaluating the accuracy and precision of six colorimetric methods. J Agric Food Chem 39:2–11
Saqib AAN, Whitney PJ (2011) Differential behaviour of the dinitrosalicylic acid (DNS) reagent towards mono- and disaccharide sugars. Biomass Bioenergy 35:4748–4750
Siaut M, Cuine S, Cagnon C, Fessler B, Nguyen M, Carrier P, Beyly A, Beisson F, Triantaphylides C, Li-Beisson Y, Peltier G (2011) Oil accumulation in the model green alga Chlamydomonas reinhardtii: characterization, variability between common laboratory strains and relationship with starch reserves. BMC Biotechnol 11:7–15
Soh L, Montazeri M, Haznedaroglu BZ, Kelly C, Peccia J, Eckelman MJ, Zimmerman JB (2014) Evaluating microalgal integrated biorefinery schemes: empirical controlled growth studies and life cycle assessment. Bioresour Technol 151:19–27
Wang G, Michailides TJ, Bostock RM (1997) Improved detection of polygalacturonase activity due to Mucor piriformis with a modified dinitrosalicylic acid reagent. Phytopathology 87:161–163
The authors thank Prof. Keryea Soong for helpful discussion.
This work was financially supported by the Grant MOST 106-2221-E-110-068-MY2 from the Ministry of Science and Technology, Taiwan.
Department of Oceanography, National Sun Yat-sen University, Kaohsiung, 804, Taiwan
Tze Ching Yong, Chia-Sheng Chiu & Ching-Nen Nathan Chen
Tze Ching Yong
Chia-Sheng Chiu
Ching-Nen Nathan Chen
TCY and C-SC executed this study, interpreted the data, and reviewed this manuscript. C-NNC conceived and directed this work, analyzed the data and prepared this manuscript. All authors read and approved the final manuscript.
Correspondence to Ching-Nen Nathan Chen.
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Yong, T.C., Chiu, CS. & Chen, CN.N. Optimization of a simple, accurate and low cost method for starch quantification in green microalgae. Bot Stud 60, 25 (2019). https://doi.org/10.1186/s40529-019-0273-y
Accepted: 05 October 2019
Photosynthate partitioning
Starch quantification
ENTRNA: a framework to predict RNA foldability
Congzhe Su1,
Jeffery D. Weir2,
Fei Zhang3,
Hao Yan3 &
Teresa Wu1
RNA molecules play many crucial roles in living systems. The spatial complexity that exists in RNA structures determines their cellular functions. Therefore, understanding RNA folding conformations, in particular, RNA secondary structures, is critical for elucidating biological functions. Existing literature has focused on RNA design as either an RNA structure prediction problem or an RNA inverse folding problem where free energy has played a key role.
In this research, we propose a Positive-Unlabeled data-driven framework termed ENTRNA. In addition to free energy and commonly studied sequence and structural features, we propose a new feature, Sequence Segment Entropy (SSE), to measure the diversity of RNA sequences. ENTRNA is trained and cross-validated using 1024 pseudoknot-free RNAs and 1060 pseudoknotted RNAs from the RNASTRAND database, respectively. To test the robustness of ENTRNA, the models are further blind tested on 206 pseudoknot-free and 93 pseudoknotted RNAs from the PDB database. For pseudoknot-free RNAs, ENTRNA has 86.5% sensitivity on the training dataset and 80.6% sensitivity on the testing dataset. For pseudoknotted RNAs, ENTRNA shows 81.5% sensitivity on the training dataset and 71.0% on the testing dataset. To test the applicability of ENTRNA to long, structurally complex RNAs, we collect 5 laboratory synthetic RNAs ranging from 1618 to 1790 nucleotides. ENTRNA correctly predicts the foldability of 4 of these 5 RNAs.
In this article, we reformulate the RNA design problem as a foldability prediction problem which is to predict the likelihood of the co-existence of a sequence-structure pair. This new construct has the potential for both RNA structure prediction and the inverse folding problem. In addition, this new construct enables us to explore data-driven approaches in RNA research.
Ribonucleic acid (RNA), as an emerging nanoscale building block, is regarded as one of the most promising candidates for creating nano-architectures and nano-devices for therapeutic and diagnostic purposes. Due to its unique biochemical properties and functionalities [1], such as catalysis of metabolic reactions [2], regulation of gene expression [3], and organization of proteins into large machineries [4], RNA has attracted great attention from both academia and industry, resulting in broad applications. For example, success in clinical trials has shown that RNA-based therapeutics hold great potential to overcome the limitation of existing medicines that can only target a limited number of proteins [5]. To fully explore and utilize RNA functions, the cornerstone is to study the multiple levels of RNA structure, including the linear ribonucleotide sequence (primary structure), the 2D fold based on canonical Watson-Crick and wobble base-pairings (secondary structure), the 3D fold (tertiary structure), and the complex spatial arrangement of multiple folded molecules (quaternary structure) [6]. The folding of RNA molecules is broadly considered a hierarchical process in which the secondary structure folds first and represents the most relevant characteristic of an RNA molecule [7]. Therefore, studying the RNA secondary structure is one of the fundamental steps towards understanding function-related RNA structures.
In general, RNA secondary structure research falls into two categories: the RNA structure prediction problem, which is to predict the folding result of base pairs given the RNA sequence; and the RNA inverse folding problem, which is to identify an appropriate assignment of nucleotides so that a targeted RNA secondary structure can be folded with certainty. For the RNA structure prediction problem, researchers have developed a variety of computational approaches to increase prediction accuracy. One early effort uses the comparative approach to infer a consensus secondary structure by aligning the given sequence with other existing RNA sequences. This requires large collections of RNA sequences for the analysis, and a major challenge of this approach is the limited availability of RNAs [8]. An alternative is using a thermodynamic model to predict the secondary structure, based on the assumption that a structure with lower free energy tends to be more stable. Therefore, an optimization problem with the objective of minimizing free energy is constructed to identify the structure with minimum free energy (MFE). A number of research tools have been developed to serve this purpose. One tool is Mfold [9], which employs a dynamic programming algorithm to predict the RNA secondary structure with MFE. While promising, the prediction accuracy of Mfold is less than satisfactory, leading to research efforts to improve its performance. For example, RNAstructure [8] incorporates constraints from experimental data to improve prediction accuracy. Recognizing the uncertainties in the folding process, RNAfold [10] provides estimated probabilities of base pairs. For the RNA inverse folding problem, the objective is to identify sequences minimizing a distance metric (e.g., the number of common base pairs) between the structure folded from the designed sequence and the target secondary structure. One of the first tools is RNAinverse [11], in which a random sequence is generated and changes to the nucleotide assignment are made locally to minimize the dissimilarity between the structures. Such a local search strategy may be trapped in a local optimum, and the designed sequences are highly dependent on the initial seed solution. To address this issue, RNA-SSD [12] assigns initial bases probabilistically, attempting to avoid local trapping. incaRNAtion [13] uses global sampling and weighted sampling techniques to avoid the seed bias of local search. In antaRNA [14], ant colony optimization, an efficient bio-inspired optimization algorithm, is implemented to expedite the search process with high accuracy. All of the algorithms reviewed assume the designed sequence will fold into its MFE structure, which is used to calculate the distance to the target secondary structure.
As noted, previous research in both structure prediction and inverse folding has relied heavily on free energy as the metric to evaluate the stability of RNA structures [9,10,11,12,13,14,15,16]. The hypothesis is that, given an RNA sequence, the secondary structure with the MFE is the stable structure into which it will fold with the highest likelihood and is thus considered "optimum"; and given a structure, the sequence shall be assigned nucleotides such that the MFE is achieved. To test this hypothesis, we collected 167 existing pseudoknot-free RNA sequences from the Protein Data Bank (PDB) and observed that only 53 RNAs (32%) adopt their MFE secondary structures. This finding indicates that MFE alone may not be a sufficient condition to guide RNA design. In other words, not all existing RNA structures fold at the MFE. Often, an RNA folds at an energy level close to, but not at, the MFE; we call these suboptimal RNAs. As indicated in Laing [6], an RNA may have a large number of alternative suboptimal folds, which is known as the multi-conformation RNA issue.
Recognizing the limitations of MFE algorithms, some research has proposed generating a set of possible structures with near-optimal free energy instead of the MFE secondary structure alone. For example, RNAsubopt provides all secondary structures within δ of the MFE [28]. However, the number of possible structures grows exponentially as δ increases. Others have developed alternative metrics calculated from partition functions to evaluate the accessibility of possible secondary structures; these include IPknot, Sfold [29], RNAshapes [30] and RNA profiling [31]. However, although efforts in the field have focused on exploring different metrics, researchers have not reached a consensus on which metrics should be broadly adopted.
In this research, we introduce a new concept: RNA foldability. Let the RNA structure prediction problem be considered as sequence → structure*, and the RNA inverse folding problem as structure → sequence*. Our foldability is defined as l(structure, sequence), which measures the likelihood of the co-existence of the structure–sequence pair. One motivation for developing this new construct is that it can potentially be applied to both the structure prediction and inverse folding problems. For example, given a sequence, a number of possible structures could be folded, and the foldability l(structure, sequence) can be used to identify the structure with high likelihood. For an inverse folding problem, a number of sequence candidates can first be identified for a targeted structure, and again the foldability l(structure, sequence) can be used to identify the sequence most likely to fold into the structure. A second motivation of this foldability concept is that it enables us to explore data-driven approaches to RNA research. By extracting features from both sequence and structure, multi-parametric machine learning models can be developed to obtain the foldability measures. To achieve this, in conjunction with free energy and other commonly used RNA structural design features (e.g., GC content and base pair percentage), we introduce a new metric to evaluate the diversity of RNA sequence segments termed Sequence Segment Entropy (SSE). A Positive-Unlabeled (PU) learning based data-driven framework, ENTRNA, is developed using these features to predict RNA foldability. After training on both pseudoknot-free and pseudoknotted RNAs, ENTRNA shows promising accuracy in predicting RNA foldability; specifically, it correctly identifies roughly 80% of pseudoknot-free and pseudoknotted RNAs as foldable into their desired structures.
There are two main contributions of our proposed ENTRNA. First, RNA design is reformulated as a foldability prediction problem (l(structure, sequence)) which evaluates the success rate of a given pair of sequence and structure. This new formulation can fundamentally tackle the challenging issues in RNA design, namely that one RNA sequence may fold into multiple structures, and one RNA structure may have multiple sequence assignments. The second contribution lies in the new metric assessing RNA sequence segment diversity. In the remainder of the paper, ENTRNA is presented in Section 2, followed by validation experiments in Section 3. The conclusion and discussion are drawn in Section 4.
RNA foldability prediction problem
Most existing computational algorithms formulate RNA secondary structure prediction as a deterministic optimization problem which aims to find the globally optimal secondary structure for the given sequence. This provides a single best guess for the secondary structure under the assumption that the RNA sequence will only fold into the optimal secondary structure (i.e., the MFE secondary structure). Unfortunately, such an assumption has notable limitations, as some RNAs (e.g., highly structured ribosomal RNAs) often exist in multiple conformations [17]. Deterministic optimization approaches fail to discover multiple RNA secondary structures.
To address the multi-conformation RNA challenge, we look at RNA design from a different perspective. Specifically, we propose to develop a predictive model to estimate the likelihood l(structure, sequence) of a given RNA sequence folding into a given secondary structure. We call this approach RNA foldability prediction. RNA foldability prediction fundamentally differs from RNA secondary structure prediction and the RNA inverse folding problem, as the latter require only an RNA sequence or a secondary structure as a single input. RNA foldability prediction requires both a sequence and a secondary structure to be provided. As such, it enables foldability evaluation of one sequence against several potential secondary structures. Similarly, it can be used to evaluate one secondary structure against its multiple sequence candidates, which is the RNA inverse folding problem.
ENTRNA for RNA foldability prediction
RNA foldability prediction can be regarded as a classification problem. To train a classification model, both successful and failed examples are needed. In the RNA foldability prediction problem, any reported successful synthetic RNA or naturally existing RNA can be regarded as a positive example. However, failed RNAs have rarely been reported in the literature. To address this issue, we propose applying the Positive-Unlabeled (PU) learning technique to fill in the failed examples. Two different sets of RNA features are defined and extracted for pseudoknot-free and pseudoknotted RNAs, respectively. By mapping RNAs into a length-free feature space, we can fully learn from and explore all existing RNAs together. In addition, a new metric is proposed to evaluate the diversity of RNA sequences (see Section 2.2.2). Together with free energy (see Section 2.2.3), base pairing probability (see Section 2.2.4) and other RNA domain knowledge driven features (Section 2.2.5), ENTRNA is developed as a data-driven framework to predict RNA foldability.
Generate training dataset for PU learning
PU learning was originally used to solve the text classification problem, which is to assign predefined labels to a new document [18]. Two datasets are needed for training: a positively labeled training set P and an unlabeled mixed set U. The positive set P contains positive examples; the mixed set U is assumed to contain both positive and negative examples, but carries no explicit class labels. Generally, PU learning is a two-step approach. First, it identifies a set of reliable negative examples from the mixed set U based on knowledge of the positive set P. Next, it builds predictive models on those positive and "negative" examples iteratively and selects the best model among them.
In the RNA foldability prediction problem, a pair consisting of an existing RNA sequence and its corresponding secondary structure is considered a successful example in the positive training set P. The challenge lies in the unlabeled dataset U, as it is not publicly available. We therefore generate synthetic RNAs computationally as the examples composing U. The rationale is that synthetic sequences generated by computational algorithms are intended to fold into the targeted secondary structures but have not been empirically validated through lab testing, and thus can be treated as part of the unlabeled dataset U.
In this research, we use the secondary structures existing in P as seeds to generate possible sequences. For a given secondary structure in P, instead of randomly assigning sequences, we generate a number of possible sequences satisfying three constraints. The first two constraints are the same as in Williams et al. [19]: base pairing and repetition. The base-pairing constraint states that only Watson-Crick and G-U base pairs are valid. The repetition constraint sets the longest run of identical bases allowed; for example, if runs of four identical bases are forbidden, then AAAA may not appear in the sequence, though AAAC can. Given the unique properties of RNA folding, a third constraint on GC percentage is added, that is, the minimum and maximum percentage of bases in the structure that must be either guanine (G) or cytosine (C). The set of sequences generated for the given structures constitutes our unlabeled dataset U; a sketch of this generation step is given below.
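As an illustration, the following is a rough sketch of the constrained sequence generation, assuming the seed structure is given in dot-bracket notation; the helper names and parameter values (repetition limit, GC window) are illustrative assumptions, not the implementation used in this study:

```python
import random

VALID_PAIRS = [("G", "C"), ("C", "G"), ("A", "U"), ("U", "A"), ("G", "U"), ("U", "G")]

def pair_table(dot_bracket):
    """Map each paired position to its partner in a pseudoknot-free structure."""
    stack, pairs = [], {}
    for i, c in enumerate(dot_bracket):
        if c == "(":
            stack.append(i)
        elif c == ")":
            j = stack.pop()
            pairs[j], pairs[i] = i, j
    return pairs

def random_sequence(dot_bracket, max_run=3, gc_min=0.3, gc_max=0.7, tries=10000):
    """Rejection-sample a sequence obeying the three constraints in the text."""
    pairs = pair_table(dot_bracket)
    n = len(dot_bracket)
    for _ in range(tries):
        seq = [None] * n
        for i in range(n):
            if seq[i] is not None:
                continue                          # partner already assigned
            if i in pairs:                        # paired position: draw a valid pair
                b1, b2 = random.choice(VALID_PAIRS)
                seq[i], seq[pairs[i]] = b1, b2
            else:                                 # unpaired position: any nucleotide
                seq[i] = random.choice("ACGU")
        s = "".join(seq)
        gc = (s.count("G") + s.count("C")) / n
        no_long_runs = not any(b * (max_run + 1) in s for b in "ACGU")
        if gc_min <= gc <= gc_max and no_long_runs:
            return s
    return None  # no valid sequence found within the try budget
```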
Next, we apply PU learning to identify "reliable" negatives from U. Note that we use "reliable" instead of "true" negatives, as there is no ground truth with which to validate the negatives. We assume that "reliable" negatives are those furthest from the true positives in P, which are known a priori. For simplicity, we propose using the Euclidean distance between feature values (see Sections 2.2.2–2.2.5 for details on the features) to identify these negatives. Normalization is performed to eliminate the scaling differences among features. Let \( {f}_{u_i,j} \) and \( {f}_{p_k,j}^{\prime } \) denote the values of feature j for example ui from U and example pk from P, respectively. \( {d}_{u_i} \) is calculated as follows to measure the maximum distance between example ui and the positive set P:
$$ {d}_{u_i}=\underset{p_k\in P}{\max }\ {d}_{u_i,{p}_k} $$
$$ {d}_{u_i,{p}_k}=\sqrt{\sum_{j=1}^m{\left({f}_{u_i,j}-{f}_{p_k,j}^{\prime }\ \right)}^2} $$
where m is the number of features.
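A minimal sketch of this selection step follows, assuming the feature matrices have already been normalized; the function and variable names are illustrative:

```python
import numpy as np

def reliable_negatives(U, P, n_neg):
    """Return indices of the n_neg unlabeled examples furthest from the positives.

    U: (u x m) feature matrix of the unlabeled set; P: (p x m) of the positive set.
    """
    diff = U[:, None, :] - P[None, :, :]      # shape (u, p, m)
    dist = np.sqrt((diff ** 2).sum(axis=2))   # Euclidean distances d_{u_i, p_k}
    d_max = dist.max(axis=1)                  # d_{u_i}: max distance to P
    return np.argsort(d_max)[-n_neg:]         # examples furthest from P
```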
With true positives from P and "reliable" negatives from U, we are able to develop a classification model (see Section 2.2.5) to predict the foldability l(structure, sequence) for any structure–sequence pair.
ENTRNA feature: sequence segment entropy
Due to incomplete and inaccurate thermodynamic parameters, a great number of RNAs are trapped in suboptimal structures that are near the predicted global free energy minimum [6]. Meanwhile, a sequence is more likely to be trapped in a suboptimal secondary structure if it can form diverse secondary structures. Therefore, a new metric measuring secondary structure diversity is needed in addition to free energy.
Entropy, derived from thermodynamics and information theory [20], is used to measure the amount of uncertainty and disorder within a system. Since its inception, entropy has been applied to a diverse set of research fields, including structural RNA research. For example, conformational entropy is considered an important factor in protein-ligand discrimination [21]. Positional entropy has been introduced to measure the certainty of each nucleotide being unpaired [22]. However, all existing entropy-based metrics require the base pairing probabilities, which are calculated from free energy values; hence they still depend on thermodynamic parameters and are not applicable to pseudoknotted RNAs. Therefore, a metric that can handle pseudoknotted RNAs and is free of thermodynamic parameters is needed to evaluate structural diversity.
The k-mer concept has been widely used in bioinformatics research. For example, in genomics, k-mers have been applied to de novo assembly of large genomes from short read sequences [32] and to detecting mis-assemblies [33]. In RNA research, Sailfish, a k-mer based algorithm, was developed to quantify the abundance of RNA isoforms [34]. In this research, we introduce sequence segment entropy (SSE), motivated by the k-mer concept, to measure the diversity of RNA sequence segments. For generality, assume an RNA sequence of length n nucleotides (nt1, nt2, …, ntn), and let w be the segment size, i.e., the number of consecutive nucleotides in order. To derive the SSE, we need to evaluate the entire RNA sequence; thus, we use a moving window to list the segments. The segments of the RNA sequence can then be written as:
$$ {\boldsymbol{Seg}}_{\boldsymbol{w}}=\left[{Seg}_w^1,{Seg}_w^2,\dots, {Seg}_w^{n+1-w}\right], $$
$$ {Seg}_w^1=\left({nt}_1,{nt}_2,\dots, {nt}_w\right),{Seg}_w^2=\left({nt}_2,{nt}_3,\dots, {nt}_{w+1}\right),{Seg}_w^{n+1-w}=\left({nt}_{n+1-w},{nt}_{n+2-w},\dots {nt}_n\right). $$
Let SegUw be the set representing the collection of distinct segments; we have
$$ {\boldsymbol{SegU}}_{\boldsymbol{w}}=\left[{SegU}_w^1,{SegU}_w^2,\dots, {SegU}_w^s\right], where\ s=\left|{\boldsymbol{SegU}}_{\boldsymbol{w}}\right|. $$
Following the entropy calculation, we define Vent,w as:
$$ {V}_{ent,w}=-{\sum}_{i=1}^sp\left({SegU}_w^i\right){\log}_2p\left({SegU}_w^i\right) $$
$$ p\left({SegU}_w^i\right)=\frac{\#\mathrm{of}\ {SegU}_w^i\ occurence\ in\ {\boldsymbol{Seg}}_{\boldsymbol{w}}}{n+1-w\ }\ for\ i=1,\dots, s $$
Since the value range of SSE is highly dependent on the length of an RNA sequence, we normalize SSE as RVent, w:
$$ {RV}_{ent,w}=\frac{V_{ent,w}}{V_{ent,w}^{\ast }} $$
where \( {V}_{ent,w}^{\ast } \) is the maximum SSE for segment size w, which is proven to be:
$$ {V}_{ent,w}^{\ast }=\left\{\begin{array}{c}-{\log}_2\left(\frac{1}{n+1-w}\right)\ if\ n+1-w\le {4}^w\\ {}-b\ast \frac{a+1}{n+1-w}\ast {\log}_2\left(\frac{a+1}{n+1-w}\right)-\left({4}^w-b\right)\ast \frac{a}{n+1-w}\ast {\log}_2\left(\frac{a}{n+1-w}\right),o/w\end{array}\right. $$
$$ a=\left\lfloor \frac{n+1-w}{4^w}\right\rfloor, \kern0.5em b=\left(n+1-w\right)\ \mathit{\operatorname{mod}}\ {4}^w. $$
[Proposition 1] Suppose we have two sequences of the same size with probability density sets {p1, p2, p3, …, pn + 1 − w} and {p1 + ϵ, p2 − ϵ, p3, …, pn + 1 − w}, where p1 = p2 = … = pn + 1 − w = p > 0 and ϵ > 0. The first SSE minus the second SSE equals − 2plog2p + (p + ϵ)log2(p + ϵ) + (p − ϵ)log2(p − ϵ).
Since f(x) = − xlog(x) is a concave function, according to Jensen's inequality,
$$ {\displaystyle \begin{array}{l}\frac{1}{2}\left(-\left(p+\epsilon \right){\log}_2\left(p+\epsilon \right)-\left(p-\epsilon \right){\log}_2\left(p-\epsilon \right)\right)\\ {}=\frac{1}{2}\ast f\left(p+\epsilon \right)+\frac{1}{2}\ast f\left(p-\epsilon \right)\\ {}<f\left(\frac{1}{2}\ast \left(p+\epsilon \right)+\frac{1}{2}\ast \left(p-\epsilon \right)\right)\\ {}=f(p)=-p{\log}_2p\end{array}} $$
Rearranging gives (p + ϵ)log2(p + ϵ) + (p − ϵ)log2(p − ϵ) > 2plog2p, so the difference in Proposition 1 is positive and the SSE of the first sequence is greater than that of the second. Therefore, the segment probability distribution should be as uniform as possible to achieve the maximum SSE.
[Proof of maximum SSE] The total number of distinct sequence segments of size w is \( 4^w \), since any of the 4 nucleotides can be assigned to each position. Therefore, we have two cases depending on the cardinality of Segw.

In the case where \( n+1-w \le 4^w \), the most uniform probability density set occurs when all elements of Segw are unique, and each element of SegUw then has probability \( \frac{1}{n+1-w} \).

In the case where \( n+1-w > 4^w \), there must exist elements of Segw that are not unique. The most uniform probability density set occurs when Segw is partitioned into two groups of segments: the first group contains \( b=\left(n+1-w\right)\ \mathrm{mod}\ 4^w \) of the \( 4^w \) possible distinct segments, each occurring one more time than the segments of the remaining group of \( 4^w - b \) distinct segments, which all occur in equal amounts. The segments occurring in equal amounts must each occur exactly \( a=\left\lfloor \frac{n+1-w}{4^w}\right\rfloor \) times, giving them a probability of \( \frac{a}{n+1-w} \). Therefore, the probability for each of the b remaining segments must be \( \frac{a+1}{n+1-w} \).
Substituting the optimal probability density sets into Eq. (3), we get Eq. (6).
[Illustration Example on SSE] Suppose we have two RNA sequences:
$$ {\displaystyle \begin{array}{l}{\mathbf{seq}}_1=`\mathbf{GAAAAAAAAAAAAAAAAAAC}'\\ {}{\mathbf{seq}}_2=`\mathbf{GACCGUCGUGAGACAGGUUA}'\end{array}} $$
First, we calculate the scaled sequence segment entropy value of seq1, taking segment size 3 as an example:
$$ {\displaystyle \begin{array}{l}{\mathbf{Seg}}_3=\left[\mathrm{GAA},\mathrm{AAA},\mathrm{AAA},\dots, \mathrm{AAA},\mathrm{AAC}\right]\ \left(18\ \mathrm{segments}\right);\\ {}{\mathbf{SegU}}_3=\left[\mathrm{GAA},\mathrm{AAA},\mathrm{AAC}\right];\\ {}P\left(\mathrm{GAA}\right)=\frac{1}{18}=0.056;\kern0.5em P\left(\mathrm{AAA}\right)=\frac{16}{18}=0.889;\kern0.5em P\left(\mathrm{AAC}\right)=\frac{1}{18}=0.056;\\ {}{V}_{ent,3}=-\left(\frac{1}{18}{\log}_2\frac{1}{18}+\frac{16}{18}{\log}_2\frac{16}{18}+\frac{1}{18}{\log}_2\frac{1}{18}\right)=0.614;\\ {}a=\left\lfloor \frac{20+1-3}{4^3}\right\rfloor =0;\kern1em b=\left(20+1-3\right)\ \mathrm{mod}\ {4}^3=18;\\ {}{V}_{ent,3}^{\ast }=-{\log}_2\frac{1}{18}=4.170;\kern1em {RV}_{ent,3}=\frac{0.614}{4.170}=0.147.\end{array}} $$
Following the same steps, the RVent, 3 of seq2 is 0.947. The second sequence (seq2) may fold into many more possible structures than the first, and this is reflected in the scaled segment entropy values: RVent, 3 of the first sequence is 0.147, while that of the second is 0.947. A higher scaled segment entropy value indicates lower certainty of base pairings between RNA segments. A short implementation that reproduces these numbers is given below.
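The calculation above can be checked with the following self-contained sketch of Eqs. (3)–(6); the variable names are illustrative:

```python
from collections import Counter
from math import log2

def normalized_sse(seq, w):
    """Normalized sequence segment entropy RV_ent,w of an RNA sequence."""
    n = len(seq)
    total = n + 1 - w
    segs = [seq[i:i + w] for i in range(total)]       # moving-window segments
    v = -sum((c / total) * log2(c / total) for c in Counter(segs).values())
    if total <= 4 ** w:                               # first case of Eq. (6)
        v_max = log2(total)                           # = -log2(1/(n+1-w))
    else:                                             # second case of Eq. (6)
        a, b = total // 4 ** w, total % 4 ** w
        v_max = (-b * ((a + 1) / total) * log2((a + 1) / total)
                 - (4 ** w - b) * (a / total) * log2(a / total))
    return v / v_max

print(round(normalized_sse("GAAAAAAAAAAAAAAAAAAC", 3), 3))  # 0.147
print(round(normalized_sse("GACCGUCGUGAGACAGGUUA", 3), 3))  # 0.947
```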
As the segment size increases, SSE converges to 1. To determine the appropriate segment sizes, we extract 342 RNA sequences from the PDB database and calculate their normalized SSEs for segment sizes starting at 3 and incrementing by 1. For each SSE calculated, we also calculate a condition index to check linear dependency. Following Grewal [23], if the condition index is greater than 30, we conclude that high linear dependencies exist among the SSEs from the various segment sizes. This indicates that at least one SSE with a specific segment size can be derived as a linear combination of SSEs from other segment sizes; in that case, adding more SSEs would not contribute to distinguishing RNA sequences. As seen in Table 1, the maximum condition index exceeds 30 when segment size 9 is added. Therefore, we use segment sizes 3 to 8, so that six SSE features are derived for the ENTRNA classification model.
Table 1 Maximum Condition Index
ENTRNA feature: free energy
Free energy is used to quantitatively measure the stability of an RNA structure. For pseudoknot-free RNAs, both the free energy value (Vfe) of a given pair of sequence and structure and the minimum free energy value (Vmfe) that the sequence could achieve are calculated. The program RNAeval [10] of the ViennaRNA package calculates the free energy value (Vfe) of any pair of sequence and secondary structure. We use RNAfold [10] of the ViennaRNA package to calculate the minimum free energy value so that we can measure the distance, in terms of free energy, between the current structure and the MFE structure.
Unlike the easily computed free energy of pseudoknot-free RNAs, the free energy of pseudoknotted RNA is hard to compute directly due to the inaccurate and incomplete parameters. Inspired by Sato's idea to decompose pseudoknotted structures into several pseudoknot-free substructures [24], we propose to decompose pseudoknotted structures into a base substructure and knotted substructure(s) (See Fig. 1).
Fig. 1 An illustration of the decomposition of a pseudoknotted secondary structure into pseudoknot-free substructures
A pseudoknot is typically formed by base pairings between the unpaired bases in a hairpin loop and bases outside the hairpin. Hence, we treat pseudoknotted structures as the result of two-step folding: first, a pseudoknot-free base substructure is formed as the skeleton structure; second, the unpaired bases in the hairpin formed by the base substructure form new base pairs with bases outside the hairpin. Specifically, the base substructure is the pseudoknot-free structure that keeps the maximum number of base pairs [25]. It shares the same sequence as the pseudoknotted structure but keeps the bases in the knotted substructures unpaired. As a result of further improving structural stability, knotted substructures are formed by keeping the portions of the original sequence that contain additional base pairs that are not knotted. This viewpoint enables the decomposition of arbitrary pseudoknots.
Since both the base substructure and knotted substructures are pseudoknot-free, free energy can be easily calculated. The following free energy based features are extracted for each pseudoknotted RNA by RNAeval [10] and RNAfold [10]:
Base substructure free energy (Vbfe): The free energy value given to the sequence and base substructure. It is used to quantitatively measure stability of the base structure;
Base substructure minimum free energy (Vbmfe): The minimum free energy value that the sequence could achieve without forming pseudoknots;
Knotted substructure free energy (Vkfe): The free energy reduction brought on by the pseudoknots. In addition, we remove the energy increase caused by the "hairpin" since the hairpin is artificially created during the decomposition process.
ENTRNA features from base pair probabilities
MFE-based prediction algorithms are generally far from perfect; in general, less than 40% of base pairs can be predicted correctly if an RNA is longer than 500 nucleotides [35]. Base pairing uncertainty is considered one of the top reasons. To quantitatively evaluate base pairing uncertainty, it is assumed that the probability of a secondary structure s in equilibrium follows the Boltzmann distribution:
$$ \mathrm{p}\left(\mathrm{s}\right)\propto {e}^{-E(s)/ RT} $$
where E(s) is the free energy of the structure, R is the gas constant and T the thermodynamic temperature of the system. After normalization, the probability of being in secondary structure s is:
$$ \mathrm{p}\left(\mathrm{s}\right)=\frac{e^{-E(s)/ RT}}{Z} $$
where Z is the partition function, obtained by summing over all possible structures:
$$ Z={\sum}_s{e}^{-E(s)/ RT} $$
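As a toy illustration of these Boltzmann probabilities, the following sketch computes p(s) for a small hypothetical ensemble with assumed free energies; real base pairing probabilities would come from a partition function tool such as RNAfold:

```python
from math import exp

R = 1.987e-3   # gas constant in kcal/(mol K)
T = 310.15     # 37 degrees Celsius in kelvin

# Hypothetical candidate structures and their free energies (kcal/mol)
energies = {"structure_A": -12.3, "structure_B": -11.8, "structure_C": -9.5}

weights = {s: exp(-e / (R * T)) for s, e in energies.items()}  # Boltzmann factors
Z = sum(weights.values())                                      # partition function
probs = {s: w / Z for s, w in weights.items()}                 # p(s) = e^{-E(s)/RT} / Z

# structure_A dominates, but structure_B retains non-negligible probability,
# illustrating the multi-conformation effect discussed above.
print(probs)
```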
The base pairing probability pij is derived by summing the probabilities of all secondary structures in which bases i and j are paired, and qi is the probability of base i being unpaired. The following two metrics, calculated from the base pairing probabilities, have been widely used to evaluate pseudoknot-free RNA secondary structure uncertainty and serve as features in ENTRNA's pseudoknot-free model:
Ensemble Diversity (Ved): It measures the expected base-pair distance between the target secondary structure and all other secondary structures. A lower value means the structural ensemble of the sequence is less diverse, which implies the sequence would fold into the target secondary structure with higher certainty.
Expected Accuracy (Vea): It measures the expected number of bases that are in the correct base pairing status. A higher expected accuracy means more bases are expected to adopt their pairing status in the target secondary structure, which implies the sequence would fold into the target secondary structure with higher certainty.
ENTRNA features from RNA domain knowledge
In addition to SSE, free energy and base pairing features, two more features are extracted from domain knowledge:
GC Content (PerGC): The percentage of guanine or cytosine nucleotides in the sequence. This is a sequence-based feature. GC content is believed to have an impact on RNA stability [26];
Base pair percentage (Perbp): The percentage of base pairs for a given structure. This is a structure-based feature. Base pairs bring free energy reduction in most cases, which influences the structure stability.
In Tables 2, 3 and 4, we summarize all the features including our proposed SSE, free energy, sequence and structural features used for the classification model's development.
Table 2 ENTRNA: Pseudoknot-free and Pseudoknotted RNAs Common Features
Table 3 ENTRNA: Pseudoknot-free RNA Only Features
Table 4 ENTRNA: Pseudoknot RNA Only Features
Classification model
Based on the training dataset generated, ENTRNA applies logistic regression as the classifier to predict foldability, using 11 features (Tables 2 and 3) for pseudoknot-free RNAs and 11 features (Tables 2 and 4) for pseudoknotted RNAs separately. Compared to other classifiers, one advantage of logistic regression is that the output is a continuous value instead of a binary class, which can be interpreted as the probability of belonging to the positive class. In this research, the prediction result can be regarded as the foldability of the given pair of sequence and secondary structure. Specifically, we set the foldability threshold to 0.5, meaning the given pair of sequence and secondary structure is classified as a successful case if its foldability value is greater than 0.5 (see the sketch below). It is our intention to conduct a sensitivity analysis on this threshold as a future task.
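One way to realize this classification step with scikit-learn is sketched below, assuming X holds the 11 feature columns and y the labels (1 for positives, 0 for "reliable" negatives); the data here are synthetic placeholders, not the paper's datasets:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((200, 11))        # placeholder feature matrix (11 features)
y = rng.integers(0, 2, 200)      # placeholder 0/1 labels

model = LogisticRegression(max_iter=1000).fit(X, y)

foldability = model.predict_proba(X[:1])[0, 1]  # probability of the positive class
is_foldable = foldability > 0.5                 # threshold used in ENTRNA
```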
To evaluate the performance of ENTRNA, we measure the model accuracy as the mean of sensitivity and specificity:
$$ Sensitivity=\frac{TP}{TP+ FN} $$
$$ Specificity=\frac{TN}{TN+ FP} $$
where TP is the number of positive examples that are correctly predicted as positive, TN is the number of negative examples correctly predicted as negative, FP is the number of negative examples that are incorrectly predicted as positive and FN is the number of positive examples that are incorrectly predicted as negative.
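In code, the reported model accuracy is simply the balanced accuracy; the negative-class counts in the example below are illustrative, not from the paper.

```python
def balanced_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)

# tp/fn match the pseudoknotted blind test reported later (66 of 93);
# tn/fp are made-up numbers for illustration only.
print(balanced_accuracy(tp=66, fn=27, tn=70, fp=23))
```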
In order to identify the best feature combinations and parameter settings, we investigate ENTRNA performance exhaustively and record the best parameter settings and feature combinations in terms of Leave-One-Out cross validation accuracy. In addition, a blind test is conducted to evaluate the robustness and generalization of the proposed ENTRNA.
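Leave-one-out cross validation is available off the shelf; a self-contained sketch with synthetic data standing in for the real features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 11))
y = (X[:, 0] > 0).astype(int)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())
print(scores.mean())   # fraction of held-out examples predicted correctly
```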
In this research, we prepare 3 separate datasets to train, cross-validate and blind test ENTRNA. The details are as follows:
Dataset I: 2084 (1024 pseudoknot-free + 1060 pseudoknotted) RNAs from the RNASTRAND database [36]. The length ranges from 4 to 1192 nucleotides. This serves as the training dataset.
Dataset II: 299 (206 pseudoknot-free + 93 pseudoknotted) RNAs extracted by CompaRNA [27] from the PDB database. The length ranges from 20 to 1495 nucleotides. This is used as the test dataset.
Dataset III: 5 laboratory-tested pseudoknotted RNAs with synthetic sequences. All 5 RNA strands were obtained through in vitro transcription and further purified by gel electrophoresis. The RNA strands were folded in a buffer solution via a slow-cooling process. Of the 5 sequences, 4 failed to produce the designed well-formed rectangular nanostructures. The length of the RNA sequences ranges from 1618 to 1790 nucleotides. This dataset is used to test ENTRNA on long, structurally complex pseudoknotted RNAs.
During the training process, all the RNAs in Dataset I are treated as the positive dataset P. To create the unlabeled dataset U, we generate 100 sequences for each secondary structure using existing computational algorithms. Specifically, we use the secondary structures in the positive dataset as seed structures and generate sequence solutions with three different RNA inverse folding algorithms (RNAinverse [11], incaRNAtion [13] and antaRNA [14]). Multiple inverse folding algorithms are used to improve the diversity of the sequence-secondary structure pairs. Each pair of seed secondary structure and corresponding generated sequence defines an example in the unlabeled dataset.
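A hedged sketch of this generation step using one of the three tools, assuming the ViennaRNA Python bindings expose RNA.inverse_fold(start, target) returning the designed sequence and its structure distance to the target; invocation details for incaRNAtion and antaRNA would differ.

```python
import RNA

target = "(((...)))"              # a seed structure from the positive dataset
unlabeled = []
for _ in range(100):              # 100 candidate sequences per seed structure
    start = "N" * len(target)     # assumption: non-ACGU letters are randomized
    seq, dist = RNA.inverse_fold(start, target)
    unlabeled.append((seq, target))   # each pair is an unlabeled example
```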
Experiment I: pseudoknot-free RNA
The first experiment evaluates ENTRNA on pseudoknot-free RNA. We train and cross-validate the model using the 1024 pseudoknot-free RNAs from RNASTRAND to identify the best parameter settings and feature combinations. The model is then blindly tested using the 206 RNAs from the PDB database. To balance the positive and negative examples, we identify the same number of examples from the unlabeled dataset as "reliable" negative examples. After exhaustively evaluating all feature combinations, the best-performing model under leave-one-out cross validation is built with the following 5 features:
Normalized SSE with segment size 3 (RVent, 3) (see the sketch after this list)
GC percentage (Pergc)
Ensemble Diversity (Ved)
Expected Accuracy (Vea)
Pseudoknot-free RNA normalized free energy (RVfe)
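As a rough illustration of the first feature above, here is a sketch of a segment entropy computed as the Shannon entropy of the length-3 segment distribution, normalized by its maximum; this reading of "normalized SSE" is an assumption for illustration, and the paper's earlier definition is authoritative.

```python
from collections import Counter
from math import log2

def normalized_sse(seq, k=3):
    """Shannon entropy of the k-segment distribution, scaled to [0, 1]."""
    segments = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    n = len(segments)
    counts = Counter(segments)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    h_max = log2(n)               # entropy if every segment were distinct
    return h / h_max if h_max > 0 else 0.0

print(normalized_sse("GGGAAACCCUUU", k=3))
```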
Since extensive research uses minimum free energy as the single metric to guide RNA design, we provide the MFE result as a reference. Specifically, we apply RNAfold [10] to estimate the MFE structure from the sequence and assess the consistency between the real RNA secondary structure and the MFE-predicted RNA secondary structure. If the two structures are identical, the pair of RNA secondary structure and sequence is considered a positive example under the MFE criterion. Table 5 summarizes the comparison between ENTRNA and the MFE model on the training and testing datasets.
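A minimal sketch of this MFE baseline, assuming the ViennaRNA Python bindings, where RNA.fold returns the MFE structure and its free energy:

```python
import RNA

def mfe_positive(sequence, known_structure):
    """True if the known structure equals the MFE-predicted structure."""
    mfe_structure, mfe = RNA.fold(sequence)
    return mfe_structure == known_structure

print(mfe_positive("GGGAAACCC", "(((...)))"))   # a toy hairpin
```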
Table 5 Prediction result of ENTRNA on pseudoknot-free RNA
As observed, in training and testing only 76 out of 1024 and 52 out of 206 RNAs are in their MFE secondary structure, which yields MFE sensitivities of 7.4% and 25.7%, respectively. In the training procedure, ENTRNA correctly predicts 886 pairs of RNA sequence and secondary structure (leave-one-out sensitivity: 86.5%). Directly applying the trained model to the 206 RNAs (blind testing) correctly predicts 165 RNAs. We conclude that the ENTRNA model is robust in predicting the foldability of pseudoknot-free RNAs.
Experiment II: ENTRNA on Pseudoknotted RNA
Following the same procedure as Experiment I, this experiment evaluates the performance of ENTRNA on pseudoknotted RNAs. Here we train and leave-one-out cross-validate the model using the 1060 pseudoknotted RNAs from RNASTRAND and blindly test it using the 93 RNAs from the PDB database. The following 3 features are identified in the best-performing model:
Pseudoknotted RNA base substructure normalized free energy (RVkfe)
Since a reliable free energy calculation for pseudoknotted RNA is still unavailable, no MFE baseline can be provided; we therefore only report the training and test accuracy of ENTRNA, summarized in Table 6.
Table 6 Prediction result of ENTRNA on pseudoknotted RNA
From Table 6, we observe that in the leave-one-out cross-validated training procedure, ENTRNA correctly predicts 864 out of 1060 RNAs (sensitivity: 80.6%). The blind test on the PDB data gives 71.0% sensitivity; that is, 66 out of 93 pseudoknotted RNAs are correctly predicted as foldable. While the blind test is expected to perform worse than training, we intend to further explore potential features that could be gathered to improve the predictions.
Next, we blindly validate the model generated from the second experiment on the 5 long laboratory-tested RNA strands. Note that the first two experiments have shown that ENTRNA predicts positive examples with high accuracy, while its ability to predict negative examples could not be validated due to the lack of failed RNAs. Dataset III consists of four failed RNAs and one successful RNA, which enables us to test the performance of ENTRNA on both sensitivity and specificity. We use the best model trained in Experiment II to predict the foldability of the given RNAs. The model correctly predicts the foldability of the one positive example and three out of four negative examples, which yields 100% sensitivity and 75% specificity.
In this paper, we propose a new concept: foldability. It transforms the RNA design problem into a foldability prediction problem - predicting the folding success rate for a given pair of sequence and structure. RNA sequence and secondary structure form a many-to-many mapping, known as multi-conformation. Specifically, each RNA secondary structure can be folded from several RNA sequences and vice versa. In addition, RNA folding is a stochastic process: each RNA sequence folds into several different secondary structures with certain probabilities. This research proposes a data-driven approach that takes the RNA sequence and secondary structure jointly to predict foldability. The results show that the approach predicts RNA foldability with high sensitivity and specificity. This implies the promise of the new formulation and its uses in both RNA structure prediction and inverse folding problems.
While successful, there is room for improvement. First, we intend to extract more features to enrich the description of RNA for improved predictive power. Second, we plan to explore the robustness of ENTRNA. One potential issue for all data-driven approaches is that performance is highly dependent on the training dataset. In the ENTRNA framework, the real-world RNAs are used not only to train the model, but also to identify reliable negative RNA examples. A larger RNA dataset with both successful and truly failed (rather than surrogate negative) RNA examples will certainly help improve the robustness of the model.
Introducing thermodynamics (free energy) into RNA folding was a revolutionary milestone more than three decades ago. It provides the foundation for computational RNA design algorithms based on three assumptions: (1) one RNA sequence has a single unique target conformation; (2) the thermodynamic parameters are accurate enough to derive the free energy characterizing a specific structure; (3) an RNA structure at minimum free energy (MFE) is the most stable structure, where "stable" refers to the thermodynamic stability calculated in silico. However, recent research has shown that the same RNA sequence may fold into several structures, known as multi-conformation. The thermodynamic parameters used in calculating free energy are only estimates obtained with nearest-neighbor methods. And many natural RNAs discovered in cells adopt an alternative structure with higher-than-minimum free energy.
The issues with these three assumptions motivate us to reformulate the RNA structure prediction problem as an RNA foldability prediction problem. As a result, one sequence with its respective multiple potential structures, and one structure with its respective multiple sequences, can all be assessed with a unified foldability prediction model. We propose ENTRNA as a data-driven framework for RNA foldability prediction. In addition, we propose a new metric, sequence segment entropy (SSE), as an additional feature for ENTRNA, used in conjunction with free energy and other features commonly used in the RNA domain (e.g., GC percentage). Since the unique challenge in designing data-driven approaches for RNA design is the lack of failure examples, we apply PU (Positive-Unlabeled) learning to compensate for the missing failed RNA sequence-structure pairs in the training dataset.
The performance of ENTRNA is validated using both pseudoknot-free and pseudoknotted datasets. In addition, 5 laboratory-tested, long, structurally complex pseudoknotted RNAs with synthetic sequences are used to blindly test the model's performance. The experimental results show that our method is able to learn from existing RNAs and apply that learning to predict the foldability of unknown RNAs. Unlike previous computation-based methods, our method takes a machine learning perspective to understand and exploit reported RNAs.
The ENTRNA source code and other necessary resources can be obtained from https://github.com/sucongzhe/ENTRNA.
FN: False Negative
FP: False Positive
MFE: Minimum Free Energy
PU: Positive-Unlabeled
SSE: Sequence Segment Entropy
SVM: Support Vector Machine
TN: True Negative
TP: True Positive
Afonin KA, Lindsay B, Shapiro BA. Engineered RNA nanodesigns for applications in RNA nanotechnology. DNA RNA Nanotechnol. 2013;1(1).
Doherty EA, Doudna JA. Ribozyme structures and mechanisms. Annu Rev Biophys Biomol Struct. 2001;30(1):457–75.
Elbashir SM, Harborth J, Lendeckel W, Yalcin A, Weber K, Tuschl T. Duplexes of 21-nucleotide RNAs mediate RNA interference in cultured mammalian cells. Nature. 2001;411(6836):494–8.
Shajani Z, Sykes MT, Williamson JR. Assembly of bacterial ribosomes. Annu Rev Biochem. 2011;80:501–26.
Bramsen JB, Kjems J. Development of therapeutic-grade small interfering RNAs by chemical engineering. Front Genet. 2012;3:154.
Laing C, Schlick T. Computational approaches to 3D modeling of RNA. J Phys Condens Matter. 2010;22(28):283101.
Thirumalai D, Lee N, Woodson SA, Klimov DK. Early events in RNA folding. Annu Rev Phys Chem. 2001;52(1):751–62.
Reuter JS, Mathews DH. RNAstructure: software for RNA secondary structure prediction and analysis. BMC Bioinf. 2010;11(1):1.
Zuker M, Stiegler P. Optimal computer folding of large RNA sequences using thermodynamics and auxiliary information. Nucleic Acids Res. 1981;9(1):133–48.
Lorenz R, Bernhart SH, Zu Siederdissen CH, Tafer H, Flamm C, Stadler PF, Hofacker IL. ViennaRNA Package 2.0. Algorithms Mol Biol. 2011;6(1):26.
Hofacker IL, Fontana W, Stadler PF, Bonhoeffer LS, Tacker M, Schuster P. Fast folding and comparison of RNA secondary structures. Monatsh Chem/Chem Mon. 1994;125(2):167–88.
Andronescu M, Fejes AP, Hutter F, Hoos HH, Condon A. A new algorithm for RNA secondary structure design. J Mol Biol. 2004;336(3):607–24.
Reinharz V, Ponty Y, Waldispühl J. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution. Bioinformatics. 2013;29(13):i308–15.
Kleinkauf R, Mann M, Backofen R. antaRNA: ant colony-based RNA sequence design. Bioinformatics. 2015;31(19):3114–21.
Parisien M, Major F. The MC-fold and MC-Sym pipeline infers RNA structure from sequence data. Nature. 2008;452(7183):51–5.
Hofacker IL, Stadler PF. Memory efficient folding algorithms for circular RNA secondary structures. Bioinformatics. 2006;22(10):1172–6.
Woods CT, Lackey L, Williams B, Dokholyan NV, Gotz D, Laederach A. Comparative visualization of the RNA suboptimal conformational ensemble in vivo. Biophys J. 2017;113(2):290–301.
Liu B, Dai Y, Li X, Lee WS, Yu PS. Building text classifiers using positive and unlabeled examples. In: Proceedings of the Third IEEE International Conference on Data Mining (ICDM 2003). IEEE; 2003. p. 179–88.
Williams S, Lund K, Lin C, Wonka P, Lindsay S, Yan H. Tiamat: a three-dimensional editing tool for complex DNA structures. In: International workshop on DNA-based computers. Berlin: Springer; 2008. p. 90–101.
Shannon CE. A mathematical theory of communication. ACM SIGMOBILE Mob Comput Commun Rev. 2001;5(1):3–55.
Garcia-Martin JA, Clote P. RNA thermodynamic structural entropy. PLoS One. 2015;10(11):e0137859.
Huynen M, Gutell R, Konings D. Assessing the reliability of RNA folding using statistical mechanics. J Mol Biol. 1997;267(5):1104–12.
Grewal R, Cote JA, Baumgartner H. Multicollinearity and measurement error in structural equation models: implications for theory testing. Mark Sci. 2004;23(4):519–29.
Sato K, Kato Y, Hamada M, Akutsu T, Asai K. IPknot: fast and accurate prediction of RNA secondary structures with pseudoknots using integer programming. Bioinformatics. 2011;27(13):i85–93.
Smit S, Rother K, Heringa J, Knight R. From knotted to nested RNA structures: a variety of computational methods for pseudoknot removal. RNA. 2008;14(3):410–6.
Isaacs FJ, Dwyer DJ, Ding C, Pervouchine DD, Cantor CR, Collins JJ. Engineered riboregulators enable post-transcriptional control of gene expression. Nat Biotechnol. 2004;22(7):841–7.
Puton T, Kozlowski LP, Rother KM, Bujnicki JM. CompaRNA: a server for continuous benchmarking of automated methods for RNA secondary structure prediction. Nucleic Acids Res. 2013;41(7):4307–23.
Wuchty S, Fontana W, Hofacker IL, Schuster P. Complete suboptimal folding of RNA and the stability of secondary structures. Biopolymers. 1999;49(2):145–65.
Ding Y, Chan CY, Lawrence CE. Sfold web server for statistical folding and rational design of nucleic acids. Nucleic Acids Res. 2004;32(suppl_2):W135–41.
Steffen P, Voß B, Rehmsmeier M, Reeder J, Giegerich R. RNAshapes: an integrated RNA analysis package based on abstract shapes. Bioinformatics. 2005;22(4):500–3.
Rogers E, Heitsch CE. Profiling small RNA reveals multimodal substructural signals in a Boltzmann ensemble. Nucleic Acids Res. 2014;42(22):e171.
Li R, Zhu H, Ruan J, Qian W, Fang X, Shi Z, et al. De novo assembly of human genomes with massively parallel short read sequencing. Genome Res. 2010;20(2):265–72.
Phillippy AM, Schatz MC, Pop M. Genome assembly forensics: finding the elusive mis-assembly. Genome Biol. 2008;9(3):R55.
Patro R, Mount SM, Kingsford C. Sailfish enables alignment-free isoform quantification from RNA-seq reads using lightweight algorithms. Nat Biotechnol. 2014;32(5):462.
Doshi KJ, Cannone JJ, Cobaugh CW, Gutell RR. Evaluation of the suitability of free-energy minimization using nearest-neighbor energy parameters for RNA secondary structure prediction. BMC Bioinf. 2004;5(1):105.
Andronescu M, Bereg V, Hoos HH, Condon A. RNA STRAND: the RNA secondary structure and statistical analysis database. BMC Bioinf. 2008;9(1):340.
We would like to extend our gratitude to Dr. Giulia Pedrielli, Rong Pan, and Xianghua Chu for their constructive feedback.
School of Computing, Informatics, Decision Systems Engineering, Arizona State University, Tempe, AZ, 85281, USA
Congzhe Su
& Teresa Wu
Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Dayton, OH, 45433, USA
Jeffery D. Weir
Biodesign Center for Molecular Design and Biomimetics, The Biodesign Institute & School of Molecular Sciences, Arizona State University, Tempe, AZ, 85281, USA
Fei Zhang
& Hao Yan
CS contributed code and algorithms, performed validation experiments and was a major contributor in writing the manuscript. TW, JW and HY initiated and led the project. FZ contributed to data processing and lab experiments. All authors read and approved the final manuscript.
Correspondence to Teresa Wu.
Su, C., Weir, J.D., Zhang, F. et al. ENTRNA: a framework to predict RNA foldability. BMC Bioinformatics 20, 373 (2019) doi:10.1186/s12859-019-2948-5
Foldability
Knowledge-based analysis
Talk:Nelder-Mead algorithm
1 ndash
2 Vector notation
3 User 1 (Assistant Editor): simplex definition
4 Reviewer A
ndash
I have changed the ndashes in the text (as in Nelder-Mead) to hyphens. I believe the ndash is only appropriate in intervals (such as in page numbers in the articles)... -- Nwerneck 22:02, 5 April 2008 (EDT)
Vector notation
Can I change the vectors to \(\vec{x}_n\) or \(\hat{x}_n\) instead of just \(x_n\)? I believe there might be some confusion, especially because there are also scalar quantities in the text (\(f_n\)). -- Nwerneck 22:02, 5 April 2008 (EDT)
Reply: Rather not, \(x_n\) is actually a point, not a vector. A full explanation is given below in the next section. This notation is so common, including \(f_n\), that any change might cause even more confusion. If you really think a change is required, then use \(\hat{x}_n\).--Singersasa 15:00, 15 February 2009 (EST)
User 1 (Assistant Editor): simplex definition
The article defines a simplex as the convex hull of a number of vertices. Shouldn't it be the hull of a number of linearly independent vectors? (that end up being the vertices...) -- Nwerneck 21:41, 4 May 2008 (EDT)
Reply: No. This is the difference between affine and vector spaces. An affine space contains both points and vectors. A pair of points determines a vector. The usual \({\mathbb R}^n\) can be viewed in both ways. In this case, the difference of points is a vector. So, you need 3 points (in general position) to determine a plane. These 3 points define only 2 linearly independent vectors (one point is taken as an origin). Generally, you need \(n + 1\) points, which define/determine \(n\) vectors, say \(x_1 - x_0, \ldots, x_n - x_0\), if \(x_0\) is taken as an origin. --Singersasa 14:52, 15 February 2009 (EST)
Reviewer A
In general this is a clear and correct article, but I have some changes to suggest.
1. In the first sentence of the article, it should be stated explicitly that this method should not be confused with Dantzig's simplex method for linear programming, which is completely different.
2. The line "like many other direct search methods" (near the beginning of the article) is not really accurate any more, since modern GPS-like methods do not use a simplex, but rather rely on a positive spanning set of directions. Hence I would omit the first clause and simply say "The Nelder-Mead method is...". It's true that early direct search methods tended to be based on simplices, but no longer.
3. Just above the heading "The Nelder-Mead simplex algorithm" (on page 3) I would change the statement that Nelder-Mead is still the most popular direct search method in practice, unless there is some evidence for this. Matlab now includes GPS and MADS in its optimization toolbox, and they may well have become more popular. (I don't know.) I think it's unarguable that Nelder-Mead is *among* the most popular.
4. In the third line following the heading "The Nelder-Mead simplex algorithm" (page 3), I suggest replacing the word "break" by "end".
5. In the 4th line from the bottom of page 3, I think $h_k$ should be described as "a" stepsize rather than "the".
6. Under "termination tests" (page 5), it seems misleading to say that the method *must* terminate, when it can generate an infinite sequence. I suggest saying "A practical implementation of the Nelder-Mead method must include a test that ensures termination in a finite amount of time. The termination test is often composed of three..."
7. Right after "efficient implementation", it is confusing to say "slow shrink transformations", which seems to imply that there are fast shrink transformations. I would omit "slow".
Also, it should be "there is overwhelming evidence" (remove "an").
8. In the paragraph starting "a fairly simple efficiency analysis...", the third sentence also mentions "simple" and "efficient". I suggest starting with "An analysis of a single Nelder-Mead iteration by..." and leaving the third sentence as is.
9. Soon after the heading "convergence", I think the phrase "simplex-based direct search methods" should be replaced by "direct search methods" for the reasons given above in point 2.
In mentioning these convergence results, you should probably cite one of the recent papers by Audet and Dennis that have appeared in the SIAM Journal on Optimization.
10. Starting in the sixth line from the bottom of page 6, the word "deliberately" should be omitted, since you are already saying "by design".
11. Just before the first bullet on page 7, I would tone down the statement about "substantially faster", and say something like "For such problems, the method is often faster than other methods, especially those that require at least $n$ function evaluations per iteration".
Popular implementations of GPS and MADS tend to be opportunistic, which means that in general they do not require $n$ function evaluations per iteration (which is relatively uncommon these days).
12. The first bullet on page 7, the statement that the Nelder-Mead method beats most of its competitors in best-case performance similarly needs to be toned down significantly. I suggest something like "In many numerical tests, the Nelder-Mead method succeeds in obtaining a good reduction in the function value using a relatively small number of function evaluations".
13. Once again, the statement near the end about "other simplex-based direct search methods" should refer simply to "direct search methods". It would be good to include a citation to one of the Audet-Dennis papers and one of the papers by Coope and Price.
Reply: I hope that I have addressed all the points. I also added two new paragraphs, based on John Nelder's comments on the article. The first one is located after the shrink transformation and contains a quote from the original paper. The second one is subsection 3.4. Thank you for your comments.--Singersasa 14:31, 21 February 2009 (EST)
Is there any explicit symplectic Runge-Kutta method?
As far as I know, all symplectic Runge-Kutta methods are implicit, which requires solving non-linear equations during the calculation. Is there any explicit method? If not, why?
ode runge-kutta
Joseph Li
There are explicit, symplectic methods for certain types of Hamiltonian problems. For example, the symplectic Euler method
\begin{align} p_{n+1} &= p_n - h H_q(p_{n+1}, q_n) \\ q_{n+1} &= q_{n} + h H_p(p_{n+1}, q_n) \end{align}
is symplectic, see e.g. Theorem 3.3 on p. 189 in the book by Hairer, Wanner and Lubich (see full reference below). For simple functions $H$ like $H(p, q) = \frac{1}{2} \left( p^2 + q^2 \right)$, this becomes
\begin{align} p_{n+1} &= p_n - h q_n \\ q_{n+1} &= q_n + h p_{n+1} \end{align}
which is explicit. More generally, this method is explicit for separable Hamiltonians (see comments after the Theorem mentioned above).
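A minimal numerical sketch of the explicit update above for $H(p, q) = \frac{1}{2}(p^2 + q^2)$ (the step size and initial data below are arbitrary choices); the energy stays bounded rather than drifting, as one expects from a symplectic method:

```python
h, p, q = 0.1, 0.0, 1.0     # step size and initial condition
for _ in range(1000):
    p = p - h * q           # p_{n+1} = p_n - h * H_q(p_{n+1}, q_n) = p_n - h*q_n
    q = q + h * p           # q_{n+1} = q_n + h * H_p(p_{n+1}, q_n) = q_n + h*p_{n+1}
print(0.5 * (p**2 + q**2))  # remains close to the initial energy 0.5
```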
In VIII.6 on p. 325, Hairer et al state that
Symplectic methods for general Hamiltonian equations are implicit, and so are symmetric methods for general reversible systems.
Therefore, there are no symplectic methods that are explicit for general Hamiltonian functions. I couldn't find the specific Theorem that states this, though.
Hairer, Ernst; Lubich, Christian; Wanner, Gerhard, Geometric numerical integration. Structure-preserving algorithms for ordinary differential equations, Springer Series in Computational Mathematics 31. Berlin: Springer (ISBN 3-540-30663-3/hbk). xvii, 644 p. (2006). ZBL1094.65125.
Daniel
$\begingroup$ Under most definitions I think the trivial "do nothing" method $p^{n+1}=p^n$, $q^{n+1}=q^n$ would count as an explicit symplectic method. It's a fairly awful one though $\endgroup$ – origimbo May 24 '19 at 16:22
$\begingroup$ Given that this is not even consistent with the solved differential equation, I would doubt whether it can be called a "method" in any reasonable sense of the word. $\endgroup$ – Daniel May 24 '19 at 16:23
$\begingroup$ I couldn't find the specific Theorem that states this, though. I believe the theorem you're looking for is in section 2 of this paper by Sanz-Serna, see also these course notes, theorem 6 $\endgroup$ – GoHokies May 24 '19 at 18:03
$\begingroup$ @GoHokies To be clear, the proof is sketched in Sanz-Serna's paper, where he also mentions that Lasagni gave a proof but didn't publish it. $\endgroup$ – David Ketcheson May 26 '19 at 6:21
You searched for subject:(boundary conditions). Showing records 1 – 30 of 475 total matches.
Delft University of Technology
1. De Raedt, F. Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows:.
Degree: 2015, Delft University of Technology
URL: http://resolver.tudelft.nl/uuid:10797924-e37c-4328-a53a-55c5fdc464c3
The design process of an efficient turbomachine relies heavily on the accurate simulation of the internal flow. These simulations come with their own set of challenges, among which is the implementation of the boundary conditions. This thesis extends the non-reflecting boundary conditions, as first proposed by Giles, to non-ideal compressible fluids. The report also explains how these boundary conditions can be implemented in a flow solver, in this case SU2. The report explains in depth how challenges in the implementation, such as the averaging procedure, Fourier transform at the boundary, and implicit numerical integration, can be overcome. The code is tested for both sub- and supersonic flow, for both ideal and non-ideal fluids. The results demonstrate how, as predicted by the theory, the reflections present in traditional boundary conditions are removed from the flow field, resulting in a much more accurate simulation.
Advisors/Committee Members: Pini, M.
Subjects/Keywords: non-reflecting boundary conditions
De Raedt, F. (2015). Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows: . (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:10797924-e37c-4328-a53a-55c5fdc464c3
De Raedt, F. "Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows:." 2015. Masters Thesis, Delft University of Technology. Accessed January 27, 2020. http://resolver.tudelft.nl/uuid:10797924-e37c-4328-a53a-55c5fdc464c3.
De Raedt, F. "Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows:." 2015. Web. 27 Jan 2020.
De Raedt F. Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows:. [Internet] [Masters thesis]. Delft University of Technology; 2015. [cited 2020 Jan 27]. Available from: http://resolver.tudelft.nl/uuid:10797924-e37c-4328-a53a-55c5fdc464c3.
De Raedt F. Non-Reflecting Boundary Conditions for Non-ideal Compressible Fluid Flows:. [Masters Thesis]. Delft University of Technology; 2015. Available from: http://resolver.tudelft.nl/uuid:10797924-e37c-4328-a53a-55c5fdc464c3
2. Spann, Bryan T. Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions.
Degree: MS, Mechanical Engineering, 2010, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/1067/rec/104
Developments in manufacturing have led to progressively smaller and more complex micro-electro-mechanical systems (MEMS), many of which employ heat exchangers to enhance performance. Due to the size constraints on these devices, adiabatic surfaces are difficult to create, and thus ambient thermal interaction becomes an important factor in heat exchanger performance. Similarly, end-wall boundary conditions also become a concern at this scale. A unique closed-form mathematical solution is presented for single-pass, two-fluid, parallel and counter flow microscale heat exchangers. The model includes the effects of axial wall conduction, ambient thermal interaction at the axial exterior surface, and general end-wall boundary conditions. Heat exchanger effectiveness above unity is found to be possible depending on the magnitude of the ambient thermal interaction and the objective of the heat exchanger. For an objective of heating the cold fluid, end-wall boundary conditions that introduce energy into the system such as isoflux (with heat flux into the system), convection (when the ambient temperature is greater than the end-wall temperature), and isothermal (where the temperature gradients introduce energy into the system) enhance performance. The heat capacity rate ratio is found to have potential for enhancing performance and mitigating the negative effects of ambient thermal interaction. When the ambient temperature is lower than the cold fluid inlet temperature and the heat exchanger objective is to heat the cold fluid, reducing the heat capacity rate ratio results in more energy transfer from the hot fluid to the cold fluid, thus enhancing performance.
Subjects/Keywords: Analytical; End-wall boundary conditions; General boundary conditions; Heat exchanger; Microscale
Spann, B. T. (2010). Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions . (Masters Thesis). University of Utah. Retrieved from http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/1067/rec/104
Spann, Bryan T. "Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions." 2010. Masters Thesis, University of Utah. Accessed January 27, 2020. http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/1067/rec/104.
Spann, Bryan T. "Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions." 2010. Web. 27 Jan 2020.
Spann BT. Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions. [Internet] [Masters thesis]. University of Utah; 2010. [cited 2020 Jan 27]. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/1067/rec/104.
Spann BT. Analytical model of a microscale heat exchanger: ambient interaction and general end-wall boundary conditions. [Masters Thesis]. University of Utah; 2010. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd2/id/1067/rec/104
3. Couture, Chad. Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals .
Degree: 2018, University of Ottawa
Reaction-diffusion equations have been used to study various phenomena across different fields. These equations can be posed on the whole real line, or on a subinterval, depending on the situation being studied. For finite intervals, we also impose diverse boundary conditions on the system. In the present thesis, we solely focus on the bistable reaction-diffusion equation while working on a bounded interval of the form [0,L] (L>0). Furthermore, we consider both mixed and no-flux boundary conditions, where we extend the former to Dirichlet boundary conditions once our analysis of that system is complete. We first use phase-plane analysis to set up our initial investigation of both systems. This gives us an integral describing the transit time of orbits within the phase-plane. This allows us to determine the bifurcation diagram of both systems. We then transform the integral to ease numerical calculations. Finally, we determine the stability of the steady states of each system.
Subjects/Keywords: reaction-diffusion equation; no-flux; steady states; no-flux boundary conditions; mixed boundary conditions; Dirichlet boundary conditions; stability
Couture, C. (2018). Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals . (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/37110
Couture, Chad. "Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals ." 2018. Thesis, University of Ottawa. Accessed January 27, 2020. http://hdl.handle.net/10393/37110.
Couture, Chad. "Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals ." 2018. Web. 27 Jan 2020.
Couture C. Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals . [Internet] [Thesis]. University of Ottawa; 2018. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/10393/37110.
Couture C. Steady States and Stability of the Bistable Reaction-Diffusion Equation on Bounded Intervals . [Thesis]. University of Ottawa; 2018. Available from: http://hdl.handle.net/10393/37110
4. Kassner, Ethan. A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 .
Degree: 2015, Cornell University
► A "supercooled" liquid forms when a liquid is cooled below its ordering temperature while avoiding a phase transition to a global ordered ground state. Upon… (more)
▼ A "supercooled" liquid forms when a liquid is cooled below its ordering temperature while avoiding a phase transition to a global ordered ground state. Upon further cooling its microscopic relaxation times diverge rapidly, and eventually the system becomes a glass that is non-ergodic on experimental timescales. Supercooled liquids exhibit a common set of characteristic phenomena: there is a broad peak in the specific heat below the ordering temperature; the complex dielectric function has a Kohlrausch-Williams-Watts (KWW) form in the time domain and a Havriliak-Negami (HN) form in the frequency domain; and the characteristic microscopic relaxation times diverge rapidly on a Vogel-Tamman-Fulcher (VTF) trajectory as the liquid approaches the glass transition. The magnetic pyrochlore Dy2 Ti2 O7 has attracted substantial recent attention as a potential host of deconfined magnetic Coulombic quasiparticles known as "monopoles". To study the dynamics of this material we introduce a highprecision, boundary-free experiment in which we study the time-domain and frequency-domain dynamics of toroidal Dy2 Ti2 O7 samples. We show that the EMF resulting from internal field variations can be used to robustly test the predictions of different parametrizations of magnetization transport, and we find that HN relaxation without monopole transport provides a self-consistent description of our AC measurements. Furthermore, we find that KWW relaxation provides an excellent parametrization of our DC time-domain measurements. Using these complementary measurement techniques, we show that the temperature dependence of the microscopic relaxation times in Dy2 Ti2 O7 has a VTF form. It follows that Dy2 Ti2 O7 , a crystalline material with very low structural disorder, hosts a supercooled magnetic liquid at low temperatures. The formation of such a state in a system without explicit disorder has become a subject of considerable theoretical interest. Recent numerical work suggests that the unconventional glassy magnetic dynamics in Dy2 Ti2 O7 may result from interacting clusters of spins that evolve according to the general principles of Hierarchical Dynamics proposed 30 years ago. In the absence of disorder this may fall analytically into the realm of Many-Body Localization, a relatively new theory that is currently under intense development. Dy2 Ti2 O7 could therefore push forward our understanding of the glass transition and bring together theories both old and new. Advisors/Committee Members: Lawler,Michael J. (committeeMember), Parpia,Jeevak M (committeeMember).
Subjects/Keywords: Spin Ice; Supercooled Liquids; Periodic Boundary Conditions
Kassner, E. (2015). A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 . (Thesis). Cornell University. Retrieved from http://hdl.handle.net/1813/40629
Kassner, Ethan. "A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 ." 2015. Thesis, Cornell University. Accessed January 27, 2020. http://hdl.handle.net/1813/40629.
Kassner, Ethan. "A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 ." 2015. Web. 27 Jan 2020.
Kassner E. A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 . [Internet] [Thesis]. Cornell University; 2015. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/1813/40629.
Kassner E. A Supercooled Magnetic Liquid State In The Frustrated Pyrochlore Dy2Ti2O7 . [Thesis]. Cornell University; 2015. Available from: http://hdl.handle.net/1813/40629
5. Philip, Timothy. Error analysis of boundary conditions in the Wigner transport equation.
Degree: MS, Electrical and Computer Engineering, 2014, Georgia Tech
This work presents a method to quantitatively calculate the error induced through application of approximate boundary conditions in quantum charge transport simulations based on the Wigner transport equation (WTE). Except for the special case of homogeneous material, there exists no methodology for the calculation of exact boundary conditions. Consequently, boundary conditions are customarily approximated by equilibrium or near-equilibrium distributions known to be correct in the classical limit. This practice can, however, exert a deleterious impact on the accuracy of numerical calculations and can even lead to unphysical results. The Yoder group has recently developed a series expansion for exact boundary conditions which, when truncated, can be used to calculate boundary conditions of successively greater accuracy through consideration of successively higher order terms, the computational penalty for which is however not to be underestimated. This thesis focuses on the calculation and analysis of the second order term of the series expansion. A method is demonstrated to calculate the term for any general device structure in one spatial dimension. In addition, numerical analysis is undertaken to directly compare the first and second order terms. Finally a method to incorporate the first order term into simulation is formulated.
Advisors/Committee Members: Yoder, Paul D. (advisor), Naeemi, Azad J. (committee member), Klein, Benjamin D. B. (committee member).
Subjects/Keywords: Wigner transport equation; Boundary conditions; Error analysis
Philip, T. (2014). Error analysis of boundary conditions in the Wigner transport equation . (Masters Thesis). Georgia Tech. Retrieved from http://hdl.handle.net/1853/54031
Philip, Timothy. "Error analysis of boundary conditions in the Wigner transport equation." 2014. Masters Thesis, Georgia Tech. Accessed January 27, 2020. http://hdl.handle.net/1853/54031.
Philip, Timothy. "Error analysis of boundary conditions in the Wigner transport equation." 2014. Web. 27 Jan 2020.
Philip T. Error analysis of boundary conditions in the Wigner transport equation. [Internet] [Masters thesis]. Georgia Tech; 2014. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/1853/54031.
Philip T. Error analysis of boundary conditions in the Wigner transport equation. [Masters Thesis]. Georgia Tech; 2014. Available from: http://hdl.handle.net/1853/54031
6. Kennedy, James Bernard. On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions .
Degree: 2010, University of Sydney
We consider the problem of minimising the eigenvalues of the Laplacian with Robin boundary conditions \frac{\partial u}{\partial ν} + α u = 0 and generalised Wentzell boundary conditions Δu + β \frac{\partial u}{\partial ν} + γ u = 0 with respect to the domain Ω ⊂ ℝ^N on which the problem is defined. For the Robin problem, when α > 0 we extend the Faber-Krahn inequality of Daners [Math. Ann. 335 (2006), 767 – 785], which states that the ball minimises the first eigenvalue, to prove that the minimiser is unique amongst domains of class C^2. The method of proof uses a functional of the level sets to estimate the first eigenvalue from below, together with a rearrangement of the ball's eigenfunction onto the domain Ω and the usual isoperimetric inequality. We then prove that the second eigenvalue attains its minimum only on the disjoint union of two equal balls, and set the proof up so it works for the Robin p-Laplacian. For the higher eigenvalues, we show that it is in general impossible for a minimiser to exist independently of α > 0. When α < 0, we prove that every eigenvalue behaves like -α^2 as α → -∞, provided only that Ω is bounded with C^1 boundary. This generalises a result of Lou and Zhu [Pacific J. Math. 214 (2004), 323 – 334] for the first eigenvalue. For the Wentzell problem, we (re-)prove general operator properties, including for the less-studied case β < 0, where the problem is ill-posed in some sense. In particular, we give a new proof of the compactness of the resolvent and the structure of the spectrum, at least if ∂Ω is smooth. We prove Faber-Krahn-type inequalities in the general case β, γ ≠ 0, based on the Robin counterpart, and for the "best" case β, γ > 0 establish a type of equivalence property between the Wentzell and Robin minimisers for all eigenvalues. This yields a minimiser of the second Wentzell eigenvalue. We also prove a Cheeger-type inequality for the first eigenvalue in this case.
Subjects/Keywords: Elliptic partial differential equations; Laplacian; Robin boundary conditions; Wentzell boundary conditions; Isoperimetric inequality; Shape optimisation
Kennedy, J. B. (2010). On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions . (Thesis). University of Sydney. Retrieved from http://hdl.handle.net/2123/5972
Kennedy, James Bernard. "On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions ." 2010. Thesis, University of Sydney. Accessed January 27, 2020. http://hdl.handle.net/2123/5972.
Kennedy, James Bernard. "On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions ." 2010. Web. 27 Jan 2020.
Kennedy JB. On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions . [Internet] [Thesis]. University of Sydney; 2010. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/2123/5972.
Kennedy JB. On the isoperimetric problem for the Laplacian with Robin and Wentzell boundary conditions . [Thesis]. University of Sydney; 2010. Available from: http://hdl.handle.net/2123/5972
7. Koester, Angela. Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition.
Degree: MS, Mathematics and Statistics, 2016, Encompass Digital Archive, Eastern Kentucky University
URL: https://encompass.eku.edu/etd/389
We establish the existence of and then compare smallest eigenvalues for the fractional boundary value problems D_{0^+}^α u + λ_1 p(t)u = 0 and D_{0^+}^α u + λ_2 q(t)u = 0, 0 < t < 1, satisfying the boundary conditions when n-1 < α ≤ n. First, we consider the case when 0 < β < n-1, satisfying u^{(i)}(0) = 0, i = 0, 1, …, n-2, and D_{0^+}^β u(1) = 0. Then, the case when β = 0 is considered, satisfying the conditions u^{(i)}(0) = 0, for i = 0, 1, …, n-2, and u(1) = 0.
Subjects/Keywords: eigenvalues; fractional boundary conditions; fractional boundary value problem; Algebra
Koester, A. (2016). Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition . (Masters Thesis). Encompass Digital Archive, Eastern Kentucky University. Retrieved from https://encompass.eku.edu/etd/389
Koester, Angela. "Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition." 2016. Masters Thesis, Encompass Digital Archive, Eastern Kentucky University. Accessed January 27, 2020. https://encompass.eku.edu/etd/389.
Koester, Angela. "Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition." 2016. Web. 27 Jan 2020.
Koester A. Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition. [Internet] [Masters thesis]. Encompass Digital Archive, Eastern Kentucky University; 2016. [cited 2020 Jan 27]. Available from: https://encompass.eku.edu/etd/389.
Koester A. Smallest Eigenvalues For A Fractional Boundary Value Problem With A Fractional Boundary Condition. [Masters Thesis]. Encompass Digital Archive, Eastern Kentucky University; 2016. Available from: https://encompass.eku.edu/etd/389
8. Bansal, Karan. Exact Implementation of boundary conditions for Immersed Boundary Methods.
Degree: MS in Aeronautics and Astronautics, Aeronautics and Astronautics, 2015, Purdue University
URL: https://docs.lib.purdue.edu/open_access_theses/1207
Most CFD flow solvers obtain the solution on boundary-conforming grids. Generating a boundary-conforming grid is, in general, a tedious and time consuming task. To simplify the grid generation process, a technique called the Immersed Boundary Method was developed which can be applied to grids that are not boundary-conforming. However, implementing boundary conditions is not straightforward. To address this issue, several Immersed Boundary Methods have been developed over the years. All of these methods were found to satisfy the boundary conditions only on selected points on the boundary but not on the entire boundary. In this thesis, a new method is developed that satisfies the boundary conditions on the entire boundary. This method is demonstrated by applying it to solve potential flow past a circular cylinder. Results from the new method and the existing methods are compared and it is observed that the new method gives more accurate solutions on identical grids.
Advisors/Committee Members: Tom I-P. Shih, Gregory A. Blaisdell, Alina Alexeenko.
Subjects/Keywords: Boundary Conditions; Computational Fluid Dynamics; Finite Volume; Immersed Boundary Methods; Transfinite Interpolation
Bansal, K. (2015). Exact Implementation of boundary conditions for Immersed Boundary Methods . (Thesis). Purdue University. Retrieved from https://docs.lib.purdue.edu/open_access_theses/1207
Bansal, Karan. "Exact Implementation of boundary conditions for Immersed Boundary Methods." 2015. Thesis, Purdue University. Accessed January 27, 2020. https://docs.lib.purdue.edu/open_access_theses/1207.
Bansal, Karan. "Exact Implementation of boundary conditions for Immersed Boundary Methods." 2015. Web. 27 Jan 2020.
Bansal K. Exact Implementation of boundary conditions for Immersed Boundary Methods. [Internet] [Thesis]. Purdue University; 2015. [cited 2020 Jan 27]. Available from: https://docs.lib.purdue.edu/open_access_theses/1207.
Bansal K. Exact Implementation of boundary conditions for Immersed Boundary Methods. [Thesis]. Purdue University; 2015. Available from: https://docs.lib.purdue.edu/open_access_theses/1207
9. Marco Alacid, Onofre. Structural Shape Optimization Based On The Use Of Cartesian Grids .
Degree: 2018, Universitat Politècnica de València
As ever more challenging designs are required in present-day industries, the traditional trial-and-error procedure frequently used for designing mechanical parts slows down the design process and yields suboptimal designs, so that new approaches are needed to obtain a competitive advantage. With the ascent of the Finite Element Method (FEM) in the engineering community in the 1970s, structural shape optimization arose as a promising area of application. However, due to the iterative nature of shape optimization processes, the handling of large quantities of numerical models along with the approximated character of numerical methods may even dissuade the use of these techniques (or fail to exploit their full potential) because the development time of new products is becoming ever shorter. This Thesis is concerned with the formulation of a 3D methodology based on the Cartesian-grid Finite Element Method (cgFEM) as a tool for efficient and robust numerical analysis. This methodology belongs to the category of embedded (or fictitious) domain discretization techniques in which the key concept is to extend the structural analysis problem to an easy-to-mesh approximation domain that encloses the physical domain boundary. The use of Cartesian grids provides a natural platform for structural shape optimization because the numerical domain is separated from a physical model, which can easily be changed during the optimization procedure without altering the background discretization. Another advantage is the fact that mesh generation becomes a trivial task since the discretization of the numerical domain and its manipulation, in combination with an efficient hierarchical data structure, can be exploited to save computational effort. However, these advantages are challenged by several numerical issues. Basically, the computational effort has moved from the use of expensive meshing algorithms towards the use of, for example, elaborate numerical integration schemes designed to capture the mismatch between the geometrical domain boundary and the embedding finite element mesh. To do this we used a stabilized formulation to impose boundary conditions and developed novel techniques to be able to capture the exact boundary representation of the models. To complete the implementation of a structural shape optimization method an adjoint formulation is used for the differentiation of the design sensitivities required for gradient-based algorithms. The derivatives are not only the variables required for the process, but also compose a powerful tool for projecting information between different designs, or even projecting the information to create h-adapted meshes without going through a full h-adaptive refinement process. The proposed improvements are reflected in the numerical examples included in this Thesis. These analyses clearly show the improved behavior of the cgFEM technology as regards numerical accuracy and computational efficiency, and consequently the suitability of the cgFEM approach for shape optimization or contact…
Advisors/Committee Members: Ródenas García, Juan José (advisor), Tur Valiente, Manuel (advisor).
Subjects/Keywords: Immersed Boundary Methods; Cartesian grids; NEFEM; Dirichlet boundary conditions; h-refinement; Sensitivity analysis; Shape optimization
Marco Alacid, O. (2018). Structural Shape Optimization Based On The Use Of Cartesian Grids . (Doctoral Dissertation). Universitat Politècnica de València. Retrieved from http://hdl.handle.net/10251/86195
Marco Alacid, Onofre. "Structural Shape Optimization Based On The Use Of Cartesian Grids ." 2018. Doctoral Dissertation, Universitat Politècnica de València. Accessed January 27, 2020. http://hdl.handle.net/10251/86195.
Marco Alacid, Onofre. "Structural Shape Optimization Based On The Use Of Cartesian Grids ." 2018. Web. 27 Jan 2020.
Marco Alacid O. Structural Shape Optimization Based On The Use Of Cartesian Grids . [Internet] [Doctoral dissertation]. Universitat Politècnica de València; 2018. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/10251/86195.
Marco Alacid O. Structural Shape Optimization Based On The Use Of Cartesian Grids . [Doctoral Dissertation]. Universitat Politècnica de València; 2018. Available from: http://hdl.handle.net/10251/86195
Addis Ababa University
10. Jemal, Seid. TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD .
Degree: 2012, Addis Ababa University
URL: http://etd.aau.edu.et/dspace/handle/123456789/4634
This thesis work focuses on non-reactive solute transport modeling of the Akaki wellfield for two selected groundwater contaminants (chloride & fluoride) for the 25 operating boreholes administered by Addis Ababa Water & Sewerage Authority (AAWSA). The work is conducted based on laboratory analysis of groundwater samples from selected boreholes and on historical data of the wellfield boreholes. The widespread use of chemical products, coupled with the disposal of large volumes of waste materials, poses the potential for widely distributed groundwater contamination. Because such contamination can pose a serious threat to public health, prediction of the degree of contamination by appropriate numerical modeling tools is vital to make end users aware of possible risks. Mathematical models solved numerically are the subject of this thesis work, focusing on conservative solute transport in the Akaki wellfield. Chloride & fluoride ion predictive modeling of the wellfield for the next ten years (2007-2017) is made first by calibrating the model input parameters using the available historical solute concentration data for selected boreholes at various periods. For calibration purposes, the initial solute concentration was taken as 3 mg/l for chloride and 0.51 mg/l for fluoride, and MATLAB simulation of chloride & fluoride ion concentration is done. The simulation results show that while chloride concentrations in the wellfield increase, fluoride is decreasing throughout all of the boreholes in the wellfield. This is in agreement with the actual observed pattern of solute load of the wellfield, revealing that chloride is being introduced into the wellfield by one or more mechanisms somewhere in the vicinity of the Akaki river catchment (ARC) while fluoride is not.
Advisors/Committee Members: Ato Teshome Worku (advisor).
Subjects/Keywords: porosity; hydraulic conductivity; conservative contaminant transport; prediction; calibration; boundary conditions; initial conditions; breakthrough curves; aquifers
Jemal, S. (2012). TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD. (Thesis). Addis Ababa University. Retrieved from http://etd.aau.edu.et/dspace/handle/123456789/4634
Jemal, Seid. "TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD." 2012. Thesis, Addis Ababa University. Accessed January 27, 2020. http://etd.aau.edu.et/dspace/handle/123456789/4634.
Jemal, Seid. "TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD." 2012. Web. 27 Jan 2020.
Jemal S. TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD. [Internet] [Thesis]. Addis Ababa University; 2012. [cited 2020 Jan 27]. Available from: http://etd.aau.edu.et/dspace/handle/123456789/4634.
Jemal S. TWO DIMENSIONAL CONSERVATIVE CONTAMINANT TRANSPORT MODELING OF THE AKAKI WELLFIELD. [Thesis]. Addis Ababa University; 2012. Available from: http://etd.aau.edu.et/dspace/handle/123456789/4634
11. Eldredge, Weston M. Development, verification, and validation of the responsive boundary model for pool fire simulations.
Degree: PhD, Chemical Engineering, 2011, University of Utah
URL: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/167/rec/722
The need to understand and predict the behavior of fires and explosions is important considering the amount of property damage and the loss of life that can result. In the case of transportation pool fires (fires resulting from liquid fuel spills), predictive science is an especially valuable tool considering that experiments with large pool fires are costly and often lead to damaged or destroyed instrumentation. In the development of fire codes, as in other areas of computational science, the need for fidelity in computational results has become a prominent issue. The various sources of error in computation, such as discretization error, machine round-off error, iterative convergence error, programmer error, and model error, must be accounted for and, if possible, quantified if computational results are to be considered legitimate. The current study seeks to remedy an important source of error in pool fire simulations, stemming from the application of a simplistic fuel inlet boundary condition. Traditionally this type of boundary condition assumes that the liquid pool vaporizes fuel to feed the flame at a constant rate, and that the vaporization rate is uniform over the pool surface. In reality there is a complex feedback mechanism between the pool surface and the flame: the pool vaporization rate changes with time as the pool is heated, and the thermal flux to the pool is nonuniform over the surface. The Responsive Boundary model uses energy and mass conservation principles to model the thermal behavior of the fuel pool and to predict the vaporization rate given thermal input from the flame. Verification tests such as the Method of Manufactured Solutions and grid convergence, and validation methods such as model input sensitivity analysis and consistency analysis, are applied to the Responsive Boundary model on its own and linked with the gas-phase fire code (ARCHES). The tests verify that the code solves the continuum model underlying the boundary model with acceptable error. A region of consistency is also found between the steady vaporization fluxes predicted by the model and experimental data for a small heptane pool fire. Consistency analysis is also applied to data obtained from ARCHES simulations of a small helium plume and data taken from holographic interferometric images.
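The core idea of a responsive fuel-inlet boundary, as summarized above, is that the vaporization flux responds to the thermal feedback from the flame rather than being prescribed. A minimal sketch of that feedback loop, with illustrative heptane-like property values and no claim to match the ARCHES/Responsive Boundary implementation:

```python
# Hedged sketch: the pool heats sensibly until it reaches its boiling point,
# after which the incident flame flux is converted to a vaporization mass
# flux. All property values are assumed, not taken from the thesis.
rho, cp, h_fg = 680.0, 2200.0, 3.2e5   # density, heat capacity, latent heat [SI]
T_boil, depth = 371.0, 0.01            # boiling point [K], heated-layer depth [m]
T = 300.0                              # initial pool temperature [K]
dt = 0.1                               # time step [s]

def vapor_flux(q_flame):
    """Return fuel mass flux [kg/m^2/s] given incident flux q_flame [W/m^2]."""
    global T
    if T < T_boil:                     # sensible heating phase: no vapor yet
        T = min(T_boil, T + dt * q_flame / (rho * cp * depth))
        return 0.0
    return q_flame / h_fg              # boiling phase: flux -> mass flux

m_dot = 0.0
for step in range(1000):               # in a real code q_flame comes from the flame solver
    m_dot = vapor_flux(q_flame=50e3)
print(f"steady vaporization flux: {m_dot:.4f} kg/m^2/s")
```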
Subjects/Keywords: Boundary conditions; Combustion; Data consistency; Pool fires; Validation; Verification
Eldredge, W. M. (2011). Development, verification, and validation of the responsive boundary model for pool fire simulations. (Doctoral Dissertation). University of Utah. Retrieved from http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/167/rec/722
Eldredge, Weston M. "Development, verification, and validation of the responsive boundary model for pool fire simulations." 2011. Doctoral Dissertation, University of Utah. Accessed January 27, 2020. http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/167/rec/722.
Eldredge, Weston M. "Development, verification, and validation of the responsive boundary model for pool fire simulations." 2011. Web. 27 Jan 2020.
Eldredge WM. Development, verification, and validation of the responsive boundary model for pool fire simulations. [Internet] [Doctoral dissertation]. University of Utah; 2011. [cited 2020 Jan 27]. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/167/rec/722.
Eldredge WM. Development, verification, and validation of the responsive boundary model for pool fire simulations. [Doctoral Dissertation]. University of Utah; 2011. Available from: http://content.lib.utah.edu/cdm/singleitem/collection/etd3/id/167/rec/722
12. Castillo, Davis. Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges.
URL: http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9501
New measuring methods continue to be developed in the field of wind anemometry for various environments subject to low-speed and high-speed flows, flows with turbulence present, and ideal and non-ideal flows. As a result, anemometry has taken different avenues for these environments, from the traditional cup model to sonar, hot-wire, and recent developments with sphere anemometers. Several measurement methods have modeled the air drag force as a quadratic function of the corresponding wind speed. Furthermore, by incorporating non-drag fluid forces in addition to the main drag force, a dynamic set of equations of motion for the deflection and strain of a spherical anemometer's beam can be derived. By using the equations of motion to develop a direct relationship to a measurable parameter, such as strain, an approximation of wind speed based on a measurement becomes available. These ODEs for the strain model can then be used to relate the fluid (wind) speed directly to the strain along the beam's length. The spherical anemometer introduced by the German researcher Holling presents the opportunity to incorporate the theoretical cantilevered Euler-Bernoulli beam with a spherical tip mass to develop a deflection-wind relationship driven by the cross-sectional area of the spherical mass and the constriction of the shaft, or the beam's bending properties. The application of Hamilton's principle and separation of variables to the Lagrangian mechanics of an Euler-Bernoulli beam yields the equations of motion for the deflection of the beam as a second-order partial differential equation (PDE). The boundary conditions of the beam's motion are influenced by the applied fluid forces: a relative drag force and the added mass and buoyancy of the sphere. Strain gauges provide measurements in a practical but non-intrusive manner, and thus the concept of a measuring strain gauge is simulated. Young's modulus creates a relationship between deflection and strain of an Euler-Bernoulli system, and thus a strain-wind relation can be modeled as an ODE. This theoretical sphere anemometer's second-order ODE allows for analysis of the linear and non-linear accuracies of the motion of this dynamic system at conventional high-speed conditions. Advisors/Committee Members: Hurtado, John E. (advisor), White, Edward B. (committee member), Bhattacharyya, Shankar P. (committee member).
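The steady-state backbone of the strain-wind relation described above can be sketched directly: quadratic drag on the sphere loads the cantilever tip, Euler-Bernoulli bending maps that load to surface strain at the gauge, and the relation is inverted for sensing. All dimensions and property values below are hypothetical placeholders; the thesis derives the full dynamic ODE, not just this static inversion:

```python
import math

# Static sketch: drag -> bending moment -> surface strain, and its inverse.
rho_air, Cd = 1.2, 0.47               # air density, sphere drag coefficient
d_sphere, L, r = 0.05, 0.3, 0.005     # sphere dia., beam length, beam radius [m]
E = 70e9                              # Young's modulus [Pa] (aluminium-like)
I = math.pi * r**4 / 4                # second moment of a circular section
A = math.pi * (d_sphere / 2) ** 2     # sphere cross-sectional area
x_gauge = 0.02                        # gauge position from the clamp [m]

def strain_from_wind(U):
    F = 0.5 * rho_air * Cd * A * U**2          # quadratic drag on the sphere
    M = F * (L - x_gauge)                      # bending moment at the gauge
    return M * r / (E * I)                     # outer-fibre strain

def wind_from_strain(eps):                     # inverse relation used for sensing
    F = eps * E * I / (r * (L - x_gauge))
    return math.sqrt(2 * F / (rho_air * Cd * A))

print(wind_from_strain(strain_from_wind(40.0)))  # recovers 40.0 m/s
```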
Subjects/Keywords: Wind Anemometry; Euler-Bernoulli beam; strain gauge; boundary conditions
Castillo, D. (2011). Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges. (Thesis). Texas A&M University. Retrieved from http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9501
Castillo, Davis. "Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges." 2011. Thesis, Texas A&M University. Accessed January 27, 2020. http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9501.
Castillo, Davis. "Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges." 2011. Web. 27 Jan 2020.
Castillo D. Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges. [Internet] [Thesis]. Texas A&M University; 2011. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9501.
Castillo D. Euler-Bernoulli Implementation of Spherical Anemometers for High Wind Speed Calculations via Strain Gauges. [Thesis]. Texas A&M University; 2011. Available from: http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9501
13. Rusakov, Alexander. Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals.
Degree: PhD, Natural Sciences, 2013, Rice University
We report theoretical, algorithmic, and computational aspects of exploiting space-group symmetry in self-consistent field (SCF) calculations, primarily Kohn-Sham density functional theory (DFT), with periodic boundary conditions (PBC) and Gaussian-type orbitals. Incorporating exact exchange generally improves performance for a broad class of systems, but leads to a significant increase in computation time, especially for 3D solids, due to the large number of explicitly evaluated two-electron integrals. We reduce the list of these integrals by exploiting the space-group symmetry of the crystal. In contrast to previous work, which was based on symmorphic groups only, we extend the technique to non-symmorphic groups, thus enabling application to any of the 230 3D space groups. Algorithms facilitating efficient reduction of the list of two-electron integrals and restoration of the full Fock-type matrix have been proposed and implemented in the development version of the Gaussian program. These schemes are applied not only to the exact exchange (HFx), but also to the explicit evaluation of the near-field Coulomb contribution. In 3D solids with the smallest unit cells, speedup factors range from 2X to 9X for the near-field Coulomb part and from 3X to 8X for the exact exchange, leading to a substantial reduction of the overall computational cost. Speedup factors noticeably lower than the number of symmetry operations are due to highly symmetric atomic positions in crystals, as well as to the choice of primitive cells. In systems with atoms in general positions or in special positions of low multiplicity, the speedup factors readily exceed one order of magnitude, reaching almost 70X (near-field Coulomb) and 57X (HFx) for the largest system tested, a (16,7) single-walled nanotube with 278 symmetry operations. Advisors/Committee Members: Scuseria, Gustavo E. (advisor), Kolomeisky, Anatoly B. (committee member), Yakobson, Boris I. (committee member).
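The counting argument above, one explicitly evaluated integral per symmetry orbit, can be illustrated with a toy example. The basis size and index permutations below are invented, and real codes also exploit the permutational symmetry of the integrals themselves; this merely mirrors the orbit-reduction bookkeeping, not the Gaussian implementation:

```python
from itertools import product

# Toy symmetry reduction of a two-electron integral list: given permutations
# of basis-function indices induced by the (hypothetical) symmetry operations,
# evaluate only one representative integral (mu nu | lambda sigma) per orbit.
n = 4
ops = [
    tuple(range(n)),   # identity
    (1, 0, 3, 2),      # an invented symmetry permutation of the basis functions
]

def canonical(quartet):
    # orbit representative = lexicographically smallest image under all ops
    return min(tuple(op[i] for i in quartet) for op in ops)

orbits = {}
for q in product(range(n), repeat=4):
    orbits.setdefault(canonical(q), []).append(q)

print(f"{n**4} integrals -> {len(orbits)} to evaluate "
      f"(speedup ~{n**4 / len(orbits):.1f}x)")
```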
Subjects/Keywords: Space-group symmetry; Density functional theory; Periodic boundary conditions
Rusakov, A. (2013). Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals. (Doctoral Dissertation). Rice University. Retrieved from http://hdl.handle.net/1911/77509
Rusakov, Alexander. "Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals." 2013. Doctoral Dissertation, Rice University. Accessed January 27, 2020. http://hdl.handle.net/1911/77509.
Rusakov, Alexander. "Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals." 2013. Web. 27 Jan 2020.
Rusakov A. Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals. [Internet] [Doctoral dissertation]. Rice University; 2013. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/1911/77509.
Rusakov A. Space group symmetry applied to SCF calculations with periodic boundary conditions and Gaussian orbitals. [Doctoral Dissertation]. Rice University; 2013. Available from: http://hdl.handle.net/1911/77509
14. Thirunavukkarasu, Senganal. Absorbing Boundary Conditions for Molecular Dynamics.
Degree: MS, Civil Engineering, 2009, North Carolina State University
URL: http://www.lib.ncsu.edu/resolver/1840.16/1880
With the goal of minimizing the domain size for molecular dynamics (MD) simulations, we develop a new class of absorbing boundary conditions (MD-ABCs) that can mimic the phonon absorption of an unbounded exterior. The proposed MD-ABCs are extensions of perfectly matched discrete layers (PMDL), originally developed as an absorbing boundary condition for continuous wave propagation problems. Called MD-PMDL, this extension carefully targets the absorption of phonons, the high-frequency waves whose propagation properties are completely different from those of continuous waves. This thesis presents the derivation of MD-PMDL for general lattice systems, followed by explicit application to one-dimensional and two-dimensional square lattice systems. The accuracy of MD-PMDL for phonon absorption is proven by analyzing reflection coefficients and demonstrated through numerical experiments. Unlike existing ABCs that are effective for high-frequency phonon absorption, MD-PMDL is local in both space and time and is thus more efficient. Based on these favorable properties, it is concluded that MD-PMDL could provide a more effective alternative to existing MD-ABCs. Advisors/Committee Members: Dr. M. N. Guddati, Committee Chair (advisor), Dr. P. A. Gremaud, Committee Member (advisor), Dr. M. S. Rahman, Committee Member (advisor).
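To see what an MD absorbing boundary must accomplish, consider a truncated 1D harmonic chain: a pulse reaching the artificial end must leave without reflecting energy back into the domain. The ramped viscous layer in this sketch is only a naive stand-in for illustration; MD-PMDL absorbs across the whole phonon dispersion far more accurately while remaining local in space and time:

```python
import numpy as np

# Crude 1D harmonic chain with a damped terminal layer standing in for an
# absorbing boundary. Chain parameters and layer profile are illustrative.
N, k, m, dt = 400, 1.0, 1.0, 0.05
u = np.exp(-0.02 * (np.arange(N) - 50.0) ** 2)   # initial displacement pulse
v = np.zeros(N)
damp = np.zeros(N)
damp[-60:] = np.linspace(0.0, 2.0, 60)           # ramped viscous end layer

def energy(u, v):
    return 0.5 * m * (v**2).sum() + 0.5 * k * (np.diff(u) ** 2).sum()

e0 = energy(u, v)
for _ in range(12000):                           # long enough to reach the end
    f = np.zeros(N)
    f[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2]) # nearest-neighbour forces
    v += dt * (f / m - damp * v)                 # semi-implicit step + damping
    u += dt * v

print("fraction of energy not absorbed:", energy(u, v) / e0)
```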
Subjects/Keywords: molecular dynamics; wave propagation in discrete systems; absorbing boundary conditions
Thirunavukkarasu, S. (2009). Absorbing Boundary Conditions for Molecular Dynamics. (Thesis). North Carolina State University. Retrieved from http://www.lib.ncsu.edu/resolver/1840.16/1880
Thirunavukkarasu, Senganal. "Absorbing Boundary Conditions for Molecular Dynamics." 2009. Thesis, North Carolina State University. Accessed January 27, 2020. http://www.lib.ncsu.edu/resolver/1840.16/1880.
Thirunavukkarasu, Senganal. "Absorbing Boundary Conditions for Molecular Dynamics." 2009. Web. 27 Jan 2020.
Thirunavukkarasu S. Absorbing Boundary Conditions for Molecular Dynamics. [Internet] [Thesis]. North Carolina State University; 2009. [cited 2020 Jan 27]. Available from: http://www.lib.ncsu.edu/resolver/1840.16/1880.
Thirunavukkarasu S. Absorbing Boundary Conditions for Molecular Dynamics. [Thesis]. North Carolina State University; 2009. Available from: http://www.lib.ncsu.edu/resolver/1840.16/1880
15. Medida, Shivaji. Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows.
Degree: MS, Mechanical Engineering, 2007, University of Toledo
URL: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1185309100
In the present work, the Giles non-reflecting boundary conditions have been extended to curvilinear coordinates for wall-bounded flows. In addition to the non-reflecting boundary conditions, wall boundary conditions have been derived and implemented for inflow/outflow-wall corners (grid points common to the inflow/outflow and wall boundaries) so that the flow solution at the corner points satisfies both the inflow/outflow boundary conditions and the wall boundary conditions. Two-dimensional, wall-bounded, Ringleb flow configurations (with non-orthogonal grids and curved boundary geometries) were used as test cases to validate the non-reflecting boundary conditions and the wall corner conditions. The wall corner conditions for the Cartesian Giles boundary conditions were sufficient to eliminate the flow through the wall at inflow and outflow corners. However, the Cartesian Giles boundary conditions together with the wall corner conditions could not solve the corner problem in the Ringleb flow test cases and eventually caused the flow solution to diverge. Two-dimensional curvilinear Giles boundary conditions derived from the curvilinear form of the linearized Euler equations contain additional terms that are not present in the chain-rule form of the Cartesian Giles boundary conditions. These additional terms contain the factor (x_ξ x_η + y_ξ y_η), which becomes zero for an orthogonal grid. The curvilinear Giles boundary conditions, when implemented without the additional terms, displayed the same corner problem encountered with the chain-rule form of the Cartesian Giles boundary conditions, proving the significance of the additional orthogonality terms. In order to completely eliminate flow through the wall and obtain the correct solution at the corners, both the curvilinear boundary conditions with the additional terms and the wall corner conditions were required. The curvilinear Giles boundary conditions along with the wall corner conditions were successfully tested on various Ringleb flow geometries and grid densities. However, a long-term instability was noticed for all the Ringleb flow configurations beyond a certain number of grid points. This was eliminated by stretching the grid towards the inflow and outflow boundaries. Converged solutions were obtained for all the test cases with acceptable L2 errors. Advisors/Committee Members: Hixon, Duane (Advisor).
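The Cartesian principle underlying Giles-type conditions can be shown in one dimension: decompose the solution into Riemann invariants and hold the incoming invariant at each boundary to its quiescent value, so outgoing waves exit cleanly. A sketch (upwind advection of the two invariants of linear acoustics; the curvilinear extension in the thesis adds the metric terms discussed above):

```python
import numpy as np

# 1D characteristic non-reflecting boundaries: each Riemann invariant is a
# wave travelling one way; the *incoming* invariant is pinned at zero.
nx, dx, c = 400, 1.0, 1.0
dt = 0.5 * dx / c
x = np.arange(nx) * dx
w_plus = np.exp(-0.005 * (x - 200.0) ** 2)   # right-going invariant (pulse)
w_minus = np.zeros(nx)                       # left-going invariant

for _ in range(1000):
    w_plus[1:] -= c * dt / dx * (w_plus[1:] - w_plus[:-1])     # upwind, speed +c
    w_minus[:-1] -= c * dt / dx * (w_minus[:-1] - w_minus[1:]) # upwind, speed -c
    w_plus[0] = 0.0    # left boundary: incoming (+) invariant held quiescent
    w_minus[-1] = 0.0  # right boundary: incoming (-) invariant held quiescent

p = 0.5 * (w_plus + w_minus)   # reconstructed pressure; ~0 after the pulse exits
print("max |p| after exit:", float(np.abs(p).max()))
```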
Subjects/Keywords: non-reflecting; boundary conditions; wall-bounded; corner condition; Giles; characteristic
Medida, S. (2007). Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows. (Masters Thesis). University of Toledo. Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=toledo1185309100
Medida, Shivaji. "Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows." 2007. Masters Thesis, University of Toledo. Accessed January 27, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1185309100.
Medida, Shivaji. "Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows." 2007. Web. 27 Jan 2020.
Medida S. Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows. [Internet] [Masters thesis]. University of Toledo; 2007. [cited 2020 Jan 27]. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1185309100.
Medida S. Curvilinear Extension to the Giles Non-reflecting Boundary Conditions for Wall-bounded Flows. [Masters Thesis]. University of Toledo; 2007. Available from: http://rave.ohiolink.edu/etdc/view?acc_num=toledo1185309100
16. Misiats, Oleksandr. Asymptotic analysis of Ginzburg-landau superconductivity model.
Degree: PhD, Mathematics, 2012, Penn State University
URL: https://etda.libraries.psu.edu/catalog/16044
The thesis is devoted to variational problems and PDEs related to the Ginzburg-Landau superconductivity theory and consists of two parts. In the first part we establish the existence of minimizers of the magnetic Ginzburg-Landau energy functional with prescribed degrees and unit modulus on the boundary (so-called semi-stiff boundary conditions) in both simply connected and doubly connected domains. In simply connected domains we show that the vortices of these minimizers are located strictly inside the domain for certain values of the Ginzburg-Landau parameter, while in doubly connected domains we obtain near-boundary vortices in the same parameter regime. We also establish necessary conditions for the existence of local minimizers of the simplified Ginzburg-Landau functional in doubly connected domains with semi-stiff boundary conditions. The second part of the thesis studies vortex pinning (i.e., fixing the positions of vortices), which is achieved by introducing inclusions into a homogeneous superconductor. We use homogenization techniques to model a composite superconductor obtained by introducing a large number of superconducting inclusions in superconducting media. Next, we focus on modeling a superconductor with finitely many small superconducting inclusions in the vortex state. We show that even inclusions of negligibly small size (e.g., shrinking to single points) capture the vortices of minimizers, and therefore the problem of finding the locations of the vortices of minimizers may be reduced to a discrete minimization problem for a finite-dimensional functional.
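For reference, the simplified Ginzburg-Landau functional mentioned in the abstract has, in standard notation and up to normalization, the form

```latex
% Simplified Ginzburg-Landau energy with semi-stiff boundary data: unit
% modulus and prescribed winding degrees d_j on the boundary components
% \partial\Omega_j (notation standard, not quoted from the thesis).
E_\varepsilon(u) \;=\; \frac{1}{2}\int_\Omega |\nabla u|^2 \,dx
  \;+\; \frac{1}{4\varepsilon^2}\int_\Omega \bigl(1-|u|^2\bigr)^2 \,dx,
\qquad |u|=1 \ \text{on } \partial\Omega,\quad
\deg\bigl(u,\partial\Omega_j\bigr)=d_j .
```

Here the vortices studied above are the isolated zeros of the minimizer u, around which the phase winds.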
Subjects/Keywords: Ginzburg-Landau theory; asymptotics; vortices; degree boundary conditions; vortex pinning
Misiats, O. (2012). Asymptotic analysis of Ginzburg-landau superconductivity model. (Doctoral Dissertation). Penn State University. Retrieved from https://etda.libraries.psu.edu/catalog/16044
Misiats, Oleksandr. "Asymptotic analysis of Ginzburg-landau superconductivity model." 2012. Doctoral Dissertation, Penn State University. Accessed January 27, 2020. https://etda.libraries.psu.edu/catalog/16044.
Misiats, Oleksandr. "Asymptotic analysis of Ginzburg-landau superconductivity model." 2012. Web. 27 Jan 2020.
Misiats O. Asymptotic analysis of Ginzburg-landau superconductivity model. [Internet] [Doctoral dissertation]. Penn State University; 2012. [cited 2020 Jan 27]. Available from: https://etda.libraries.psu.edu/catalog/16044.
Misiats O. Asymptotic analysis of Ginzburg-landau superconductivity model. [Doctoral Dissertation]. Penn State University; 2012. Available from: https://etda.libraries.psu.edu/catalog/16044
17. Ferrand, Martin. Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation.
Degree: 2011, University of Manchester
URL: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:118859
Smoothed Particle Hydrodynamics (SPH) is a meshless Lagrangian numerical method ideal for simulating potentially violent free-surface phenomena, such as wave breaking or a dam break, where many Eulerian methods can be difficult to apply. Dealing with wall boundary conditions is one of the most challenging parts of the SPH method, and many different approaches have been developed, among them (i) repulsive forces, such as the Lennard-Jones force, which are efficient at making boundaries impermeable but lead to non-physical behaviour; (ii) fictitious (or ghost) particles, which provide better physical behaviour in the vicinity of a wall but are hard to define for complex geometries; and (iii) semi-analytical approaches such as that of Kulasegaram et al. (2004), which consists of renormalising the density field near a solid wall with respect to the missing kernel support area. The present work extends this semi-analytical methodology, employing intrinsic gradient and divergence operators that ensure conservation properties. The accuracy of physical fields such as the pressure next to walls is considerably improved, and the consistent manner in which operators are wall-corrected allows us to perform simulations with turbulence models. This work presents three key advances: • The time integration scheme used for the continuity equation requires particular attention, and as already mentioned by Vila (1999), we prove there is no point in using a time dependence of the particles' density if no kernel gradient corrections are added. Thus, by using a near-boundary kernel-corrected version of the time integration scheme of the form proposed by Vila, we are able to perform long-time simulations ideally suited for turbulent flow in a channel in the context of accurate boundary conditions. • In order to compute the kernel correction, Feldman and Bonet (2007) use an analytical value which is computationally expensive, whereas Kulasegaram et al. (2004) and De Leffe et al. (2009) use polynomial approximations which can be difficult to define for complex geometries. We propose here to compute the renormalisation term of the kernel support near a solid with a novel time integration scheme, allowing any shape for the boundary. • All boundary terms issued from the continuous approximation are given by surface summations which only require information from a mesh file of the boundary. The technique developed here allows us to correct the pressure gradient and viscous terms and hence provide a physically correct wall shear stress, so that even the diffusion equation of a scalar quantity, such as the turbulent kinetic energy or its dissipation in a k − ε turbulence model, can be solved accurately using SPH. The new model is demonstrated for cases including hydrostatic conditions for a channel flow, still water in a tank of complex geometry, and a dam break over a triangular bed profile with a sharp angle, where significantly improved behaviour is obtained in comparison with conventional boundary techniques. Simulation of the benchmark test case of a… Advisors/Committee Members: Laurence, Dominique.
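The renormalisation idea being extended can be caricatured in 1D: a particle near a wall is missing part of its kernel support, so its summed density is divided by the fraction γ of the support lying inside the fluid. The top-hat kernel below keeps γ analytic for the sketch; the thesis computes γ with proper SPH kernels, via time integration and surface summations over the boundary mesh:

```python
import numpy as np

# Toy 1D near-wall density renormalisation. Kernel, spacing and masses are
# illustrative only, not the SPARTACUS-2D formulation.
h, dx = 0.1, 0.05
m = dx                                    # unit-density particle mass in 1D

def W(r):                                 # top-hat kernel, integral = 1
    return 1.0 / (2.0 * h) if abs(r) < h else 0.0

xs = np.arange(dx / 2, 1.0, dx)           # fluid occupies (0, 1); wall at x = 0
for xa in xs[:4]:                         # particles nearest the wall
    rho_raw = sum(m * W(xa - xb) for xb in xs)
    gamma = min(1.0, 0.5 * (1.0 + xa / h))  # support fraction inside the fluid
    print(f"x = {xa:.3f}: raw rho = {rho_raw:.2f}, corrected = {rho_raw / gamma:.2f}")
```

The printout shows the spurious near-wall density deficit of the raw summation being largely removed by the γ division.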
Subjects/Keywords: SPH; Wall boundary conditions; Turbulence modelling in SPH
Ferrand, M. (2011). Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation. (Doctoral Dissertation). University of Manchester. Retrieved from http://www.manchester.ac.uk/escholar/uk-ac-man-scw:118859
Ferrand, Martin. "Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation." 2011. Doctoral Dissertation, University of Manchester. Accessed January 27, 2020. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:118859.
Ferrand, Martin. "Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation." 2011. Web. 27 Jan 2020.
Ferrand M. Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation. [Internet] [Doctoral dissertation]. University of Manchester; 2011. [cited 2020 Jan 27]. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:118859.
Ferrand M. Unified semi-analytical wall boundary conditions for inviscid, laminar and turbulent slightly compressible flows in SPARTACUS-2D combined with an improved time integration scheme on the continuity equation. [Doctoral Dissertation]. University of Manchester; 2011. Available from: http://www.manchester.ac.uk/escholar/uk-ac-man-scw:118859
18. Cordes, Cristoffer. Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions.
Degree: MS, Mathematical Science, 2011, Clemson University
URL: https://tigerprints.clemson.edu/all_theses/1167
Electrical Impedance Tomography is an imaging technique with high potential in medical imaging. As of today the resolution is very low and measurement errors have a huge influence on the result. In order to improve the results, the currents that are applied to perform the measurements have to be chosen carefully, and the best method to do so has not yet been found. For analytical and numerical convenience the spaces of the currents and voltages are often assumed to be L2. However, recent studies have shown that by introducing spaces that are more closely tied to the weak formulation of the problem, the algorithm for finding optimal currents gives significantly different results. In addition, the transition from posing the problem as a Neumann-to-Dirichlet experiment to posing it in a Robin-to-Dirichlet sense is often neglected due to the very similar nature of the resulting calculations. This thesis investigates the impact of changing the boundary value problems in Electrical Impedance Tomography from Neumann-to-Dirichlet to Robin-to-Dirichlet. Several combinations of spaces for the Dirichlet and Robin data are examined analytically and then compared to the corresponding Neumann/Dirichlet spaces in a numerical simulation. Advisors/Committee Members: Khan, Tauquar, Medlock, Jan, Yoon, Jeong-Rock.
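One classical way to "choose the currents carefully" is to maximize the distinguishability ‖(Λ₁ − Λ₂)g‖/‖g‖ between the current-to-voltage maps Λ₁, Λ₂ of two conductivities, which a power iteration finds because the difference map is self-adjoint. In this sketch, random symmetric matrices stand in for discretized Robin-to-Dirichlet maps, and the choice of norm (which space the currents live in) is precisely what the thesis varies:

```python
import numpy as np

# Power iteration for an optimal current pattern (sketch with stand-in maps).
rng = np.random.default_rng(0)
n = 16
M = rng.standard_normal((n, n))
L_diff = (M + M.T) / 2                     # symmetric stand-in for L1 - L2

g = rng.standard_normal(n)
g -= g.mean()                              # injected currents must sum to zero
for _ in range(200):
    g = L_diff @ g
    g -= g.mean()                          # re-project onto mean-zero currents
    g /= np.linalg.norm(g)                 # here: plain L2 normalisation

print("distinguishability of best pattern:", float(np.linalg.norm(L_diff @ g)))
```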
Subjects/Keywords: Electrical Impedance Tomography; Optimal Currents; Robin Boundary Conditions; Applied Mathematics
Cordes, C. (2011). Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions. (Masters Thesis). Clemson University. Retrieved from https://tigerprints.clemson.edu/all_theses/1167
Cordes, Cristoffer. "Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions." 2011. Masters Thesis, Clemson University. Accessed January 27, 2020. https://tigerprints.clemson.edu/all_theses/1167.
Cordes, Cristoffer. "Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions." 2011. Web. 27 Jan 2020.
Cordes C. Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions. [Internet] [Masters thesis]. Clemson University; 2011. [cited 2020 Jan 27]. Available from: https://tigerprints.clemson.edu/all_theses/1167.
Cordes C. Optimal Currents in Electrical Impedance Tomography with Robin Boundary Conditions. [Masters Thesis]. Clemson University; 2011. Available from: https://tigerprints.clemson.edu/all_theses/1167
University of Illinois – Chicago
19. Heidary, Zahra. Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE.
Degree: 2015, University of Illinois – Chicago
The Acoustic Emission (AE) method is a nondestructive testing method that relies on the waves emitted by localized permanent deformation. The method can detect, locate and characterize damage in large-scale structures. Unfortunately, the AE method is not being used to its full advantage because of difficulty detecting the induced micro-damage signals, which can be obscured by mechanical noise, and because of the complexity of the measurement, which leads to challenges of repeatability and of extracting quantitative features (e.g., damage mode, size, direction). The goal of this research is to understand the quantitative significance of AE signatures through effective finite element models combining the components of source, structure and sensor. The domain is modeled using spectral elements, which reduces the required degrees of freedom significantly; the boundaries are modeled with Perfectly Matched Layers (PML) to absorb the waves, as in large-scale structures. The numerical models are validated with experimental measurements. The numerical model, together with a novel mathematical formulation of an axisymmetric pipe under non-axisymmetric loading (e.g., a leak), allowed understanding wave propagation in long-range pipelines in order to determine reliable sensor spacing. The transfer function of the AE sensor (typically piezoelectric) is coupled with the solid model to obtain the electrical displacement of the sensor under a given excitation and to understand the influence of the sensor response on the output signal. Advisors/Committee Members: Ozevin, Didem (advisor), Foster, Craig D. (committee member), Indacochea, J. Ernesto (committee member), Shabana, Ahmed (committee member), McNallan, Michael J. (committee member).
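The last modeling step mentioned, coupling the sensor's transfer function to the solid model, amounts to filtering the simulated surface motion by the sensor response in the frequency domain. A sketch with a generic resonant transfer function (the resonance and damping values are invented, not a calibrated sensor):

```python
import numpy as np

# Apply a resonant sensor transfer function to a synthetic AE burst.
fs, n = 1e6, 4096                      # sample rate [Hz], samples
t = np.arange(n) / fs
motion = np.exp(-1e4 * t) * np.sin(2 * np.pi * 150e3 * t)  # synthetic burst

f = np.fft.rfftfreq(n, 1 / fs)
f0, zeta = 150e3, 0.05                 # assumed sensor resonance and damping
H = 1.0 / (1 - (f / f0) ** 2 + 2j * zeta * f / f0)   # resonant response

voltage = np.fft.irfft(np.fft.rfft(motion) * H, n)   # sensor output signal
print("peak output amplitude:", float(np.abs(voltage).max()))
```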
Subjects/Keywords: Acoustic Emission; Wave Propagation; Boundary Conditions; Sensor Transfer function
Heidary, Z. (2015). Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE. (Thesis). University of Illinois – Chicago. Retrieved from http://hdl.handle.net/10027/19353
Heidary, Zahra. "Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE." 2015. Thesis, University of Illinois – Chicago. Accessed January 27, 2020. http://hdl.handle.net/10027/19353.
Heidary, Zahra. "Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE." 2015. Web. 27 Jan 2020.
Heidary Z. Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE. [Internet] [Thesis]. University of Illinois – Chicago; 2015. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/10027/19353.
Heidary Z. Numerical Modeling of Wave Propagation at Large Scale Damaged Structures For Quantitative AE. [Thesis]. University of Illinois – Chicago; 2015. Available from: http://hdl.handle.net/10027/19353
20. Lohmeier, Joseph. A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method.
Degree: 2011, Boise State University
URL: https://scholarworks.boisestate.edu/td/246
The main attraction of using radial basis functions (RBFs) for generating finite-difference-type approximations (RBF-FD) is that they naturally work for unstructured or scattered nodes. Therefore, a geometrically complex domain can be efficiently discretized using scattered nodes, and continuous differential operators such as the Laplacian can be effectively approximated locally using RBF-FD formulas on these nodes. The RBF-FD method is becoming more and more popular as an alternative to finite elements, since it avoids the sometimes complex and expensive step of mesh generation and can achieve much higher orders of accuracy. One of the issues with the RBF-FD method is how to properly handle non-Dirichlet boundary conditions. In this thesis, we describe an effective method for handling Neumann conditions in the case of Poisson's equation. The method uses fictitious points and generalized Hermite-Birkhoff interpolation to enforce the boundary conditions and to improve the accuracy of the RBF-FD method near boundaries. We present several numerical experiments using the method and investigate its convergence and accuracy.
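One ingredient of the method can be sketched compactly: RBF-FD weights for the Laplacian at a node come from a small linear solve over its scattered neighbours. A polyharmonic spline φ(r) = r³ with constant augmentation is assumed here for illustration; in the thesis, a fictitious node placed outside a Neumann boundary receives a weight the same way and is then eliminated through the boundary condition:

```python
import numpy as np

# RBF-FD Laplacian weights on a scattered 2D stencil (illustrative layout).
rng = np.random.default_rng(1)
center = np.zeros(2)
nbrs = np.vstack([center, 0.1 * rng.standard_normal((12, 2))])  # local stencil
n = len(nbrs)

r = np.linalg.norm(nbrs[:, None, :] - nbrs[None, :, :], axis=2)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = r**3                       # phi(r) = r^3
A[:n, n] = 1.0                         # constant-polynomial augmentation
A[n, :n] = 1.0
rhs = np.zeros(n + 1)
rhs[:n] = 9.0 * np.linalg.norm(nbrs - center, axis=1)  # Laplacian of r^3 in 2D

w = np.linalg.solve(A, rhs)[:n]        # differentiation weights at `center`

u = nbrs[:, 0] ** 2 + nbrs[:, 1] ** 2  # test function with Laplacian = 4
print("approximate Laplacian:", float(w @ u))  # should be close to 4
```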
Subjects/Keywords: radial basis function; Poisson's equation; Neumann boundary conditions
Lohmeier, J. (2011). A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method. (Thesis). Boise State University. Retrieved from https://scholarworks.boisestate.edu/td/246
Lohmeier, Joseph. "A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method." 2011. Thesis, Boise State University. Accessed January 27, 2020. https://scholarworks.boisestate.edu/td/246.
Lohmeier, Joseph. "A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method." 2011. Web. 27 Jan 2020.
Lohmeier J. A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method. [Internet] [Thesis]. Boise State University; 2011. [cited 2020 Jan 27]. Available from: https://scholarworks.boisestate.edu/td/246.
Lohmeier J. A Fictitious Point Method for Handling Boundary Conditions in the RBF-FD Method. [Thesis]. Boise State University; 2011. Available from: https://scholarworks.boisestate.edu/td/246
21. Mendes, Joana Alexandra Assunção. Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil).
Degree: 2018, Universidade de Aveiro
Forecasting estuarine circulation is in high demand, especially in regions of high population density like the Santos region. The present work aims to optimize the performance of the water level forecast for the Santos Estuary, particularly the physical forcing determining the residual tide. In extreme cases, predicted and observed water levels can differ significantly, greatly increasing prediction errors, which highlights the need to understand the factors that affect the residual tide in order to propose corrective measures. Furthermore, extreme events have potentially hazardous consequences, not only for navigation but also for the population living near the channels. This dissertation analyzes water level and significant wave height datasets covering the period 2016 to 2017. The datasets comprise 5 tide gauge stations in the Santos channel and were obtained from the Pilotos da Barra and Praticagem. The MOHID 2D hydrodynamic model (www.mohid.com) was used, implemented with a nested downscaling approach and linked to the AQUASAFE software, which provides high-resolution forecasts to support navigation in real time. The model was validated for the period 2016-2017, with a minimum RMSE of 12.5 cm and excellent skill at practically all stations. The most recent astronomical global solution (FES2014) was implemented as the oceanic boundary condition. Additionally, the meteorological boundary condition (CMEMS) was changed from daily to hourly data. The possible influence of wave height on the forecast of water level (particularly the residual tide) was also investigated. Owing to the strong correlation found among these parameters, a linear regression method was developed to correct the residual tide in a post-processing stage under specific wave height conditions. The application of distinct boundary conditions (meteorological and astronomical) in the forecasting models decreased errors compared to observations, evidencing improved forecast capacity. On the other hand, the use of FES2014 shows improvements at the bay entrance, but results worsen at the inner stations. This portrays the need for reliable bathymetric data, as errors in the astronomical tidal components increase towards the end of the estuary. Residual tide errors remain practically constant along the estuary, but they increase under extreme conditions. The results show that the model modifications improve the accuracy in reproducing the water level evolution compared to the previous version, particularly under extreme events. Advisors/Committee Members: Dias, João Miguel Sequeira Silva (advisor), Leitão, Paulo Chambel (advisor).
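The post-processing correction described above can be sketched in a few lines: regress the residual-tide error on significant wave height, then subtract the fitted error from the forecast when waves exceed a threshold. The arrays and the threshold below are synthetic placeholders for the 2016-2017 records:

```python
import numpy as np

# Linear-regression correction of the residual tide under high-wave conditions.
rng = np.random.default_rng(42)
hs = rng.uniform(0.5, 4.0, 500)                      # significant wave height [m]
residual_err = 0.06 * hs + rng.normal(0, 0.02, 500)  # forecast - observed [m]

slope, intercept = np.polyfit(hs, residual_err, 1)   # least-squares fit

def corrected_level(forecast, hs_now, threshold=2.0):
    # apply the correction only under the "specific wave height conditions"
    if hs_now > threshold:
        return forecast - (slope * hs_now + intercept)
    return forecast

print(corrected_level(1.80, hs_now=3.2))
```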
Subjects/Keywords: Santos estuary; Hydrodynamic; Numerical modelling; Operational oceanography; Boundary conditions
Mendes, J. A. A. (2018). Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil). (Thesis). Universidade de Aveiro. Retrieved from http://hdl.handle.net/10773/27065
Mendes, Joana Alexandra Assunção. "Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil)." 2018. Thesis, Universidade de Aveiro. Accessed January 27, 2020. http://hdl.handle.net/10773/27065.
Mendes, Joana Alexandra Assunção. "Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil)." 2018. Web. 27 Jan 2020.
Mendes JAA. Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil). [Internet] [Thesis]. Universidade de Aveiro; 2018. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/10773/27065.
Mendes JAA. Optimization of an operational forecast system for extreme tidal events in Santos estuary (Brazil). [Thesis]. Universidade de Aveiro; 2018. Available from: http://hdl.handle.net/10773/27065
22. De Haas, E. Boundary Conditions for a Seiche model:.
URL: http://resolver.tudelft.nl/uuid:2b98e9d4-2690-4376-905e-5cbd432e4a29
The Storm Surge Barrier in the New Waterway was built to protect the city and port of Rotterdam and the area of the lower Rhine against flooding during extreme conditions. This construction, consisting of two gigantic arc-shaped barriers, is to be pivoted into the New Waterway and then lowered in case of an impending emergency. The arc shape makes it a very efficient design against forces from the seaside, but if the level on the riverside surpasses the level on the seaside, a negative force is exerted on the construction. Seiches contribute to this effect. The barrier has only a relatively small capacity to withstand a negative head difference. To accurately predict the maximum expected head difference, a numerical model that handles seiches correctly is needed. In this thesis, the boundary conditions for such a numerical model are investigated. The program currently used to calculate the effects of seiches, RAS/FLOW, predicts a head difference that exceeds the design specifications of the construction. However, the calculations done with RAS/FLOW are not accurate with respect to the amplification of the seiches: the amplitude is overestimated significantly due to the use of an inaccurate boundary condition at the sea boundary of the model. The boundary condition at the channel entrance is very complex. Mendez Lorenzo (1997) studied a new boundary condition, the epsilon boundary. This boundary is a combination of a water level and a Riemann invariant weighted by a factor epsilon. In the analytical case the results of this boundary condition match the analytical solution exactly. The step from the analytical boundary condition to a numerical boundary condition involves a set of derivations and simplifications that fix the value of epsilon. With a fixed value of epsilon, the amplification function obtained will only match one of the peaks in the spectrum: the peak for which the value of epsilon is set. This thesis also explores the addition of non-linear terms to the epsilon model; the non-linear terms did not resolve the problem of the fixed epsilon. To reduce the complexity of the boundary condition, a different approach to the problem is taken, namely a combined one- and two-dimensional approach. In this model a two-dimensional sea area is attached to the one-dimensional channel, thus moving the complex boundary condition at the channel entrance to a simpler boundary condition on the open sea boundary. With this model it is possible to correctly model the amplification for more than one peak. The results obtained with this model are satisfactory, and the model is recommended for future implementation. Advisors/Committee Members: Battjes, J.A., Méndez Lorenzo, A.B., Kemkamp, H.W.J., Van Dongeren, A., Stelling, G.S..
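One plausible form of the epsilon boundary described above, written here in assumed notation (ζ the water level, u the velocity, R the incoming shallow-water Riemann invariant; the thesis's exact formulation may differ):

```latex
% A weighted blend of a prescribed water level and an incoming Riemann
% invariant, with blending factor epsilon (notation assumed, not quoted).
(1-\varepsilon)\,\zeta \;+\; \varepsilon\left(u + 2\sqrt{g h}\right)
  \;=\; (1-\varepsilon)\,\zeta_0 \;+\; \varepsilon\, R_{\mathrm{in}} .
```

The limits recover the familiar cases: ε = 0 gives a pure water-level boundary, ε = 1 a pure weakly reflective (Riemann) boundary.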
Subjects/Keywords: seiche; numerical model; boundary conditions
De Haas, E. (1998). Boundary Conditions for a Seiche model: . (Masters Thesis). Delft University of Technology. Retrieved from http://resolver.tudelft.nl/uuid:2b98e9d4-2690-4376-905e-5cbd432e4a29
De Haas, E. "Boundary Conditions for a Seiche model:." 1998. Masters Thesis, Delft University of Technology. Accessed January 27, 2020. http://resolver.tudelft.nl/uuid:2b98e9d4-2690-4376-905e-5cbd432e4a29.
De Haas, E. "Boundary Conditions for a Seiche model:." 1998. Web. 27 Jan 2020.
De Haas E. Boundary Conditions for a Seiche model:. [Internet] [Masters thesis]. Delft University of Technology; 1998. [cited 2020 Jan 27]. Available from: http://resolver.tudelft.nl/uuid:2b98e9d4-2690-4376-905e-5cbd432e4a29.
De Haas E. Boundary Conditions for a Seiche model:. [Masters Thesis]. Delft University of Technology; 1998. Available from: http://resolver.tudelft.nl/uuid:2b98e9d4-2690-4376-905e-5cbd432e4a29
23. Cote, Dominic. Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads.
Cross-laminated timber (CLT) is an emerging engineered wood product in North America. Past research efforts to establish the behaviour of CLT under extreme loading conditions have focused on CLT slabs with idealized simply-supported boundary conditions. Connections between the wall and the floor systems above and below are critical to fully describing the overall behaviour of CLT structures subjected to blast loads. The current study investigates the effects of "realistic" boundary conditions on the behaviour of cross-laminated timber walls subjected to simulated out-of-plane blast loads. The methodology consists of experimental and analytical components. The experimental component was conducted in the Blast Research Laboratory at the University of Ottawa, where shock waves were applied to the specimens. Configurations with seismic detailing were considered in order to evaluate whether existing structures with adequate capacity to resist high seismic loads would also be capable of resisting a blast load with reasonable damage. In addition, typical connections used in construction to resist gravity and lateral loads, as well as connections designed specifically to resist a given blast load, were investigated. The results indicate that the detailing of the connections significantly affects the behaviour of the CLT slab. Typical detailing for platform construction, where long screws connect the floor slab to the wall in end grain, performed poorly and experienced brittle failure through splitting in the perpendicular-to-grain direction of the CLT. Bearing-type connections generally behaved well, and yielding in the fasteners and/or angle brackets meant that a significant portion of the energy was dissipated there, reducing the energy imparted to the CLT slab; hence less displacement, and thereby less damage, was observed in the slab. The study also concluded that using simplified tools such as single-degree-of-freedom (SDOF) models together with currently available material models for CLT is not sufficient to adequately describe the behaviour and estimate the damage. More testing and the development of higher-fidelity models are required in order to develop robust tools for the design of CLT elements subjected to blast loading.
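For context, the SDOF idealization the study finds insufficient for CLT looks like the following: an equivalent mass-spring driven by a triangular blast pulse, stepped with central differences. All parameter values are placeholders:

```python
# Minimal elastic SDOF blast-response sketch (placeholder parameters).
m, k = 500.0, 2.0e6          # equivalent mass [kg] and stiffness [N/m]
P0, td = 40e3, 0.01          # peak reflected force [N], positive phase [s]
dt, nsteps = 1e-5, 10000

def load(t):                 # idealized triangular pulse
    return P0 * (1.0 - t / td) if t < td else 0.0

u_prev, u, u_max = 0.0, 0.0, 0.0
for i in range(nsteps):
    t = i * dt
    acc = (load(t) - k * u) / m
    u_next = 2 * u - u_prev + acc * dt**2    # central-difference step
    u_prev, u = u, u_next
    u_max = max(u_max, u)

print(f"peak displacement: {u_max * 1000:.1f} mm")
```

A realistic model would replace the linear spring with a resistance function capturing connection yielding and brittle splitting, which is exactly what the abstract says current CLT material models cannot yet supply.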
Subjects/Keywords: CLT; Cross-Laminated Timber; Blast; Connections; Boundary Conditions
Cote, D. (2017). Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads. (Thesis). University of Ottawa. Retrieved from http://hdl.handle.net/10393/36993
Cote, Dominic. "Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads." 2017. Thesis, University of Ottawa. Accessed January 27, 2020. http://hdl.handle.net/10393/36993.
Cote, Dominic. "Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads." 2017. Web. 27 Jan 2020.
Cote D. Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads. [Internet] [Thesis]. University of Ottawa; 2017. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/10393/36993.
Cote D. Effect of Realistic Boundary Conditions on the Behaviour of Cross-Laminated Timber Elements Subjected to Simulated Blast Loads. [Thesis]. University of Ottawa; 2017. Available from: http://hdl.handle.net/10393/36993
24. [No author]. Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations.
Degree: 2018, Baylor University
This dissertation details the development of several analytic tools that are used to apply the techniques and concepts of perturbation theory to other areas of analysis. The main application is an efficient characterization of the boundary conditions associated with the general left-definite theory for differential operators. This theory originated with the groundbreaking work of Littlejohn and Wellman in 2002, which fully determined the 'left-definite domains' and spectral properties of powers of self-adjoint Sturm-Liouville operators associated with classical orthogonal polynomials. We study how the left-definite domains associated with these operators can be explicitly described by classical boundary conditions. Additional applications are made to infinite-rank perturbations by successively introducing rank-one perturbations to a self-adjoint operator with absolutely continuous spectrum. The absolutely continuous part of the spectral measure of the constructed operator is controlled and estimated. Advisors/Committee Members: Liaw, Constanze (advisor).
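For reference, the building block iterated in the dissertation is the standard rank-one perturbation of a self-adjoint operator A by a (cyclic) vector φ with real coupling α:

```latex
% Standard rank-one perturbation (notation conventional, not quoted from
% the dissertation): the iterated construction applies this repeatedly.
A_\alpha \;=\; A + \alpha\,\langle\,\cdot\,,\varphi\rangle\,\varphi,
\qquad \alpha \in \mathbb{R}.
```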
Subjects/Keywords: Spectral theory; Operator theory; Perturbation theory; Boundary conditions; Left-definite theory
[No author]. (2018). Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations. (Thesis). Baylor University. Retrieved from http://hdl.handle.net/2104/10445
[No author]. "Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations." 2018. Thesis, Baylor University. Accessed January 27, 2020. http://hdl.handle.net/2104/10445.
[No author]. "Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations." 2018. Web. 27 Jan 2020.
[No author]. Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations. [Internet] [Thesis]. Baylor University; 2018. [cited 2020 Jan 27]. Available from: http://hdl.handle.net/2104/10445.
[No author]. Boundary conditions associated with left-definite theory and the spectral analysis of iterated rank-one perturbations. [Thesis]. Baylor University; 2018. Available from: http://hdl.handle.net/2104/10445
25. Moalafhi, Ditiro. A framework for dynamical downscaling of global reanalyses for hydrological applications.
Degree: Civil & Environmental Engineering, 2016, University of New South Wales
URL: http://handle.unsw.edu.au/1959.4/56988 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:42235/SOURCE02?view=true
Global reanalyses provide the most consistent atmospheric circulation datasets for many dynamical processes that are not easily observed. Reanalysis products are, however, of limited value in hydrological applications due to the coarse spatial scales at which they are available. Dynamical downscaling of these products offers a means to convert them to the finer spatial scales needed to resolve dominant atmospheric processes, leading to improved accuracy in the derived surface climate. In downscaling the climate of the recent past with a regional climate model (RCM), lateral boundary conditions (LBCs) are created from an available reanalysis in order to simulate the observed climate. This allows the investigation of regional climate processes and quantification of the errors associated with the regional model. Thus the global reanalysis being downscaled should provide the most accurate LBCs in order to produce the best downscaled simulations. Despite this, the choice of reanalysis to downscale has mostly been left to convenience, a researcher's familiarity, or the performance of the reanalysis within the regional domain for variables such as near-surface air temperature and precipitation. These variables may be only partially relevant for downscaling, as they do not directly reflect the information contained in the LBC fields, which are the principal input to an RCM. A framework for evaluating reanalysis-derived LBCs, to choose the most accurate at the intended boundaries and time, is demonstrated for an example domain over southern Africa. The robustness of the approach is demonstrated over a different domain in the same region. The two reanalyses with the best LBCs are subsequently downscaled over 31 years (1980-2010). It is revealed that although the choice of LBCs has a bearing on the simulations of an RCM, internal model physics also plays a role in determining the simulated near-surface temperature and precipitation. The evaluations are extended to the basin scale, where the resulting near-surface temperature and precipitation are also found to be useful, especially for studying sustained hydrological extremes over this data-poor region. It is also revealed that simple bias correction of raw reanalysis temperature and relative humidity fields before downscaling gives mixed results in terms of improving the simulation. Advisors/Committee Members: Sharma, Ashish, UNSW, Evans, Jason Peter, Faculty of Science, UNSW.
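The framework's key move is to score each candidate reanalysis on the fields it would actually supply as LBCs, extracted on the lateral boundary ring of the RCM domain, rather than on interior surface variables. A sketch with synthetic stand-ins for regridded reanalysis and reference fields (variable names and domain size are invented):

```python
import numpy as np

# Score candidate LBC fields by RMSE on the lateral boundary ring only.
def boundary_ring(field, width=1):
    """Extract the lateral-boundary cells of a 2D (lat, lon) field."""
    mask = np.zeros(field.shape, dtype=bool)
    mask[:width, :] = mask[-width:, :] = True
    mask[:, :width] = mask[:, -width:] = True
    return field[mask]

def lbc_rmse(candidate, reference):
    diffs = [boundary_ring(candidate[v]) - boundary_ring(reference[v])
             for v in candidate]
    return {v: float(np.sqrt(np.mean(d**2)))
            for v, d in zip(candidate, diffs)}

rng = np.random.default_rng(7)
ref = {"T": rng.normal(290, 5, (60, 80)), "q": rng.normal(8, 2, (60, 80))}
cand = {v: ref[v] + rng.normal(0, 0.5, ref[v].shape) for v in ref}
print(lbc_rmse(cand, ref))   # lower ring-RMSE -> better candidate LBCs
```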
Subjects/Keywords: Lateral boundary conditions; Reanalysis; Dynamical downscaling; Regional climate model; southern Africa
Moalafhi, D. (2016). A framework for dynamical downscaling of global reanalyses for hydrological applications. (Doctoral Dissertation). University of New South Wales. Retrieved from http://handle.unsw.edu.au/1959.4/56988 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:42235/SOURCE02?view=true
Moalafhi, Ditiro. "A framework for dynamical downscaling of global reanalyses for hydrological applications." 2016. Doctoral Dissertation, University of New South Wales. Accessed January 27, 2020. http://handle.unsw.edu.au/1959.4/56988 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:42235/SOURCE02?view=true.
Moalafhi, Ditiro. "A framework for dynamical downscaling of global reanalyses for hydrological applications." 2016. Web. 27 Jan 2020.
Moalafhi D. A framework for dynamical downscaling of global reanalyses for hydrological applications. [Internet] [Doctoral dissertation]. University of New South Wales; 2016. [cited 2020 Jan 27]. Available from: http://handle.unsw.edu.au/1959.4/56988 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:42235/SOURCE02?view=true.
Moalafhi D. A framework for dynamical downscaling of global reanalyses for hydrological applications. [Doctoral Dissertation]. University of New South Wales; 2016. Available from: http://handle.unsw.edu.au/1959.4/56988 ; https://unsworks.unsw.edu.au/fapi/datastream/unsworks:42235/SOURCE02?view=true
26. Parker, Mark. An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube.
Degree: 2019, University of Western Ontario
URL: https://ir.lib.uwo.ca/etd/6411
A vortex tube converts a single stream of compressed gas into two outlet streams, one heated and the other cooled. This phenomenon, referred to as energy separation, is not fully understood and therefore requires further experimentation and analysis. CFD can provide insight; however, such simulations require a complete set of information on the geometric, operational, and performance parameters of particular vortex tube experiments. An experiment is developed to provide complete information for CFD simulations. The vortex tube under consideration is established by considering reported data from past studies and selecting parameters that yield significant energy separation. An experimental apparatus is developed that can precisely measure the flow properties at the inlet and outlets. In addition to providing operational and performance data for future CFD studies, the experiments conducted showed that the magnitude of the energy separation is (at least partially) dependent on the pressure ratio between the inlet and the cold outlet.
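The performance quantities such an experiment must pin down for CFD boundary conditions can be computed directly from the measured flows; the numbers below are illustrative placeholders, not the thesis's data:

```python
# Derived vortex-tube performance metrics from (hypothetical) measurements.
T_in, T_cold, T_hot = 293.0, 273.0, 311.0   # measured temperatures [K]
m_in, m_cold = 10.0, 4.0                    # mass flow rates [g/s]
p_in, p_cold = 400.0, 110.0                 # absolute pressures [kPa]

y_c = m_cold / m_in                         # cold mass fraction
dT_cold = T_in - T_cold                     # cold-end temperature drop
dT_hot = T_hot - T_in                       # hot-end temperature rise
pressure_ratio = p_in / p_cold              # the driver highlighted above

# energy-balance sanity check: y_c*dT_cold should roughly match (1-y_c)*dT_hot
print(y_c * dT_cold, (1 - y_c) * dT_hot, pressure_ratio)
```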
Subjects/Keywords: Ranque-Hilsch vortex tube; Experimental; CFD boundary conditions; Other Mechanical Engineering
Parker, M. (2019). An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube . (Thesis). University of Western Ontario. Retrieved from https://ir.lib.uwo.ca/etd/6411
Parker, Mark. "An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube." 2019. Thesis, University of Western Ontario. Accessed January 27, 2020. https://ir.lib.uwo.ca/etd/6411.
Parker, Mark. "An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube." 2019. Web. 27 Jan 2020.
Parker M. An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube. [Internet] [Thesis]. University of Western Ontario; 2019. [cited 2020 Jan 27]. Available from: https://ir.lib.uwo.ca/etd/6411.
Parker M. An Experimental Study on the Impact of Pressure Drop in the Ranque-Hilsch Vortex Tube. [Thesis]. University of Western Ontario; 2019. Available from: https://ir.lib.uwo.ca/etd/6411
27. Alashoor, Tawfiq. Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors.
Degree: PhD, Computer Information Systems, 2019, Georgia State University
URL: https://scholarworks.gsu.edu/cis_diss/70
The privacy paradox phenomenon suggests that individuals tend to make privacy decisions (i.e., disclosure of personal information) that contradict their dispositional privacy concerns. Despite emerging research attempting to explain this phenomenon, it remains unclear why the privacy paradox exists. In order to explain why it exists and to predict occurrences of privacy-paradoxical decisions, this dissertation emphasizes the need to identify boundary conditions of the relationship between privacy concerns and disclosure behaviors. Across three empirical research studies varying in their contexts, this dissertation presents a total of seven boundary conditions (i.e., cognitive absorption, cognitive resource depletion, positive mood state, privacy control, convenience, empathic concern, and social nudging) that can explain why privacy concerns sometimes do not predict disclosure behaviors (i.e., the privacy paradox). Identifying these boundary conditions advances privacy theories by establishing a theoretically sounder causal link between privacy concerns and disclosure behaviors, while helping to improve privacy policies, organizational privacy practices, and individuals' privacy decisions. Advisors/Committee Members: Dr. Mark Keil, Dr. Richard Baskerville, Dr. Likoebe Mohau Maruping, Dr. Zhenhui (Jack) Jiang.
Subjects/Keywords: privacy paradox; privacy concerns; disclosure behaviors; boundary conditions
Alashoor, T. (2019). Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors . (Doctoral Dissertation). Georgia State University. Retrieved from https://scholarworks.gsu.edu/cis_diss/70
Alashoor, Tawfiq. "Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors." 2019. Doctoral Dissertation, Georgia State University. Accessed January 27, 2020. https://scholarworks.gsu.edu/cis_diss/70.
Alashoor, Tawfiq. "Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors." 2019. Web. 27 Jan 2020.
Alashoor T. Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors. [Internet] [Doctoral dissertation]. Georgia State University; 2019. [cited 2020 Jan 27]. Available from: https://scholarworks.gsu.edu/cis_diss/70.
Alashoor T. Explaining the Privacy Paradox through Identifying Boundary Conditions of the Relationship between Privacy Concerns and Disclosure Behaviors. [Doctoral Dissertation]. Georgia State University; 2019. Available from: https://scholarworks.gsu.edu/cis_diss/70
28. Ghulam, Ashar. Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip.
Degree: PhD, Applied Mathematics, 2016, Louisiana State University
URL: etd-06142016-133552 ; https://digitalcommons.lsu.edu/gradschool_dissertations/3487
In this dissertation, a new method is developed to study BVPs of the modified Helmholtz and Helmholtz equations in a semi-infinite strip subject to Poincaré-type, impedance, and higher-order boundary conditions. The main machinery used here is the theory of Riemann–Hilbert problems (RHPs), the residue theory of complex variables, and the theory of integral transforms. A special kind of interconnected Laplace transforms is introduced, whose parameters are related through a branch of a multi-valued function. Chapter 1 gives a brief review of the unified transform method used to solve BVPs of linear and nonlinear integrable PDEs in convex polygons. The unified transform method is then applied to the BVP of the modified Helmholtz equation in a semi-infinite strip subject to Poincaré-type and impedance boundary conditions. In the case of impedance boundary conditions, two scalar RHPs are derived, and closed-form solutions of the given BVP are obtained. The difficulty in applying the unified transform method to the BVP of the Helmholtz equation in a semi-infinite strip is discussed later on. Chapter 2 applies the finite integral transform (FIT) method to the BVP for the Helmholtz equation in a semi-infinite strip subject to Poincaré-type and impedance boundary conditions. In the impedance case, a series representation of the solution is derived. The Burniston–Siewert method for finding integral representations of the roots of a certain transcendental equation is presented; these roots are required for both the FIT method and the RHP-based method. To implement the Burniston–Siewert method, we solve a scalar RHP on several segments of the real axis. In Chapter 3, the new method is applied to the Poincaré-type and impedance BVPs for the Helmholtz equation in a semi-infinite strip. For Poincaré-type boundary conditions, an order-two vector RHP is derived; in general, a closed-form solution of an order-two vector RHP cannot be found. For impedance boundary conditions, two scalar RHPs are derived whose closed-form solutions are found. The series representation of the solution of the BVP is then recovered using the inverse transform operator and the residue theory of complex variables. Numerical results are presented for various values of the parameters involved. It is observed that the FIT method and the new method generate exactly the same solution of the impedance BVP for the Helmholtz equation in a semi-infinite strip. In Chapter 4, the new method is applied to acoustic scattering from a semi-infinite strip subject to higher-order boundary conditions. Two scalar RHPs are derived whose closed-form solutions are found. A…
Subjects/Keywords: theory of integral transforms; theory of complex variables; residue theory of complex variables; higher order boundary conditions; Impedance boundary conditions; Poincare type boundary conditions; triangular case; Burniston-Siewert method; finite integral transform; scalar case; Modified Helmholtz equation; RHP; Riemann Hilbert problems; BVPs; boundary value problems
Ghulam, A. (2016). Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip . (Doctoral Dissertation). Louisiana State University. Retrieved from etd-06142016-133552 ; https://digitalcommons.lsu.edu/gradschool_dissertations/3487
Ghulam, Ashar. "Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip." 2016. Doctoral Dissertation, Louisiana State University. Accessed January 27, 2020. etd-06142016-133552 ; https://digitalcommons.lsu.edu/gradschool_dissertations/3487.
Ghulam, Ashar. "Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip." 2016. Web. 27 Jan 2020.
Ghulam A. Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip. [Internet] [Doctoral dissertation]. Louisiana State University; 2016. [cited 2020 Jan 27]. Available from: etd-06142016-133552 ; https://digitalcommons.lsu.edu/gradschool_dissertations/3487.
Ghulam A. Method of the Riemann-Hilbert Problem for the Solution of the Helmholtz Equation in a Semi-infinite Strip. [Doctoral Dissertation]. Louisiana State University; 2016. Available from: etd-06142016-133552 ; https://digitalcommons.lsu.edu/gradschool_dissertations/3487
29. Dhakal, Tek Nath. Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions.
Degree: MS, Mathematics and Statistics, 2018, Eastern Kentucky U
We apply a recent extension of a compression-expansion fixed point theorem of function type to a second order boundary value problem with Dirichlet boundary conditions. We show the existence of positive symmetric solutions of this boundary value problem.
Subjects/Keywords: Compression-expansion; Dirichlet boundary conditions; Fixed Point Theorem; function type; Second order Boundary Value problem; Symmetric Solutions; Analysis
Dhakal, T. N. (2018). Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions . (Masters Thesis). Eastern Kentucky U. Retrieved from https://encompass.eku.edu/etd/563
Dhakal, Tek Nath. "Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions." 2018. Masters Thesis, Eastern Kentucky U. Accessed January 27, 2020. https://encompass.eku.edu/etd/563.
Dhakal, Tek Nath. "Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions." 2018. Web. 27 Jan 2020.
Dhakal TN. Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions. [Internet] [Masters thesis]. Eastern Kentucky U; 2018. [cited 2020 Jan 27]. Available from: https://encompass.eku.edu/etd/563.
Dhakal TN. Positive Symmetric Solutions Of A Boundary Value Problem With Dirichlet Boundary Conditions. [Masters Thesis]. Eastern Kentucky U; 2018. Available from: https://encompass.eku.edu/etd/563
30. He, Xin. SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING.
Degree: PhD, Department of Mechanical Engineering-Engineering Mechanics, 2019, Michigan Technological University
URL: https://digitalcommons.mtu.edu/etdr/854
This dissertation comprises two parts. Part I includes the study of compressible gas boundary layer (BL) flows over a flat plate with velocity-slip and temperature-jump boundary conditions (BCs). The work mainly focuses on the effects of the BCs on base flow profiles, shear stress, and heat transfer at the plate surface. The governing ordinary differential equations (ODEs) for the flow and temperature fields stay the same; however, the BCs on the plate surface change. With the new BCs, it is still possible to form self-similar solutions, and shooting methods remain applicable to solve the velocity profiles station by station. There are two types of velocity-temperature field couplings, in which the temperature field may or may not affect the velocity field. The velocity profiles may contain an inflection point, which indicates a higher probability of flow instability. Part I also includes linear stability analysis of high-speed, compressible, rarefied boundary layer flows over a flat plate. The base boundary layer flows consider the velocity-slip effect, but no temperature jumps are considered. The eigenvectors for the perturbations have larger amplitudes, indicating a higher chance of flow instability; the relatively larger eigenvalue αi also confirms this point. The second part of the work concentrates on a simple model for micro-plasma jet end flow and its electric field. It is assumed that the jet exit is planar, the spray is in vacuum, and the gas at the exit is Maxwellian. Gaskinetic theory is adopted to compute the flowfield properties. The gas is also assumed to be weakly charged, and the Boltzmann relation is adopted to compute the potential and electric fields. A set of analytical solutions for flowfield density, velocity, temperature components, and potential and electric fields is obtained. Corresponding farfield solutions for these properties are also obtained. These solutions offer many insights; for example, they suggest that Simons' plume model or the cosine law may be improper for describing these plume flows. At farfield, the particles travel outward along straight lines, and detailed solutions for density, velocity components, and fluxes are provided. It shall be mentioned that the electric field solutions are very crude; however, they are analytical and serve as a starting point for developing a more advanced model. By comparison, numerical simulations can only offer large amounts of data in which the physical insights are buried. These analytical solutions can be used as a proper sub-grid model, properly connecting the upstream Taylor cone-jet model, which may be studied with an analytical method, to its downstream plasma plume flows, which must be simulated with particle methods. Advisors/Committee Members: Chunpei Cai.
Subjects/Keywords: Boundary layer flow; linear stability analysis; slip boundary conditions; micro-plasma; Fluid Dynamics; Plasma and Beam Physics
He, X. (2019). SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING . (Doctoral Dissertation). Michigan Technological University. Retrieved from https://digitalcommons.mtu.edu/etdr/854
He, Xin. "SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING." 2019. Doctoral Dissertation, Michigan Technological University. Accessed January 27, 2020. https://digitalcommons.mtu.edu/etdr/854.
He, Xin. "SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING." 2019. Web. 27 Jan 2020.
He X. SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING. [Internet] [Doctoral dissertation]. Michigan Technological University; 2019. [cited 2020 Jan 27]. Available from: https://digitalcommons.mtu.edu/etdr/854.
He X. SLIP BOUNDARY LAYER FLOW STABILITY ANALYSIS AND MICRO-PLASMA JET END FLOW MODELLING. [Doctoral Dissertation]. Michigan Technological University; 2019. Available from: https://digitalcommons.mtu.edu/etdr/854
Finding a mistake in this 'proof' that $1 = 2$. [duplicate]
Possible duplicate: $2+2 = 5$? Error in proof (7 answers)
I have some idea of where the mistake in this 'proof' may be, but can't quite formalize.
We start with the trivially correct statement, $1 - 3 = 4 - 6$. Then, completing the square on the LHS, we add $\frac{9}{4}$ to both sides, giving us $1 - 3 + \frac{9}{4} = 4 - 6 + \frac{9}{4}$, which is still accurate. Then, we factor both sides into $\left(1 - \frac{3}{2}\right)^2 = \left(2 - \frac{3}{2}\right)^2$. I was confused particularly by this step, especially as to why this works on the right-hand side, but upon verifying it, it seems to hold.
The proof then, I think incorrectly, takes principal roots of both sides and writes that $1 - \frac{3}{2} = 2 - \frac{3}{2}$, which implies that $1 = 2$.
I believe the mistake is in not writing $1 - \frac{3}{2} = \pm \left(2 - \frac{3}{2}\right)$, but even this statement is only accurate if we take the negation on the RHS, though I'm not sure why, other than verifying it arithmetically, we would think to take only the negation.
So, my questions are:
(a) How does the factoring on the right-hand side work? I know that the standard method of factoring when completing the square is to take the square root of the first term, the sign of the second, and the square root of the third. But I don't quite understand 'why' this works or, for that matter, why it would work on the right-hand side, other than the fact that $4$ and $\frac{9}{4}$ are squares, though I don't understand the significance of the $-6$ here: if this is all that is required, that $-6$ could in principle be anything at all, which is surely not the case.
(b) Is it correct that the mistake in the proof is in taking principal square roots? What is the logic behind leaving out the $\pm$ here?
proof-explanation fake-proofs
Matt.P
marked as duplicate by Jam, Daniel Fischer Jul 13 '18 at 17:24
$\begingroup$ How come $-1/2$ is a "principal root" of $1/4$? $\endgroup$ – Lord Shark the Unknown Jul 13 '18 at 16:27
$\begingroup$ In fact they didn't take the principal square root on both sides! The principal square root is positive. $\endgroup$ – David C. Ullrich Jul 13 '18 at 16:27
$\begingroup$ $x^2=y^2 \implies x=y\,\lor x=-y$ $\endgroup$ – Jakobian Jul 13 '18 at 16:31
There's a lot of misdirection in that proof - the entire start of the proof is irrelevant.
If you start with $(2-1)^2=(1-2)^2$, can you conclude that $1-2=2-1$?
The principal value of $\sqrt{x^2}$ is not $x$, because $x^2=(-x)^2$ and thus we'd have $$-x=\sqrt{(-x)^2}=\sqrt{x^2}=x$$ for all $x.$
Thomas Andrews
For every real number $x$, $\sqrt{x^2} = \lvert x\rvert$.
That is the mistake: omitting the absolute value. Your belief is correct.
Alvin Lepik
(a): There is a well-known equality $a^2-2ab+b^2=(a-b)^2$. In this case, $a=2$ and $b=\dfrac{3}{2}$, so we have $a^2-2ab+b^2=4-6+\dfrac{9}{4}$, because $-2ab=-2\times 2\times \dfrac{3}{2}=-6.$
(b): For $a,b\in\mathbb{R}$, $a^2=b^2$ does not imply that $a=b$. It implies that $a^2-b^2=0$ or $(a-b)(a+b)=0$, so $a=b$ or $a=-b$, which can be informally written as $a=\pm b$.
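To make both answers concrete, here is a worked check of the numbers from the question (added for clarity):

$$1 - 3 + \tfrac{9}{4} = \tfrac{1}{4} = \left(1 - \tfrac{3}{2}\right)^2, \qquad 4 - 6 + \tfrac{9}{4} = \tfrac{1}{4} = \left(2 - \tfrac{3}{2}\right)^2.$$

Taking square roots correctly, with $\sqrt{u^2} = \lvert u \rvert$:

$$\left\lvert 1 - \tfrac{3}{2} \right\rvert = \tfrac{1}{2} = \left\lvert 2 - \tfrac{3}{2} \right\rvert \quad\Longrightarrow\quad 1 - \tfrac{3}{2} = -\left(2 - \tfrac{3}{2}\right),$$

so the only valid conclusion is $-\tfrac{1}{2} = -\tfrac{1}{2}$, not $1 = 2$.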
Title: Polaron-induced phonon localization and stiffening in rutile $$\mathrm{TiO_2}$$
Small polaron formation in transition metal oxides, like the prototypical material rutile $$\mathrm{TiO_2}$$, remains a puzzle and a challenge to simple theoretical treatment. In our combined experimental and theoretical study, we examine this problem using Raman spectroscopy of photoexcited samples and real-time time-dependent density functional theory (RT-TDDFT), which employs Ehrenfest dynamics to couple the electronic and ionic subsystems. We observe experimentally the unexpected stiffening of the $$A_{1g}$$ phonon mode under UV illumination and provide a theoretical explanation for this effect. Our analysis also reveals a possible reason for the observed anomalous temperature dependence of the Hall mobility. We conclude that small polaron formation in rutile $$\mathrm{TiO_2}$$ is a strongly nonadiabatic process that is adequately described by Ehrenfest dynamics at the time scales of polaron formation.
Kolesov, Grigory [1]; Kolesov, Boris A. [2]; Kaxiras, Efthimios [1]
Harvard Univ., Cambridge, MA (United States)
Novosibirsk State Univ. (Russia)
Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
Physical Review B
Journal Volume: 96; Journal Issue: 19; Journal ID: ISSN 2469-9950
American Physical Society (APS)
75 CONDENSED MATTER PHYSICS, SUPERCONDUCTIVITY AND SUPERFLUIDITY
Kolesov, Grigory, Kolesov, Boris A., and Kaxiras, Efthimios. Polaron-induced phonon localization and stiffening in rutile $\mathrm{TiO_2}$. United States: N. p., 2017. Web. doi:10.1103/PhysRevB.96.195165.
Kolesov, Grigory, Kolesov, Boris A., & Kaxiras, Efthimios. Polaron-induced phonon localization and stiffening in rutile $\mathrm{TiO_2}$. United States. doi:10.1103/PhysRevB.96.195165.
Kolesov, Grigory, Kolesov, Boris A., and Kaxiras, Efthimios. "Polaron-induced phonon localization and stiffening in rutile $\mathrm{TiO_2}$". United States. doi:10.1103/PhysRevB.96.195165. https://www.osti.gov/servlets/purl/1523842.
@article{osti_1523842,
title = {Polaron-induced phonon localization and stiffening in rutile $\mathrm{TiO_2}$},
author = {Kolesov, Grigory and Kolesov, Boris A. and Kaxiras, Efthimios},
abstractNote = {Small polaron formation in transition metal oxides, like the prototypical material rutile $\mathrm{TiO_2}$, remains a puzzle and a challenge to simple theoretical treatment. In our combined experimental and theoretical study, we examine this problem using Raman spectroscopy of photoexcited samples and real-time time-dependent density functional theory (RT-TDDFT), which employs Ehrenfest dynamics to couple the electronic and ionic subsystems. We observe experimentally the unexpected stiffening of the $A_{1g}$ phonon mode under UV illumination and provide a theoretical explanation for this effect. Our analysis also reveals a possible reason for the observed anomalous temperature dependence of the Hall mobility. Finally, small polaron formation in rutile $\mathrm{TiO_2}$ is a strongly nonadiabatic process and is adequately described by Ehrenfest dynamics at time scales of polaron formation.},
journal = {Physical Review B},
volume = {96},
number = {19},
year = {2017},
doi = {10.1103/PhysRevB.96.195165}
}
BMC Complementary Medicine and Therapies
In vitro and in silico analysis of 'Taikong blue' lavender essential oil in LPS-induced HaCaT cells and RAW264.7 murine macrophages
Mengya Wei, Fei Liu, Rifat Nowshin Raka, Jie Xiang, Junsong Xiao, Tingting Han, Fengjiao Guo, Suzhen Yang & Hua Wu
BMC Complementary Medicine and Therapies volume 22, Article number: 324 (2022)
'Taikong blue' lavender, a space-bred cultivar of Lavandula angustifolia, is one of the main lavender essential oil production crops in Xinjiang Province, China. Several cases of local usage indicated that 'Taikong blue' lavender essential oil (TLEO) had excellent anti-inflammatory and antioxidant properties for skin problems. However, to date, substantial data on these functions are lacking. In this study, we aimed to investigate the composition and bioactivities of TLEO and the potential underlying mechanisms through LPS-induced inflammatory models of HaCaT and RAW264.7 cells.
The composition of TLEO was determined by GC‒MS. To study the anti-inflammatory and antioxidative properties of TLEO, we induced inflammation in HaCaT and RAW264.7 cells with LPS. TLEO (0.001%–0.1%, v/v) was used to treat the inflamed cells, with dexamethasone (DEX, 10 μg/mL) as the standard drug. A variety of tests were carried out, including biochemical assays, ELISA, RT‒PCR, and western blotting. Molecular docking of the components was performed to predict potential ligands.
The GC‒MS analysis revealed that 53 compounds (> 0.01%) represented 99.76% of the TLEO, and the majority of them were esters. TLEO not only reduced the levels of oxidative stress indicators (NO, ROS, MDA, and iNOS at the mRNA and protein levels) but also protected the SOD and CAT activities. According to the RT‒PCR, ELISA, and Western blot results, TLEO decreased inflammation by inhibiting the expression of TNF-α, IL-1β, IL-6, and key proteins (IκBα, NF-кB p65, p50, JNK, and p38 MAPK) in MAPK-NF-кB signaling. Molecular docking results showed that all of the components (> 1% in TLEO) were potent candidate ligands for further research.
The theoretical evidence for TLEO in this study supported its use in skin care as a functional ingredient for cosmetics and pharmaceutics.
Skin is the largest organ of the body and the first line of defense protecting the organism against external stimuli, while macrophages are immune cells that protect host homeostasis by phagocytosis, cytokine production, and antigen presentation [1, 2]. Skin inflammation commonly presents as erythema, edema, mossy skin, and scaly plaques, and it can subsequently cause other symptoms, such as severe itching, skin aging, lesions, insomnia, and mental anxiety [3–6]. Inflammation plays a crucial role in many types of skin problems and diseases. Excessive inflammation promotes the overproduction of proinflammatory mediators and unbalances the immune system. This process involves the regulation of several enzymatic systems and signaling pathways, such as MAPK, TLR4, JAK-STAT, and NF-κB. While NO, ROS, and iNOS are major and typical markers of oxidative stress, TNF-α, IL-1β, and IL-6 are the most common proinflammatory cytokines. As a consequent response to inflammation, IκBα, NF-κB p65 and p50, JNK, and p38 MAPK are phosphorylated, ultimately activating the MAPK-NF-κB pathway [7].
Therefore, managing uncontrolled inflammation could be a good way to regulate inflammatory responses. Currently, two types of anti-inflammatory drugs are used in therapeutics: steroidal anti-inflammatory drugs (SAIDs) and nonsteroidal anti-inflammatory drugs (NSAIDs) [8]. Nevertheless, some of these drugs have side effects and unpredictable adverse reactions, including dry skin, desquamation, skin atrophy, telangiectasia, and erythema [9]. Natural plant extracts without such side effects have therefore received extensive attention for their safety and effectiveness.
Lavender essential oils (LEOs) are pale-yellow liquids with intense floral-herbal lavender scents, obtained mainly by steam or hydro distillation of the aerial parts of blooming lavenders [10]. LEOs have shown various bioactivities in different studies, including antidepressive, antioxidative, anti-inflammatory, antiplatelet, antithrombotic, antimutagenic, carminative (smooth muscle relaxing), and sedative activities. They have also been used to treat wounds, burns, insect bites, urinary infections, cardiac diseases, and eczema, and even to reduce blood sugar [11, 12]. Lavandula angustifolia Mill. was brought to Xinjiang, China, in the 1950s. 'Taikong blue' lavender is a new variety of L. angustifolia Mill. produced by space-breeding mutagenesis. From September 27th to October 15th, 2004, 100 g of L. angustifolia Mill. seeds were carried into space aboard the 20th recoverable science and technology experimental satellite, launched on a Long March 2-D carrier rocket. After a year of cultivation, the researchers discovered that the first space-bred 'Taikong blue' lavender had numerous advantages over L. angustifolia Mill. cultured at the same site, such as a longer blooming period and higher oil yield, and it has become a significant cultivated species in Xinjiang [13, 14].
Our preliminary experiments demonstrated that 'Taikong blue' lavender essential oil (TLEO) significantly inhibited LPS-induced IL-6 gene expression (p < 0.05) and outperformed two other lavender essential oils from Xinjiang, 'French blue' lavender essential oil (FLEO) and 701 lavender essential oil (701 LEO) (see supplementary data Fig. S8), indicating that TLEO may have good anti-inflammatory activity. However, its anti-inflammatory and antioxidative activities and the related mechanisms have been little studied.
Keratinocytes constitute 95% of the skin's outer layer, and as the primary interface, they not only provide barrier protection but also participate in initial immune modulatory responses. As inflammatory reactions develop, macrophages in the deeper skin play a vital role as controllers of inflammation, monitoring the environment and secreting various stimulators and cytokines that participate in immune regulation. In other words, macrophages protect skin homeostasis while keratinocytes provide immunological responses [15]. Therefore, in this study, we investigated TLEO's chemical composition and its in vitro anti-inflammatory and antioxidative potential in an LPS-induced inflammatory model of human keratinocytes (HaCaT) and RAW264.7 murine macrophages. In addition, we used molecular docking to explore the potential interactions between TLEO components and pathway proteins.
Chemical reagents and antibodies
TLEO ≥ 98% (HPLC) was purchased from Xinjing Eprhan Spices Co., Ltd. Lipopolysaccharide (LPS, from Escherichia coli) and dexamethasone (DEX) were purchased from Sigma (St. Louis, USA). Dulbecco's modified Eagle's medium (DMEM) and phosphate buffered saline (PBS) were purchased from HyClone (USA), and 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was purchased from Beyotime Biotechnology Co., Ltd. (Shanghai). Fetal bovine serum (FBS), glutamine, penicillin/streptomycin (P/S), and 0.25% trypsin were obtained from Zhongsheng Aobang Biotechnology Co., Ltd. (Beijing). ROS, NO, MDA, SOD, and CAT assay kits were purchased from Beyotime Biotechnology Co., Ltd. (Shanghai). Antibodies for Western blotting and ELISA kits were from ABclonal Biotechnology Co., Ltd. (Wuhan).
GC‒MS Analysis
GC–MS analysis of TLEO was performed on a Thermo Fisher Trace 1310 GC system coupled to a Thermo Fisher ISQ LT mass detector (Thermo Fisher Scientific, Waltham, MA) with a DB-FFAP column (30 m × 0.25 mm × 0.25 μm; J & W Scientific, Folsom, CA, USA). One microliter of TLEO and 1 μL of 2-nonyl ketone were dissolved in 998 μL of methylene chloride, mixed well, and filtered through a 0.22 μm filter. The mass-selective detector was operated in electron-impact ionization (EI) mode at 70 eV with a mass scan range from m/z 50 to m/z 500. The oven was held at 40 °C for 3 min, ramped at 6 °C/min to 50 °C, at 7 °C/min to 130 °C (held 1 min), at 2 °C/min to 140 °C (held 1 min), at 3 °C/min to 150 °C (held 1 min), at 5 °C/min to 160 °C (held 1 min), and finally at 7 °C/min to 230 °C (held 5 min). The injection and detection temperatures were 250 °C, and injection was splitless. Helium was the carrier gas at a flow rate of 1.3 mL/min. The TLEO constituents were identified by comparing the mass fragmentation profiles and the retention indices of the chromatographic peaks with the National Institute of Standards and Technology (NIST) MS spectral database (version 2017).
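For reference, when identification relies on retention indices under a temperature-programmed run, the van den Dool–Kratz formula is the conventional choice (the paper does not state which formula was used; this is a standard assumption):

$$RI_x = 100\,n + 100\,\frac{t_{R,x} - t_{R,n}}{t_{R,n+1} - t_{R,n}}$$

where $t_{R,x}$ is the retention time of analyte $x$, and $t_{R,n}$ and $t_{R,n+1}$ are the retention times of the n-alkanes with $n$ and $n+1$ carbon atoms eluting immediately before and after it.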
Cell lines, growth conditions, and treatments
HaCaT, a spontaneously immortalized human keratinocyte cell line, was purchased from the Peking Union Cell Resource Center (Beijing, China). HaCaT cells were cultured in DMEM with 10% (v/v) FBS, 1% (v/v) P/S, and 1% (v/v) glutamine. RAW264.7 murine macrophages were purchased from Shanghai Fuheng Biological Co., Ltd., and cultured in DMEM with 10% FBS, 1% P/S, 1% sodium pyruvate, and 1% glutamine. The cells were maintained in a humidified 5% CO2 incubator at 37 °C and were subcultured every 2–3 days to maintain logarithmic growth. After 24 h of culture, HaCaT and RAW264.7 cells were treated with LPS (2.5 μg/mL for 20 h and 1 μg/mL for 12 h, respectively). DEX solution (10 μg/mL) or different concentrations (0.001%, 0.01%, 0.1%) of TLEO in serum-free media were then added to the cell cultures. After 12 h, samples were collected and processed for the following experiments.
Cell viability assay
The cytotoxic activities of TLEO alone were measured by colorimetric MTT [3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyltetrazolium bromide] assays with HaCaT and RAW264.7 cells. HaCaT and RAW264.7 cell suspensions were added to each well of a 96-well plate at approximately 2 × 10³ cells per well. After 24 h of incubation, different concentrations (0.001–1% v/v) of TLEO were added to each well and cultured for 12 h. The solution was then carefully removed, and 10 µL of 5 mg/mL MTT was added. After incubation at 37 °C for 4 h, the resulting formazan crystals were dissolved in 100 µL of sodium dodecyl sulfate solution (Macklin, Shanghai, China). The absorbance was measured using an Infinite M200PRO microplate reader (Tecan) at a wavelength of 550 nm.
To analyze the protective effects of TLEO in LPS-induced cells, HaCaT and RAW264.7 cells were grown and treated in 96-well plates as described above. After 12 h of culture, the cells were processed for MTT assays as described previously [15]. Cell viability was calculated as the percentage of MTT absorption using the following formula:
$$\mathrm{Cell\ viability}\ (\%) = \frac{\mathrm{OD}_{\mathrm{sample}} - \mathrm{OD}_{\mathrm{blank}}}{\mathrm{OD}_{\mathrm{control}} - \mathrm{OD}_{\mathrm{blank}}} \times 100$$
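A minimal computational sketch of this formula (the OD values below are hypothetical, not from the paper):

```python
def cell_viability(od_sample, od_control, od_blank):
    """Percent viability from MTT absorbance readings at 550 nm."""
    return (od_sample - od_blank) / (od_control - od_blank) * 100

# Hypothetical example: treated well OD 0.82, untreated control OD 0.90, blank OD 0.05
print(round(cell_viability(0.82, 0.90, 0.05), 1))  # -> 90.6
```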
Reactive Oxygen Species (ROS) level determination and Nitric Oxide (NO) production assay
HaCaT and RAW264.7 cell suspensions were added to each well of a 96-well plate, at approximately 2 × 10³ cells per well, to determine ROS and NO levels. ROS production was identified using the ROS Detection Kit (Beyotime Biotechnology). In brief, the supernatant was discarded, and 2,7-dichlorodihydrofluorescein diacetate (DCFH-DA) (10 μmol/L; Beyotime Biotechnology) diluted with DMEM was added to the plate. The cells were then cultured for 20 min in the incubator. Cells were washed three times with FBS-free DMEM to remove DCFH-DA from the plate. The ROS level was measured using a microplate reader with excitation at 488 nm and emission at 525 nm.
After the treatments described above, the NO production assay was performed in triplicate using the Griess assay according to the kit instructions. The absorbance values of the colored solution were measured using a microplate reader at 540 nm. These values were converted to micromoles per liter (μmol/L) with a standard curve obtained by adding 0–80 μmol/L sodium nitrite to fresh culture media. This experiment was performed in triplicate [16].
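A sketch of the standard-curve conversion (the calibration readings below are illustrative; the paper does not report its raw calibration data):

```python
import numpy as np

# Illustrative calibration: absorbance at 540 nm for nitrite standards (umol/L)
std_conc = np.array([0, 10, 20, 40, 60, 80], dtype=float)
std_abs = np.array([0.05, 0.12, 0.19, 0.33, 0.47, 0.61])

# Fit concentration as a linear function of absorbance
slope, intercept = np.polyfit(std_abs, std_conc, 1)

def no_concentration(a540):
    """Convert a Griess-assay absorbance reading (540 nm) to umol/L nitrite."""
    return slope * a540 + intercept

print(round(no_concentration(0.28), 1))  # -> 32.9 with these illustrative standards
```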
Biochemical assay
Three milliliters of HaCaT and RAW264.7 cells at the logarithmic growth stage were inoculated in 35 mm dishes at densities of 1.5 × 10⁶ cells/dish and 1.8 × 10⁶ cells/dish, respectively. After the treatments described in 2.3, MDA levels in cells were measured using a Lipid Peroxidation MDA Assay Kit. MDA is an end-product of fatty acid peroxidation and can be measured via a chromogenic reaction between MDA and thiobarbituric acid (TBA). Briefly, 100 μL of cell lysate was mixed with 200 μL of MDA working solution, incubated in a 100 °C water bath for 15 min, and then cooled to 25 °C. The mixture was centrifuged at 1000 × g for 10 min, and 200 μL of supernatant was used to measure absorbance at 532 nm. The TBARS concentration was calculated from an MDA standard curve.
For the CAT activity assay, cells were treated as described in 2.3. The cell lysates were centrifuged at 1600 rpm at 4 °C for 20 min, the supernatant was diluted to the proper concentration, and the protein concentrations were determined by a BCA protein assay kit (Beyotime Institute of Biotechnology). The catalase activity was measured following the kit's instructions by a microplate reader at a wavelength of 520 nm.
SOD activity was measured by a total superoxide dismutase assay kit with WST-8. After the same treatments described in 2.3, the protein concentrations of the cell lysate supernatant were determined by a BCA protein assay kit (Beyotime Biotechnology), and SOD activity was measured by an infinite microplate reader at a wavelength of 450 nm according to the manufacturer's instructions [15, 16].
qRT‒PCR
Quantitative real-time polymerase chain reaction (qRT‒PCR) was used to detect the mRNA levels of proinflammatory cytokines. Three milliliters of cells were plated in a petri dish (35 mm) at a density of 1.5 × 10⁶ cells/dish (HaCaT) or 1.8 × 10⁶ cells/dish (RAW264.7). Briefly, after the treatments of the cells as described in 2.3, total RNA was extracted from the cells with TRIzol. Complementary DNA (cDNA) synthesis was conducted with a ReverTra Ace qPCR RT kit. Quantitative PCR was performed on a CFX96™ real-time PCR machine. The mRNA level was calculated using the 2^−ΔΔCT method [17]. The sequences of primers for qRT‒PCR are presented in Table 1.
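For clarity, a minimal sketch of the 2^−ΔΔCT fold-change calculation (the Ct values below are hypothetical, with a generic reference gene):

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method (Livak & Schmittgen)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt in treated cells
    d_ct_control = ct_target_control - ct_ref_control   # dCt in control cells
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target cytokine vs. reference gene, LPS-treated vs. control
print(round(fold_change(24.0, 18.0, 27.0, 18.2), 2))  # -> 6.96
```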
Table 1 Primers for detection of gene expression through qRT-PCR
Enzyme-linked immunosorbent assay (ELISA)
HaCaT cells (1.5 × 10⁶ cells/dish) and RAW264.7 cells (1.8 × 10⁶ cells/dish) were inoculated in 35 mm dishes and incubated for 24 h at 37 °C in 5% CO2. After the same treatments of the cells as described in 2.3, a brief centrifugation step at 2000 × g was applied to remove dead cells and cell debris, and the supernatants were subjected to ELISAs. ELISAs were performed following the manufacturer's instructions for TNF-α, IL-1β, and IL-6 levels.
Western blot (WB)
HaCaT cells (1.5 × 10⁶ cells/dish) and RAW264.7 cells (1.8 × 10⁶ cells/dish; 3 mL each) were seeded in 35 mm dishes. The cells were treated in the same way as in 2.3. The cells were then washed three times with cold PBS, and the proteins were extracted with a Total Protein Extraction kit. Protein concentrations were determined by a BCA protein assay kit. Equal amounts of protein were separated in polyacrylamide gels by SDS‒PAGE and subsequently transferred to PVDF membranes. PVDF membranes were blocked with 5% nonfat milk in PBST and incubated with primary antibody overnight at 4 °C. Subsequently, the PVDF membranes were washed with PBST and incubated with secondary antibody for 1 h at room temperature. Bands were analyzed by chemiluminescence (Tanon 5200, Beijing Yuanping Hao Biotech) with horseradish peroxidase-conjugated IgG [15].
Molecular docking analysis
The ADME properties of all TLEO components were predicted using SwissADME. The 3D conformations of all 11 components (> 1% in the GC‒MS results) were downloaded from the PubChem database (see Supplementary Fig. S1). The ligands were prepared by energy minimization using the MMFF94 force field. After sequence analysis, the 3D structures of the IκBα (Human (H): 1NFI; Mouse (M): SWISS-MODEL), JNK (H: 3NPC; M: SWISS-MODEL), p50 (H: 2O61_B chain; M: 2V2T), p65 (H: 2O61_A chain; M: 6GGR), and p38 MAPK (H: 1M7Q; M: 6SOI) proteins for humans and mice were obtained from the PDB or constructed when needed [18, 19] (see Supplementary Table S2 and Fig. S2). The proteins were energy minimized with Swiss-PDB and prepared by cleaning and adding charges. Molecular docking was performed using the CB-Dock web server, and the best model with the lowest RMSD was chosen for each protein-component docking [20]. Discovery Studio Visualizer (DSV) software was used to study the interactions of the docked complexes.
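As an illustration of the ligand-preparation step, here is a minimal sketch using RDKit (an assumption: the paper names only the MMFF94 force field, not the minimization software; linalool, a known lavender constituent, stands in as the example ligand):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Example ligand: linalool, built from its SMILES string
mol = Chem.MolFromSmiles("CC(C)=CCCC(C)(O)C=C")
mol = Chem.AddHs(mol)  # explicit hydrogens are needed for 3D geometry

AllChem.EmbedMolecule(mol, randomSeed=42)                # generate an initial 3D conformer
AllChem.MMFFOptimizeMolecule(mol, mmffVariant="MMFF94")  # MMFF94 energy minimization

Chem.MolToMolFile(mol, "linalool_min.mol")  # write the minimized structure for docking
```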
All assays were conducted at least three times with three independent sample preparations. All data are expressed as the means ± standard deviations (SD). One-way analysis of variance (ANOVA) followed by Scheffé's method was performed in SPSS software (version 19.0; SPSS, Inc., USA) to analyze the differences between means. Differences were considered significant at p < 0.05. The figures in this paper were drawn with Origin 2019b.
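For readers reproducing the statistics outside SPSS, a minimal Python sketch of one-way ANOVA followed by Scheffé's criterion is shown below. The viability values are placeholders; SciPy supplies the ANOVA, while the Scheffé margin is computed manually since SciPy does not ship a Scheffé test:

```python
# One-way ANOVA plus a manual Scheffé post hoc comparison (sketch).
import numpy as np
from scipy import stats

groups = [np.array([92.1, 95.3, 90.8]),   # e.g. control (placeholder data)
          np.array([70.2, 68.9, 72.5]),   # e.g. LPS
          np.array([88.0, 85.4, 87.1])]   # e.g. LPS + 0.01% TLEO

f_stat, p_value = stats.f_oneway(*groups)

# Scheffé: |mean_i - mean_j| must exceed
# sqrt((k-1) * F_crit * MSW * (1/n_i + 1/n_j))
k = len(groups)
n_total = sum(len(g) for g in groups)
msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)

for i in range(k):
    for j in range(i + 1, k):
        diff = abs(groups[i].mean() - groups[j].mean())
        margin = np.sqrt((k - 1) * f_crit * msw *
                         (1 / len(groups[i]) + 1 / len(groups[j])))
        print(f"group {i} vs {j}: significant = {diff > margin}")
```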
Chemical profiling of TLEO
For TLEO, 53 compounds were identified and are listed in Table 2. In total, the identified compounds accounted for 99.73% of TLEO and fell into six major groups: esters (51.08%), alcohols (33.14%), monoterpenoids (7.11%), sesquiterpenoids (4.74%), monoterpenes (3.35%), and ketones (0.09%); the remaining 0.22% comprised cryptone and p-cymen-8-ol. The total ion chromatogram is presented in Fig. S9 (see supplementary data).
Table 2 GC–MS analysis to determine the composition of TLEO
Effects of TLEO on LPS-treated cell viability
Different doses of TLEO were added to cell cultures with or without LPS treatment to determine their effects on cell viability. As shown in Fig. 1A-D, the viability of both cell lines was above 90% with 0.001–0.1% TLEO but below 25% with 1% TLEO. When 0.001–0.1% TLEO was administered to the two LPS-induced inflammatory cell models, viability remained above 95% with no decreases. Therefore, TLEO concentrations of 0.001–0.1% were considered safe and were used for the subsequent experiments.
Cell viability of TLEO-treated HaCaT cells and RAW264.7 murine macrophages measured by MTT assay. A: Effects of TLEO on HaCaT cell cytotoxicity (without LPS); B: Effects of TLEO on LPS-induced HaCaT cells; C: Effects of TLEO on RAW264.7 cell cytotoxicity (without LPS); D: Effects of TLEO on LPS-induced RAW264.7 cells. Data are means ± S.D. (n = 6). Statistical analysis was performed by one-way ANOVA with Scheffé's test
Effects of TLEO on oxidative stress responses in the LPS-induced inflammatory cell model
As shown in Fig. 2, almost all TLEO treatments significantly (p < 0.05) alleviated the production of inflammatory mediators and oxidative factors and protected antioxidative enzyme activities, to a level similar to or even better than the control. Notably, macrophages were more sensitive to LPS treatment. TLEO (0.1%) inhibited NO and ROS production to a greater extent than DEX treatment. Compared with LPS-treated HaCaT and RAW264.7 cells, 0.1% TLEO reduced the NO yield by approximately 16.36 μmol/L and 111.83 μmol/L, respectively, exceeding the NO reduction achieved by DEX (9.54 μmol/L and 27.37 μmol/L, respectively). However, for the inhibition of iNOS protein expression, the best results were obtained with 0.01% TLEO (p < 0.05) in both cell lines, with inhibition rates of approximately 49.49% (HaCaT) and 32.49% (RAW264.7). 0.1% TLEO reduced LPS-stimulated ROS production in HaCaT and RAW264.7 cells by 74.09% and 46.75%, respectively, higher than the reductions in the DEX treatment group (12.22% and 32.80%). For the inhibition of MDA production, 0.01% TLEO significantly (p < 0.05) reduced the LPS-induced MDA content in HaCaT cells from 52.42 μmol/mg to 29.70 μmol/mg, and 0.001–0.1% TLEO produced a similarly significant (p < 0.05) downregulation in RAW264.7 cells, reducing the MDA content by 5.88, 10.38, and 8.37 μmol/mg, respectively. Under 0.01% TLEO treatment, the SOD activity of the two LPS-stimulated cell lines recovered to 13.45 U/mg and 30.00 U/mg, respectively; this effect was similar to that of the DEX treatment group (13.77 U/mg and 28.88 U/mg), and both reached the normal levels (12.58 U/mg and 30.27 U/mg). A similar recovery of CAT was observed only with 0.1% TLEO in LPS-treated HaCaT cells (p < 0.05), which increased CAT activity by 15.58 U/mg.
Effects of TLEO on the oxidative stress markers in LPS-induced HaCaT cells and RAW264.7 murine macrophages. A: Effects of TLEO on NO production in LPS-induced HaCaT cells; B: Effects of TLEO on iNOS protein expression in LPS-induced HaCaT cells; C: Effects of TLEO on NO production in LPS-induced RAW264.7 cells; D: Effects of TLEO on iNOS protein expression in LPS-induced RAW264.7 cells; E, F: Effects of TLEO on ROS production and MDA activity in LPS-induced HaCaT cells; G, H: Effects of TLEO on ROS production and MDA activity in LPS-induced RAW264.7 cells; I, J: Effects of TLEO on SOD and CAT activities in LPS-induced HaCaT cells; K, L: Effects of TLEO on SOD and CAT activities in LPS-induced RAW264.7 cells. The data are the means ± S.D. (n = 3). Statistical analysis was performed by one-way ANOVA with Scheffé's test. "a", "b", "c", and "d" indicate a significant difference (p < 0.05) compared with the LPS-treated group
Effects of TLEO on the expression of inflammatory cytokines
TLEO inhibited the expression of TNF-α, IL-1β, and IL-6 (p < 0.05) at both the mRNA and protein levels in LPS-treated RAW264.7 cells. At the mRNA level, the TLEO inhibition rates for TNF-α, IL-1β, and IL-6 ranged from 40%–50%, 12%–72%, and 40%–85%, respectively. At the protein level, 0.001%–0.1% TLEO reduced LPS-induced TNF-α production by 462.79 pg/mL, 484.50 pg/mL, and 497.73 pg/mL, respectively, similar to the DEX treatment group (476.11 pg/mL); reduced LPS-induced IL-1β production to 20.39 pg/mL, 20.26 pg/mL, and 17.52 pg/mL, respectively, similar to the DEX treatment group (20.37 pg/mL); and reduced LPS-induced IL-6 production by 272.87 pg/mL, 432.60 pg/mL, and 774.83 pg/mL, respectively, in a dose-dependent manner. In LPS-activated HaCaT cells, by contrast, DEX showed the best inhibition of cytokine expression, with inhibition rates of 76.98%, 60.03%, and 62.65% for TNF-α, IL-1β, and IL-6, respectively (Fig. 3). LPS alone increased the production of TNF-α, IL-1β, and IL-6, and the addition of 0.1% TLEO together with LPS further promoted it. The increased cytokine expression was reduced significantly by DEX and slightly by 0.01% TLEO, whose inhibition rates on the mRNA expression of TNF-α, IL-1β, and IL-6 were 26.34%, 14.51%, and 25.81%, respectively. Similarly, the 0.01% TLEO treatment group achieved the best inhibitory effect on the secretion of these three inflammatory factors in HaCaT cells. In LPS-treated RAW264.7 cells, however, TLEO exhibited a dose-dependent effect on the LPS-induced expression of TNF-α, IL-1β, and IL-6 mRNA and protein; on average, 0.1% TLEO produced better results than DEX and the other TLEO doses (Fig. 3) (p < 0.05).
Effects of TLEO on the mRNA and protein levels of pro-inflammatory cytokines in LPS-induced HaCaT and RAW264.7 cells. A, B: Effects of TLEO on the expression of TNF-α at the mRNA and protein levels in LPS-induced HaCaT cells; C, D: Effects of TLEO on the expression of IL-1β at the mRNA and protein levels in LPS-induced HaCaT cells; E, F: Effects of TLEO on the expression of IL-6 at the mRNA and protein levels in LPS-induced HaCaT cells; G, H: Effects of TLEO on the expression of TNF-α at the mRNA and protein levels in LPS-induced RAW264.7 cells; I, J: Effects of TLEO on the expression of IL-1β at the mRNA and protein levels in LPS-induced RAW264.7 cells; K, L: Effects of TLEO on the expression of IL-6 at the mRNA and protein levels in LPS-induced RAW264.7 cells. The data are the means ± S.D. (n = 3). Statistical analysis was performed by one-way ANOVA with Scheffé's test. "a", "b", "c", and "d" indicate a significant difference (p < 0.05) compared with the LPS-treated group
Effects of TLEO on NF-κB and MAPK activation
To investigate whether TLEO inhibits NF-κB activation via the MAPK pathway, we used Western blotting to examine the effects of TLEO on the LPS-stimulated phosphorylation of JNK and p38 MAPK in HaCaT and RAW264.7 cells. As shown in Fig. 4, all TLEO doses significantly suppressed the LPS-activated phosphorylation of JNK and p38 (p < 0.05). In the LPS-treated HaCaT model, 0.01% and 0.1% TLEO inhibited p38 phosphorylation by 36.35% and 43.46%, respectively, whereas the inhibition by DEX was not significant. TLEO at 0.001% and 0.01% significantly reduced JNK phosphorylation by 37.28% and 21.35%, respectively (p < 0.05), and the inhibitory effect of 0.001% TLEO on JNK phosphorylation was better than that of DEX (inhibition rate 23.82%). In the LPS-stimulated RAW264.7 inflammatory model, 0.001%–0.1% TLEO inhibited p38 phosphorylation by 20.15%, 34.23%, and 23.82%, respectively, all slightly lower than the DEX treatment group (41.89%). Similarly, 0.001%–0.1% TLEO significantly inhibited JNK phosphorylation, with inhibition rates of 39.45%, 26.82%, and 25.61%, respectively (p < 0.05). DEX also slightly reduced JNK phosphorylation compared with the LPS group, although its effect was not as strong as that of TLEO.
Effects of TLEO on MAPK-NF-κB pathway cascade protein expression activated in LPS-induced HaCaT and RAW264.7 cells. A-C: Effects of TLEO on the phosphorylation of IκB-α, p65, and p50, respectively, in LPS-induced HaCaT cells; D-F: Effects of TLEO on the expression of IκB-α, p65, and p50, respectively, in LPS-induced RAW264.7 cells; G-I: Effects of TLEO on the phosphorylation of p38 MAPK and JNK and the expression of COX-2 in LPS-induced HaCaT cells; J-L: Effects of TLEO on the phosphorylation of p38 MAPK and JNK and the expression of COX-2 in LPS-induced RAW264.7 cells. The data are the means ± S.D. (n = 3). Statistical analysis was performed by one-way ANOVA with Scheffé's test. "a", "b", "c", and "d" indicate a significant difference (p < 0.05) compared with the LPS-treated group
Since the phosphorylation of IκBα, p50, and p65 and the subsequent degradation of IκBα are essential steps in LPS-induced NF-κB activation, we examined the effect of TLEO on these processes by Western blot analysis. Our results showed that 0.1% TLEO decreased the phosphorylation and degradation of IκBα and p50 but promoted p65 phosphorylation in LPS-activated HaCaT cells (Fig. 4); the inhibition rates of 0.1% TLEO on IκBα and p50 phosphorylation reached 65.10% and 60.44%, respectively (p < 0.05). In LPS-treated RAW264.7 cells, however, all TLEO doses had almost the same inhibitory effects on the phosphorylation of these three NF-κB proteins: the inhibition rates of 0.001%–0.1% TLEO on IκBα, p65, and p50 phosphorylation ranged from 25%–50%, 15%–25%, and 10%–20%, respectively. The inhibition rate of IκBα phosphorylation in the 0.001% TLEO treatment group (48.05%) was significantly higher than that in the DEX treatment group (25.05%) (p < 0.05). The inhibition rates of p65 phosphorylation in the 0.001%–0.1% TLEO treatment groups were 24.80%, 23.30%, and 18.95%, respectively, similar to the DEX treatment group (22.75%), while those of p50 phosphorylation were 18.59%, 14.29%, and 14.57%, respectively, significantly lower than the DEX treatment group (27.79%) (p < 0.05). However, in both LPS-stimulated cell lines, DEX had only slight or no effects on the downregulation of NF-κB pathway proteins (Fig. 5).
Schematic representation of the proposed mechanism by which TLEO downregulates signaling to reduce inflammation and oxidative stress
Molecular docking analysis of TLEO components with pathway proteins
The predicted ADME properties of all TLEO components are presented in Supplementary Table S1. Linalyl acetate, linalool, lavandulol acetate, cis-β-ocimene, 1-octen-3-yl acetate, caryophyllene, terpinen-4-ol, (4E,6E)-alloocimene, β-ocimene, geranyl acetate, and β-myrcene were chosen as ligands for molecular docking based on the GC‒MS results (> 1%). Mouse IκB-α and JNK proteins were constructed and validated for use (see Supplementary Table S2 and Fig. S2).
Overall, caryophyllene gave the best results for all the proteins in both humans and mice, followed by geranyl acetate and lavandulol acetate (Fig. 6 and Table 3). Caryophyllene bound to the human JNK and p38 proteins with affinities of −7.2 and −7.1 kcal/mol, respectively. All the components interacted weakly with IκB-α (affinity > −5.0 kcal/mol), and p65 likewise showed weak binding (affinity > −5.0 kcal/mol) with all components except caryophyllene. Lavandulol acetate interacted with human p50 at −6.2 kcal/mol, whereas for mouse p50, caryophyllene showed the best binding affinity of −6.0 kcal/mol (see Supplementary Tables S3, S4, S5, S6, S7 and Figs. S3, S4, S5, S6, S7).
Molecular docking of caryophyllene with pathway proteins. A: Human proteins IκB-α, JNK, p50, p65, and p38 MAPK; B: Mouse proteins IκB-α, JNK, p50, p65, and p38 MAPK
Table 3 Molecular docking of selective components with pathway proteins
The major components in TLEO are linalool and linalyl acetate (29.48% and 40.97%, respectively), similar to other L. angustifolia varieties. However, some components that are commonly found only in trace amounts (< 0.9%) in other LEOs were significantly higher in TLEO, such as lavandulol acetate (4.77%), cis-β-ocimene (3.04%), 1-octen-3-yl acetate (2.62%), caryophyllene (2.49%), terpinen-4-ol (2.44%), (4E,6E)-alloocimene (1.23%), β-ocimene (1.2%), geranyl acetate (1.11%), and β-myrcene (1.1%) [21, 22]. Space mutation breeding is one of the methods used to create superior crop varieties: after exposure to cosmic radiation, lavender seeds acquire genotypic and phenotypic changes under the stress of space radiation and microgravity, and through several generations of culture and selection, germplasms with useful and stable modified characteristics have been obtained [23]. 'Taikong blue' lavender is one such mutated cultivar, exhibiting high EO productivity, a long blooming period, and some changes in chemical composition [24]. Linalool and linalyl acetate are the main components of many essential oils known to have an anti-inflammatory effect [25].
Dexamethasone (DEX), a standard anti-inflammatory drug, was used as a positive control in this study. DEX not only reduces the production of inflammatory cytokines such as IL-6, TNF-α, and NO in LPS-induced inflammation [25,26,27] but also reduces MDA levels in rabbits suffering from oral ulcers [25, 26, 28]. Nevertheless, TLEO showed better effects than DEX on average in our analyses.
We induced inflammation in our experiments with lipopolysaccharide (LPS), an endotoxin produced by E. coli. Acting through Toll-like receptor 4 (TLR4), LPS activates macrophages, neutrophils, and epithelial cells such as keratinocytes, resulting in the production of numerous inflammatory factors and oxidative stress markers and leading to organ dysfunction (Caroline, 2015). Downregulation of the TLR4 pathway is therefore one strategy to reduce inflammation in cells. We studied the most prominent inflammation pathway, MAPK-NF-κB signaling. In the present study, TLEO strongly inhibited the excessive production of NO and ROS in LPS-treated HaCaT cells and RAW264.7 murine macrophages. This ROS-alleviating effect has also been observed by other researchers: Kozics et al. (2017) found that pretreatment with EO from L. angustifolia could inhibit the oxidative stress induced by H2O2 in human hepatoma cells (HepG2) [25, 27, 29], and EO from L. stoechas significantly decreased MDA production and increased SOD and CAT activities, inhibiting alloxan-induced oxidative stress in rats [30].
The ROS-alleviating effect of TLEO may be due to the downregulation of key proteins in MAPK-NF-κB signaling (Fig. 5). In the present study, TLEO lowered the phosphorylation of the p38 MAPK, JNK, IκB-α, p65, and p50 proteins and consequently caused a significant reduction in TNF-α, IL-1β, and IL-6 cytokine expression at both the mRNA and protein levels. A few studies have reported similar inhibition of NF-κB pathway proteins and cytokine expression by LEOs. For example, in a myocardial infarction rat model, L. angustifolia EO significantly reduced TNF-α and COX-2 expression by inhibiting NF-кB activity [31]. Baker et al. (2012) applied L. x intermedia essential oil against C. rodentium-induced colitis in C57BL/6 mice, and the expression of typical inflammatory factors, such as TNF-α, IFN-γ, IL-22, and iNOS, was remarkably downregulated [32].
Our preliminary experiments showed that TLEO exhibited a better anti-inflammatory effect than the essential oils of 'French blue' and '701' lavender, two varieties commonly used in Xinjiang's industry (Fig. S8). Compositional variation might be the key to explaining the differences in activity, but the compounds responsible remain unknown; molecular docking might shed light on this issue.
Considering all 11 ligands, the IκB-α protein from both humans and mice showed the weakest binding, and JNK showed the strongest. Caryophyllene, lavandulol acetate, and geranyl acetate were the top three ligands, binding to proteins in the MAPK-NF-κB signaling pathway with the most negative affinities. These three compounds are relatively abundant in TLEO but present only in trace amounts in other common LEOs. In addition, linalyl acetate, a major component of almost all lavender essential oils, also maintained relatively strong binding (< −5.0 kcal/mol), while 1-octen-3-yl acetate had the weakest interaction.
Caryophyllene interacted with JNK and p38 in both humans and mice with the most negative affinities (−7.0 kcal/mol or lower). The human JNK protein interacted with caryophyllene through five alkyl bonds at Arg 72, Leu 76–77, Ile 86, and Ile 147 and four van der Waals contacts. The ligand shared eight alkyl bonds and four van der Waals contacts with human p38 at amino acids ranging from residues 38 to 171, enriched in leucine residues; these properties make the p38-caryophyllene complex more stable than the JNK-caryophyllene complex in humans. In mice, however, JNK and p38 showed the same number of alkyl bonds, varying only in the number of van der Waals contacts. As p50 and p65 work as a dimer to activate the pathway, interaction with either of them could alter the signal. Caryophyllene interacted with both human and mouse p50 with an affinity of −6.0 kcal/mol; these complexes shared alkyl and van der Waals bonds at five common residues: Lys 92, Ile 94, Gly 162, Tyr 163, and Phe 217 (Table 4). The next three best ligand candidates were lavandulol acetate, geranyl acetate, and linalyl acetate (Table 3). All three interacted best with the JNK protein from both organisms, sharing more than eight alkyl bonds. Lavandulol acetate formed two conventional hydrogen bonds (Asn 114, Thr 183) with human JNK, while geranyl acetate (Met 111) and linalyl acetate (Thr 183) each formed one. For mouse JNK, geranyl acetate formed a conventional hydrogen bond at Met 111, a carbon-hydrogen bond at Leu 110, alkyl bonds at Ile 86, Leu 106, Met 108, and Leu 168, and five van der Waals contacts at Ile 32, Ala 53, Leu 88, Glu 109, and Val 158 (see Supplementary data). Experimental evidence also supports the anti-inflammatory effects of these compounds. A study in a rat skin wound excision model indicated that caryophyllene reduced IFN-γ, IL-1β, IL-6, and TNF-α levels and promoted wound healing through an anti-inflammatory mechanism [33]. Geranyl acetate can decrease inflammation and relieve pain in different experimental models, and linalyl acetate exhibited anti-inflammatory effects on carrageenan-induced edema in rats [34]. Linalool inhibits the generation and expression of inflammatory mediators (TNF-α, IL-6, IL-1, NO, and PGE2) by blocking the NF-κB and MAPK signaling pathways and activating the Nrf2/HO-1 signaling pathway [35,36,37,38,39,40].
Table 4 Interacting residues of pathway proteins with caryophyllene
Therefore, our overall molecular docking results suggest that the relatively high amounts of caryophyllene, lavandulol acetate, and geranyl acetate in TLEO might be the key to its better anti-inflammatory effect relative to other lavender essential oils, and that linalyl acetate is an important contributor to the anti-inflammatory activity of all lavender essential oils. We further predicted the ADME properties of the TLEO components, and the results suggested that all the components are safe for use in food and medication.
In summary, molecular biology and molecular docking analyses provide a combined perspective on the anti-inflammatory potential of TLEO. Our findings support the further development and utilization of TLEO in food, medicine, and cosmetics. The use of TLEO in trace amounts in everyday life could also have preventive potential, although this needs further investigation. As other LEOs are already used in cosmetics, ointments, and food flavorings, TLEO could be introduced in the same way, and clinical studies can be planned in the future to determine its effects in humans. Furthermore, the utilization of TLEO components individually or in other composition ratios is a possible area of future study.
The data used to support the findings of this study are available from the corresponding author upon request.
Duque GA, Descoteaux A. Macrophage cytokines: involvement in immunity and infectious diseases. Front Immunol. 2014;5:1–12.
Raka RN, Wu H, Xiao J, Hossen I, Cao Y, Huang M, et al. Human ectopic olfactory receptors and their food originated ligands: a review. Crit Rev Food Sci Nutr. 2021.
Kupper TS, Fuhlbrigge RC. Immune surveillance in the skin: mechanisms and clinical consequences. Nat Rev Immunol. 2004;4:211–22.
Kim BH, Choi MS, Lee HG, Lee SH, Noh KH, Kwon S, et al. Photoprotective potential of penta-O-Galloyl-β-D-Glucose by targeting NF-κB and MAPK signaling in UVB radiation-induced human dermal fibroblasts and mouse skin. Mol Cells. 2015;38:982–90.
Dorschner RA, Lee J, Cohen O, Costantini T, Baird A, Eliceiri BP. ECRG4 regulates neutrophil recruitment and CD44 expression during the inflammatory response to injury. Sci Adv. 2020;6(11):eaay0518.
Toron F, Neary MP, Smith TW, Gruben D, Romero W, Cha A, et al. Clinical and economic burden of mild-to-moderate atopic dermatitis in the UK: a propensity-score-matched case-control study. Dermatol Ther (Heidelb). 2021;11:907–28.
Liu T, Zhang L, Joo D, Sun SC. NF-κB signaling in inflammation. Signal Transduct Target Ther. 2017;2:17023.
Moreira R, Jervis PJ, Carvalho A, Ferreira PMT, Martins JA, Valentão P, et al. Biological evaluation of naproxen–dehydrodipeptide conjugates with self-hydrogelation capacity as dual LOX/COX inhibitors. Pharmaceutics. 2020;12(2):122.
Kumar A, Agarwal K, Singh M, Saxena A, Yadav P, Maurya AK, et al. Essential oil from waste leaves of Curcuma longa L. alleviates skin inflammation. Inflammopharmacology. 2018;26:1245–55.
Prusinowska R, Śmigielski KB. Composition, biological properties and therapeutic effects of lavender (Lavandula angustifolia L.): a review. Herba Pol. 2014;60:56–66.
Selmi S, Jallouli M, Gharbi N, Marzouki L. Hepatoprotective and renoprotective effects of lavender (Lavandula stoechas L.) essential oils against malathion-induced oxidative stress in young male mice. J Med Food. 2015;18:1103–11.
Huang MY, Liao MH, Wang YK, Huang YS, Wen HC. Effect of Lavender essential oil on LPS-stimulated inflammation. Am J Chin Med. 2012;40:845–59.
Hu LH, Lu Z, et al. Analysis and identification of chemical constituents of the essential oil of lavender (Lavandula angustifolia Mill.) induced by space flight. J Anhui Agric Univ. 2014;42(14):4211–2.
Jiang XM, Li M, et al. Breeding and application of new lavender variety Xinxun 2. J Anhui Agric Univ. 2014;42(4):1027-1028,1073.
Raka RN, Zhiqian D, Yue Y, Luchang Q, Suyeon P, Junsong X, et al. Pingyin rose essential oil alleviates LPS-induced inflammation in RAW 264.7 cells via the NF-κB pathway: an integrated in vitro and network pharmacology analysis. BMC Complement Med Ther. 2022;22(1):16.
Hossen I, Hua W, Mehmood A, Raka RN, Jingyi S, Jian-Ming J, et al. Glochidion ellipticum Wight extracts ameliorate dextran sulfate sodium-induced colitis in mice by modulating nuclear factor kappa-light-chain-enhancer of activated B cells signalling pathway. J Pharm Pharmacol. 2021;73:410–23.
Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2-ΔΔCT method. Methods. 2001;25:402–8.
Waterhouse A, Bertoni M, Bienert S, Studer G, Tauriello G, Gumienny R, et al. SWISS-MODEL: Homology modelling of protein structures and complexes. Nucleic Acids Res. 2018;46:W296-303.
Berman H, Henrick K, Nakamura H. Announcing the worldwide Protein Data Bank. Nat Struct Mol Biol. 2003;10(12):980.
Liu Y, Grimm M, Dai W tao, Hou M chun, Xiao ZX, Cao Y. CB-Dock: a web server for cavity detection-guided protein–ligand blind docking. Acta Pharmacol Sin. 2020;41:138–44.
Gao J, Zhang Y, Lai P. Chemical composition of the essential oil of Tripterospermum chinense. Chem Nat Compd. 2017;53:565–6.
Dong G, Bai X, Aimila A, Aisa HA, Maiwulanjiang M. Study on lavender essential oil chemical compositions by GC-MS and improved pGC. Molecules. 2020;25:1–8.
Mohanta TK, Mishra AK, Mohanta YK, Al-Harrasi A. Space breeding: the next-generation crops. Front Plant Sci. 2021;12:771985.
Hu LH, Lu Z, et al. Analysis and identification of chemical constituents of the essential oil of lavender (Lavandula angustifolia Mill.) induced by space flight. 2014. https://0-doi-org.brum.beds.ac.uk/10.13989/j.cnki.0517-6611.2014.14.072.
Hassan HM, Guo H, Yousef BA, Ping-Ping D, Zhang L, Jiang Z. Dexamethasone pretreatment alleviates isoniazid/lipopolysaccharide hepatotoxicity: inhibition of inflammatory and oxidative stress. Front Pharmacol. 2017;8:133.
Myers MJ, Farrell DE, Palmer DC, Post LO. Inflammatory mediator production in swine following endotoxin challenge with or without co-administration of dexamethasone. Int Immunopharmacol. 2003;3:571–9.
Kanai K, Itoh N, Ito Y, Nagai N, Hori Y, Chikazawa S, et al. Anti-inflammatory potency of oral disulfiram compared with dexamethasone on endotoxin-induced uveitis in rats. 2011.
Wu W, Ruan J, Li D, Tao H, Deng C, Wang R, et al. Effect of dexamethasone on levels of inflammatory factors and EGF mRNA in rabbits suffering from oral ulcers. Trop J Pharm Res. 2021;20:351–7.
Lee YH, Song GG. Neoplasma. 2013;60:607–16.
Sebai H, Selmi S, Rtibi K, Gharbi N, Sakly M. Protective effect of Lavandula stoechas and Rosmarinus officinalis essential oils against reproductive damage and oxidative stress in alloxan-induced diabetic rats. J Med Food. 2015;18:241–9.
Souri F, Rakhshan K, Erfani S, Azizi Y, Nasseri Maleki S, Aboutaleb N. Natural lavender oil (Lavandula angustifolia) exerts cardioprotective effects against myocardial infarction by targeting inflammation and oxidative stress. Inflammopharmacology. 2019;27:799–807.
Baker J, Brown K, Rajendiran E, Yip A, DeCoffe D, Dai C, et al. Medicinal lavender modulates the enteric microbiota to protect against Citrobacter rodentium-induced colitis. Am J Physiol Gastrointest Liver Physiol. 2012;303(7):G825–36.
Sérgio LF, Pereira GF, Fernanda BM, Tireli HM, Pena GV, Fernanda RP, et al. Beta-caryophyllene as an antioxidant, anti-inflammatory and re-epithelialization activities in a rat skin wound excision model. Oxid Med Cell Longev. 2022;2022:9004014. https://0-doi-org.brum.beds.ac.uk/10.1155/2022/9004014.
Quintans-Júnior LJ, Moreira JCF, Pasquali MAB, Rabie SMS, Pires AS, Schröder R, et al. ISRN Toxicol. 2013;2013:1–11. https://0-doi-org.brum.beds.ac.uk/10.1155/2013/459530.
Aprotosoaie AC, Hăncianu M, Costache II, Miron A. Linalool: a review on a key odorant molecule with valuable biological properties. Flavour Fragr J. 2014;29(4):193–219. https://0-doi-org.brum.beds.ac.uk/10.1002/ffj.3197.
Kim MG, Kim SM, Min JH, Kwon OK, Park MH, Park JW, et al. Anti-inflammatory effects of linalool on ovalbumin-induced pulmonary inflammation. Int Immunopharmacol. 2019;74:105706. https://0-doi-org.brum.beds.ac.uk/10.1016/j.intimp.2019.105706.
Li Y, Lv O, Zhou F, Li Q, Wu Z, Zheng Y. Linalool inhibits LPS-induced inflammation in BV2 microglia cells by activating Nrf2. Neurochem Res. 2015;40(7):1520–5. https://0-doi-org.brum.beds.ac.uk/10.1007/s11064-015-1629-7.
Ma J, Xu H, Wu J, Qu C, Sun F, Xu S. Linalool inhibits cigarette smoke-induced lung inflammation by inhibiting NF-κB activation. Int Immunopharmacol. 2015;29(2):708–13. https://0-doi-org.brum.beds.ac.uk/10.1016/j.intimp.2015.09.005.
Huo M, Cui X, Xue J, Chi G, Gao R, Deng X, et al. Anti-inflammatory effects of linalool in RAW 264.7 macrophages and lipopolysaccharide-induced lung injury model. J Surg Res. 2013;180(1):e47–54. https://0-doi-org.brum.beds.ac.uk/10.1016/j.jss.2012.10.050.
Horváth A, Pandur E, Sipos K, Micalizzi G, Mondello L, Böszörményi A, Birinyi P, Horváth G. Anti-inflammatory effects of lavender and eucalyptus essential oils on the in vitro cell culture model of bladder pain syndrome using T24 cells. BMC Complement Med Ther. 2022;22(1):119. https://0-doi-org.brum.beds.ac.uk/10.1186/s12906-022-03604-2.
This research was supported by the Beijing Natural Science Foundation (Grant No. 6212002), a Beijing Municipal Education Commission general project (KM202010011010), and the Enterprise Technological Innovation Project of Shandong Province (202250101132).
Mengya Wei, Fei Liu and Rifat Nowshin Raka contributed equally to this work.
Beijing Technology and Business University, Beijing, 100048, China
Mengya Wei, Rifat Nowshin Raka, Jie Xiang, Junsong Xiao & Hua Wu
Shandong Freda Biotech Co., Ltd, Ji'nan, 250101, Shandong, China
Fei Liu, Tingting Han, Fengjiao Guo & Suzhen Yang
Xinjiang Eprhan Spices Co., Ltd, Cocodala, 835213, Xinjiang, China
Mengya Wei
Fei Liu
Rifat Nowshin Raka
Jie Xiang
Junsong Xiao
Tingting Han
Fengjiao Guo
Suzhen Yang
Hua Wu
Wei Mengya: experiment plan, methodology, formal analysis. Liu Fei: experiment plan, data curation. Raka Rifat Nowshin: methodology (in silico), data curation, writing (original draft), editing. Jie Xiang: visualization. Dr. Xiao Junsong: formal analysis, editing and reviewing the draft. Tingting Han: experimental analysis. Fengjiao Guo: reviewing the draft. Suzhen Yang: fund acquisition, conceptualization. Dr. Wu Hua: conceptualization, project administration, fund acquisition, writing, editing, reviewing. The author(s) read and approved the final manuscript.
Correspondence to Suzhen Yang or Hua Wu.
There is no conflict to declare.
Fig. S1. The 11 ligands with their structures. Fig. S2. Ramachandran plots for mouse IκB-α (A) and JNK (B) proteins from SWISS-MODEL. Fig. S3. Molecular docking of TLEO components with human and mouse IκB-α protein. Fig. S4. Molecular docking of TLEO components with human and mouse JNK protein. Fig. S5. Molecular docking of TLEO components with human and mouse p50 protein. Fig. S6. Molecular docking of TLEO components with human and mouse p65 protein. Fig. S7. Molecular docking of TLEO components with human and mouse p38 protein. Fig. S8. Effects of TLEO on the mRNA and protein levels of the pro-inflammatory cytokine IL-6 in LPS-induced RAW264.7 cells. The data are the means ± S.D. (n = 3). Statistical analysis was performed by one-way ANOVA with Scheffé's test. "*" and "#" indicate a significant difference (p < 0.05) compared with the LPS-treated group. Fig. S9. Total ion chromatogram from GC-MS analysis of TLEO. Table S1. ADME analysis of TLEO components. Table S2. Validation of mouse proteins IκB-α and JNK. Table S3. Molecular docking of major TLEO components with protein IκB-α. Table S4. Molecular docking of major TLEO components with protein JNK. Table S5. Molecular docking of major TLEO components with protein p50. Table S6. Molecular docking of major TLEO components with protein p65. Table S7. Molecular docking of major TLEO components with protein p38.
Wei, M., Liu, F., Raka, R.N. et al. In vitro and in silico analysis of 'Taikong blue' lavender essential oil in LPS-induced HaCaT cells and RAW264.7 murine macrophages. BMC Complement Med Ther 22, 324 (2022). https://0-doi-org.brum.beds.ac.uk/10.1186/s12906-022-03800-0
Multiplex gene and phenotype network to characterize shared genetic pathways of epilepsy and autism
Jacqueline Peng, Yunyun Zhou & Kai Wang
Scientific Reports volume 11, Article number: 952 (2021)
It is well established that epilepsy and autism spectrum disorder (ASD) commonly co-occur; however, the biological mechanisms underlying this co-occurrence and its basis in their genetic susceptibility are not well understood. Our aim in this study is to characterize genetic modules of subgroups of epilepsy and autism genes that have similar phenotypic manifestations and biological functions. We first integrate a large number of expert-compiled and well-established epilepsy- and ASD-associated genes in a multiplex network, where one layer is connected through protein–protein interaction (PPI) and the other layer through gene-phenotype associations. We identify two modules in the multiplex network that are significantly enriched in genes associated with both epilepsy and autism as well as in genes highly expressed in brain tissues. We find that the first module, which represents the Gene Ontology category of ion transmembrane transport, is more epilepsy-focused, while the second module, representing synaptic signaling, is more ASD-focused. However, because of their enrichment in common genes and association with both epilepsy and ASD phenotypes, these modules point to genetic etiologies and biological processes shared between specific subtypes of epilepsy and ASD. Finally, we use our analysis to prioritize new candidate genes for epilepsy (i.e. ANK2, CACNA1E, CACNA2D3, GRIA2, DLG4) for further validation. The analytical approaches in our study can be applied to similar studies in the future to investigate the genetic connections between different human diseases.
Epilepsy and autism spectrum disorder (ASD) are two broad categories of brain disorders that are each characterized by substantial variability in the range of their clinical symptoms and strong but heterogeneous genetic association signals1,2,3,4,5,6,7,8,9,10. Epilepsy is a neurological disease characterized by recurrent seizures of different types. It is estimated that as much as 70% of epilepsies could have a strong genetic basis due to genetic defects11. ASD represents a complex range of neurodevelopmental conditions characterized by challenges in social interaction, nonverbal communication and repetitive behavior, each with varying degrees of impairment12. The heritability of ASD is estimated to be 56%-95% in various studies13,14,15, suggesting a strong genetic basis in ASD. Hereafter in the manuscript, we may use autism and ASD interchangeably for convenience.
There is a surprisingly high co-occurrence of these two disorders: for example, while about 2–3% of children are estimated to have epilepsy, that percentage rises to around 30% in autism cases16,17. One current hypothesis is that the co-occurrence of epilepsy and autism stems from the disruption of shared neurodevelopmental pathways, implicated by the relatively high number of genes associated with both disorders18,19. Certain biological pathways are involved in both disease processes, such as transcription regulation, cellular growth, and synaptic regulation20. The excitation/inhibition balance hypothesis suggests that neurodevelopmental defects, primarily of GABAergic and glutamatergic systems, lead to an imbalance in excitatory and inhibitory neural circuits that drives the pathogenesis of both disorders21. However, the exact mechanisms involved in these two disorders still need to be further elucidated24.
Several studies have taken network-based approaches to identify shared pathways and candidate genes for epilepsy or autism22,23,24,25,26,27. These studies generally use protein–protein interaction (PPI), co-expression, shared biological process/pathway, and other networks to represent relationships between autism- or epilepsy-associated genes from sequencing data or curated databases. However, few of them have studied epilepsy and autism within the context of each other. In one such study, a random walk-based clustering approach was used to dissect modules of highly interacting genes representing epilepsy phenotypes from genes more generally involved in neurodevelopmental disorders22, and epilepsy genes were predicted based on these modules. While the assumption is that genes in highly connected biological modules are likely to manifest similar phenotypes28, as far as we know, few if any epilepsy/autism network studies directly use phenotype networks based on gene-phenotype associations. Phenotype networks can potentially reveal additional information about the relationships among entities in the network, such as distinguishing genetic modules of primarily epilepsy-related phenotypes from those of autism-related phenotypes29.
This present project is motivated not only by the high co-morbidity rate of these two diseases but also by the frequent observation that when a new candidate gene is published for one disease, it is often already a well-known gene for the other. Our central hypothesis is that the genetic overlap is due to the high heterogeneity of both diseases, so there will be autism-specific, epilepsy-specific, and shared genetic modules, and studying each type of module allows us to identify novel candidate genes for each disease and characterize their phenotypic consequences. Therefore, in this present study, our aim is to characterize genetic modules and prioritize candidate genes for both disorders, taking advantage of the large number of expert-compiled and well-established disease-associated genes for each disease. First, we constructed a phenotype network of epilepsy- and autism-associated genes using gene-phenotype associations from the comprehensive database of Phen2Gene, a phenotype-driven gene prioritization tool30. We then integrated the protein–protein interaction (PPI) relationships of these genes using a multilayer network. Next, we used a community detection algorithm to identify modules of highly interacting genes with similar phenotypic manifestations in the multiplex network. While previous network-based studies have focused primarily on studying either epilepsy or autism genes, the novelty of this study comes from analyzing both epilepsy- and autism-associated genes together in a multiplex network, formally taking both PPI interactions and gene-phenotype associations into consideration. As a result, we prioritized two modules enriched in common genes (genes associated with both epilepsy and autism), representing the biological processes of ion transmembrane transport and synaptic signaling, respectively, that may contribute to the shared genetic etiology between epilepsy and autism. One of the two modules is an epilepsy-focused module enriched in genes directly causing epilepsy and in epilepsy phenotypes, and the other is an autism-focused module enriched in highest-confidence autism genes and autism phenotypes. Additionally, we identified corresponding modules in a similar multiplex network constructed solely from epilepsy and autism genes identified in whole-exome sequencing (WES) studies. Finally, we prioritized candidate epilepsy genes based on the overlap of these prioritized modules. These findings are summarized in Fig. 1. Understanding the genetic connection between epilepsy and autism can aid in the discovery and prioritization of candidate genes for either disorder and in the understanding of their shared molecular pathophysiology.
Summary of study findings. (A) A network of all 1707 epilepsy- and autism-associated genes from Wang et al. (2017)10 and SFARI60 and (B) A network of 294 epilepsy and autism genes from WES studies38,61. The edges represent the union of edges from the PPI network layer and phenotype network layer of the multiplex network. The color of a node represents the module it belongs to in the multiplex network and the size of the node is relative to its degree in the network. Key modules identified in the study are annotated. The network plots were generated using Gephi version 0.9.2, a graph visualization software. (C) The five autism-specific genes (i.e. not listed in Wang et al. (2017)10) in module 3 and 6 of the larger multiplex network that overlap with epilepsy-focused module 2 of the WES network are predicted candidate epilepsy genes.
Constructing a multiplex network from PPI and phenotype relationships between autism- and epilepsy-associated genes
From recently published review literature and from the SFARI (Simons Foundation Autism Research Initiative) database, we compiled 999 established epilepsy-associated genes and 913 known autism-associated genes to be used as nodes in the PPI, phenotype, and multilayer networks. Among them, 205 genes are shared by both groups (Fig. 3A), which we will subsequently refer to as "common genes", so in total there were 1707 unique genes. We note that each of the two gene lists can be sub-divided into several subgroups based on varying levels of association or confidence (Table 1). For example, epilepsy genes that "only cause epilepsies or syndromes with epilepsy as the core symptom" are classified in subgroup 1, yet genes associated with "gross brain developmental malformations and epilepsies" are classified in subgroup 2. Similarly, SFARI genes are classified into distinct groups based on a scoring system that takes into account different sources of information reflecting the strength of the evidence linking it to the development of autism. Our subsequent analysis considers all epilepsy and autism subgroups.
Table 1 Summary of epilepsy- and autism-associated genes used in the current study. Gene clusters from the original epilepsy and autism gene lists were separated into individual genes and gene symbols were standardized; the counts shown were taken after this step.
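The node set of the network follows directly from set operations on the two curated lists. A minimal sketch of this step (the gene symbols shown are placeholders, not the actual 999- and 913-gene lists):

```python
# Sketch: deriving the "common genes" and the network node set from the
# two curated gene lists. Symbols below are illustrative placeholders.
epilepsy_genes = {"SCN1A", "KCNQ2", "CACNA1E", "DEPDC5"}
autism_genes = {"SCN1A", "SHANK3", "CHD8", "CACNA1E"}

common = epilepsy_genes & autism_genes      # genes on both lists (205 in the paper)
all_genes = epilepsy_genes | autism_genes   # union: nodes of the multiplex network
print(len(common), len(all_genes))
```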
The 1707 epilepsy- and autism-associated genes and all 7903 edges between them from the STRING PPI network constituted one of the two layers of the epilepsy-autism multiplex network (Fig. 3B). The same 1707 genes and all 12,070 edges between them from the gene-phenotype network constituted the other layer. When analyzing both layers as individual networks, we found that their degree distributions were similar (Fig. 2A) and that there was a significant (p < 0.0001) overlap in their edges compared to random networks with the same degree distribution (Fig. 2B). Moreover, after running the Louvain clustering algorithm on both networks, the normalized mutual information, which represents how similar the two partitions are, was significantly (p < 0.001) greater than that of random networks with the same degree distribution (Fig. 2C). There is also a positive correlation between a gene's degree in the PPI network and its degree in the phenotype network (Fig. 2D). These results support that information from the gene-PPI network is consistent with that of the gene-phenotype network. The multiplex network was created by stacking the two layers such that the same genes in the two layers aligned. Of the 1707 epilepsy- and autism-associated genes, 1556 had a degree of at least one in either the gene-PPI or gene-phenotype layer.
Similarity of the PPI and phenotype network layers in the epilepsy-autism multiplex network. (A) The degree distribution within each layer of the multiplex network is plotted. (B) There is a significant number of edges overlapping between the PPI network layer and phenotype network layer (p < 0.0001). The distribution represents the number of overlapping edges from 10,000 trials where the PPI network and phenotype network were randomly generated maintaining their original degree distribution. The red line represents the actual number of overlapping edges between the two networks. (C) There is a significant overlap in the modules in the PPI network layer and the modules in the phenotype network layer (p < 0.001). The distribution represents the normalized mutual information from 1000 trials where the PPI network and phenotype network were randomly generated maintaining their original degree distribution. The red line represents the actual normalized mutual information. (D) There is a correlation between the degree of a node in the PPI network layer and phenotype network layer. Each point represents one of the 1707 genes/nodes in the multiplex network.
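The degree-preserving randomization behind the edge-overlap test in Fig. 2B can be sketched with networkx double-edge swaps, which rewire edges while keeping every node's degree fixed. The graphs below are small random stand-ins rather than the actual 1707-node layers, and the swap and trial counts are arbitrary:

```python
# Sketch: edge overlap between two network layers vs. a degree-preserving
# null distribution, assuming toy stand-in graphs.
import networkx as nx

def edge_overlap(g1, g2):
    return len(set(map(frozenset, g1.edges())) &
               set(map(frozenset, g2.edges())))

ppi = nx.gnm_random_graph(100, 300, seed=1)    # stand-in for the PPI layer
pheno = nx.gnm_random_graph(100, 400, seed=2)  # stand-in for the phenotype layer
observed = edge_overlap(ppi, pheno)

null = []
for _ in range(200):                           # the paper used 10,000 trials
    r1, r2 = ppi.copy(), pheno.copy()
    nx.double_edge_swap(r1, nswap=600, max_tries=60000)  # degree-preserving rewiring
    nx.double_edge_swap(r2, nswap=800, max_tries=80000)
    null.append(edge_overlap(r1, r2))

p = sum(n >= observed for n in null) / len(null)
print(observed, p)
```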
We used the Louvain algorithm to partition the multiplex network (Fig. 1B). While there were 1707 genes in the network, only 17 modules contained at least two genes (Fig. 3C). We chose to focus on the 14 largest modules for the remainder of the analysis because of the large drop in the number of genes from module 14 (37 genes) to module 15 (8 genes). We also applied the Louvain algorithm to the individual PPI and phenotype network layers. While the module sizes were similar between the two layers, the module sizes for the multiplex partition were slightly larger, owing to the greater number of gene–gene relationships revealed when both layers were taken into account (Fig. 3C). In summary, the epilepsy and autism genes converge on a limited number of modules, and we show the biological significance of those modules in the next section.
Summary of multiplex network construction and network modules. (A) In the epilepsy-autism gene network there are a total of 1707 genes represented, including 999 epilepsy-associated genes and 913 autism-associated genes (205 genes are shared). (B) Using the 1707 genes a multiplex gene network was created. One layer of the multiplex network represents protein–protein interactions (PPIs) between the genes retrieved from the STRING database. The other layer was created using gene-phenotype relations retrieved from the Phen2Gene knowledge base. A multiplex version of the well-known Louvain algorithm was applied on the multiplex network to generate modules taking both layers of the network into consideration. The regular Louvain algorithm was also applied to each layer separately to generate modules using only one layer. (C) The figure plots the size of the 30 largest modules generated from the PPI network layer, the phenotype network layer, and the multiplex network.
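As a hedged sketch of the per-layer clustering comparison: the python-louvain package provides single-layer Louvain partitions (the multiplex Louvain variant used for Fig. 1B requires a multilayer implementation and is not shown here), and scikit-learn's NMI quantifies partition agreement. The karate-club graph stands in for the real layers:

```python
# Sketch: single-layer Louvain partitions and their agreement (NMI).
import networkx as nx
import community as community_louvain           # pip install python-louvain
from sklearn.metrics import normalized_mutual_info_score

ppi = nx.karate_club_graph()                    # toy stand-in for the PPI layer
pheno = nx.karate_club_graph()                  # toy stand-in for the phenotype layer

part_ppi = community_louvain.best_partition(ppi)      # node -> module id
part_pheno = community_louvain.best_partition(pheno)

nodes = sorted(ppi.nodes())
labels_ppi = [part_ppi[n] for n in nodes]
labels_pheno = [part_pheno[n] for n in nodes]
print(normalized_mutual_info_score(labels_ppi, labels_pheno))
```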
Gene set enrichment analysis of modules within the multiplex network
We focused on the 205 common genes that are associated with both epilepsy and autism and found that several modules have significant enrichment of these genes (Fig. 4). Among the 14 largest modules in the multiplex network, module 3, consisting of 151 genes, and module 6, consisting of 109 genes, are significantly enriched in common genes (FDR = 1.89e−05 and FDR = 2.99e−06, respectively). Furthermore, module 3 is enriched in high confidence (HC) common genes, the intersection of subgroup 1 of the epilepsy genes and subgroup 1 of the autism genes (FDR = 3.92e−04). The most significantly enriched Gene Ontology (GO) biological process terms for modules 3 and 6 are ion transmembrane transport (GO:0034220, FDR = 1.09e−82, 99/151 genes in the module) and synaptic signaling (GO:0099536, FDR = 4.83e−43, 52/109 genes in the module), respectively. The enrichment of common genes in these two modules indicates that they likely represent biological processes relevant to both epilepsy and autism.
Gene enrichment analysis on modules in the epilepsy-autism multiplex network. The enrichment analysis of different gene groups over the 14 largest modules in the epilepsy-autism multiplex network. The hypergeometric test was used to determine the p-value and the false discovery rate (FDR) is reported since multiple gene groups were tested. (A) The background of the hypergeometric test is the 1707 genes in the network. (B) The background of the hypergeometric test is 19,556 genes (the number of genes in the STRING database). COMMON GENES (WES) = genes in both the epilepsy and autism WES gene lists, COMMON GENES (HC) = genes that are both in the epilepsy 1 subgroup and autism 1 subgroup (high confidence), COMMON GENES (ALL) = all genes in both epilepsy subgroup and autism subgroup, BD = bipolar disorder, ID = intellectual disability, BE GENES = genes that have significantly higher expression in brain tissue vs control tissue. For both A) and B), "***" denotes FDR < 0.01, "**" denotes FDR < 0.05, and "*" denotes FDR < 0.1, assuming a hypergeometric distribution. The heatmap was generated using seaborn version 0.10.0 (https://seaborn.pydata.org/).
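The enrichment test described in the caption above is a standard hypergeometric tail probability with Benjamini-Hochberg FDR correction. A minimal sketch using the module sizes from the text but hypothetical common-gene counts per module:

```python
# Sketch: hypergeometric enrichment of "common genes" per module, with
# Benjamini-Hochberg FDR. Per-module common-gene counts are hypothetical.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

N = 1707                  # background: genes in the network
K = 205                   # common genes in the background
modules = {"module3": (151, 40),   # (module size, common genes in module)
           "module6": (109, 35)}   # counts here are illustrative only

pvals = []
for name, (n, k) in modules.items():
    # P(X >= k) when drawing n genes from N, of which K are "successes"
    pvals.append(hypergeom.sf(k - 1, N, K, n))

rejected, fdr, _, _ = multipletests(pvals, method="fdr_bh")
print(dict(zip(modules, fdr)))
```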
In addition to GO-based functional enrichment analysis, we also explored whether there are specific patterns of tissue-specific gene expression in the modules enriched for pleiotropic genes affecting both epilepsy and autism, specifically modules 3 and 6. For this analysis, we utilized the RNA-Seq data of the Genotype-Tissue Expression (GTEx) consortium to identify genes that tend to have higher expression levels in brain tissue than in other tissues, which we refer to as brain-enriched genes (BEGs). Among the 14 largest modules, only module 3 (FDR = 1.74e−11) and module 6 (FDR = 9.59e−07) have a significant enrichment of BEGs (Fig. 4A) relative to other modules. We acknowledge that since both epilepsy and autism are brain diseases, disease-relevant genes are expected to be brain-expressed genes a priori. As expected, we confirmed that nearly all BEGs in these two modules are upregulated genes. In summary, the relatively large number of BEGs and their highly significant enrichment further support the functional relevance of these two modules to the shared etiology between epilepsy and autism, and possibly other developmental disorders. In fact, module 3 was enriched in bipolar disorder (BD) genes (FDR = 1.69e−05), and module 6 was enriched in intellectual disability (ID) genes (FDR = 3.95e−04).
Next, we explored the relationship between subtypes of epilepsy and autism by considering the different subgroups of genes that are associated with epilepsy (subgroups 1, 2, 3, and 4) and autism (subgroups 1, 2, 3, and S). Additional details on the classification of these subgroups are given in Table 1. The distribution of these subgroups within the 14 largest multiplex modules is shown in Fig. S2. We found that certain modules are enriched in specific subgroups of epilepsy genes (Fig. 4). Module 3, which represents the GO biological process of ion transmembrane transport as discussed previously, is enriched in Epilepsy 1 genes (genes directly causing epilepsy) compared to other modules (FDR = 7.14e−19). Module 3 is also the only module enriched in whole-exome sequencing (WES) epilepsy genes (FDR = 1.52e−05) and Epilepsy 4 genes (predicted epilepsy genes) (FDR = 0.0270) relative to other modules. Modules 5 and 10 are enriched in Epilepsy 2 genes (neurodevelopment-associated genes) compared to other modules (FDR = 2.58e−11 and FDR = 0.0255, respectively). The most significant biological process GO terms for these two modules are cell cycle process (GO:0022402, FDR = 7.07e−29, 56/119 genes in the module) and RNA metabolic process (GO:0016070, FDR = 2.04e−12, 49/66 genes in the module), respectively. Modules 2, 7, and 8 are enriched in Epilepsy 3 genes (genes associated with other abnormalities and accompanied by epilepsy or seizures) compared to other modules (FDR = 2.37e−30, FDR = 2.00e−05, and FDR = 0.0343, respectively). The most significant GO biological processes for these three modules are organic acid metabolic process (GO:0006082, FDR = 2.85e−62, 96/217 genes in the module), carbohydrate derivative biosynthetic process (GO:1901137, FDR = 6.09e−34, 48/103 genes in the module), and vesicle-mediated transport (GO:0016192, FDR = 1.06e−05, 28/90 genes in the module), respectively. Therefore, module 3 and its associated GO biological functions likely have a direct relationship to epilepsy pathogenesis, while modules 2, 5, 7, 8, and 10, although epilepsy-related, have a more indirect relationship to epilepsy through neurodevelopment or other abnormalities. Table S2 in the supplementary Excel file contains additional information on the GO enrichments of each module as well as the subgroups of epilepsy and autism genes in each module.
Similarly, specific modules are enriched in specific subgroups of autism genes (Fig. 4). Unlike the epilepsy genes, we note that the SFARI autism genes are classified based on confidence of association, given that autism is a complex neurodevelopmental disorder. Modules 4 and 6 are enriched in Autism 1 genes (autism genes with the highest confidence) compared to other modules (FDR = 4.30e−05 and FDR = 9.15e−05, respectively). The most significant GO biological processes for these two modules, respectively, are chromatin organization (GO:0006325, FDR = 7.45e−50, 68/140 genes in the module) and synaptic signaling, as discussed previously. Module 4 is also enriched in WES autism genes (FDR = 4.30e−05) and Autism S genes (syndromic autism genes) (FDR = 0.0152) relative to other modules. No module was relatively enriched in Autism 2 genes (strong autism candidates) with FDR < 0.05, although module 3 had the lowest FDR (FDR = 0.0821). Modules 9 and 11 were enriched in Autism 3 genes (genes with suggestive evidence) compared to other modules (FDR = 0.0467 and FDR = 0.0299, respectively). Therefore, modules 4 and 6 likely represent the most confident autism-associated genes, and their associated GO biological functions give insight into autism pathogenesis.
Phenotype enrichment analysis of modules within the multiplex network
Along with characterizing each module by enrichment in gene sets, we can also characterize them by enrichment in Human Phenotype Ontology (HPO) associations31. Figure 5 displays the various HPO enrichments among the 14 largest multiplex modules. Module 3 is the most significantly enriched in epilepsy phenotypes (HPO IDs in the HPO subtree rooted at HP:0001250, seizure) compared to other modules in the network (Fig. 5A,C). This is consistent with the module's enrichment in Epilepsy 1 genes and WES epilepsy genes. Module 6 is the most significantly enriched in autism phenotypes (HPO IDs in the HPO subtree rooted at HP:0000729, autistic behavior) compared to other modules in the network (Fig. 5A,D). This is consistent with the module's enrichment in Autism 1 genes. While module 4 was also enriched in Autism 1 genes, as well as WES autism genes, the module was not enriched in any autism phenotypes relative to other modules in the network. This could be because the module is enriched in syndromic autism genes that are strongly associated with other phenotypes besides ASD, leading to a relatively lower gene-ASD phenotype association score. Interestingly, module 2 shows enrichment in autism phenotypes despite only being enriched in Epilepsy 3 genes. Module 6, besides being enriched in autism phenotypes, was also enriched in epilepsy phenotypes relative to other modules, which could be explained by the module's enrichment in common genes. While module 3 was also enriched in common genes and in epilepsy phenotypes, it was not enriched in autism phenotypes relative to other modules. However, when looking at enrichment relative to all genes, including those outside the network, most of the 14 largest modules show enrichment in both epilepsy and autism phenotypes (Fig. 5B,D). Thus, while module 6 is likely the most representative of autism relative to the other modules because of its enrichment in autism genes and phenotypes, it may also be relevant to epilepsy because of its enrichment in common genes, BEGs, and some epilepsy phenotypes. Similarly, while module 3 is the most representative of epilepsy relative to the other modules because of its enrichment in epilepsy genes and phenotypes, its enrichment in common genes and BEGs suggests that it shares a genetic basis with autism. Table S2 in the supplementary Excel file contains additional information on the HPO enrichments of each module.
Enrichment analysis of epilepsy- and autism-related HPO terms for modules in the multiplex network. The enrichment of different epilepsy and autism phenotypes over the 14 largest modules in the epilepsy-autism multiplex network is shown. The first cluster of HPO terms represents autism phenotypes and the rest represent epilepsy phenotypes. Only HPO IDs with gene-HPO relationships in the Phen2Gene knowledgebase are shown. The p-value was determined by computing the mean gene-phenotype association score for each HPO ID over the genes in the module and comparing it to the mean of 10,000 trials using n genes, where n is the size of the module, randomly chosen from (A) the 1707 genes in the multiplex network or (B) all genes in the Phen2Gene knowledgebase. (C) and (D) correspond to (A) and (B), respectively, except that the phenotype enrichment was calculated using annotated gene-HPO relationships from hpo.jax.org, with the p-value determined by the hypergeometric test. For all plots, the false discovery rate (FDR) is reported since multiple HPO IDs were tested. "****" denotes FDR < 0.0001, "***" denotes FDR < 0.01, "**" denotes FDR < 0.05, and "*" denotes FDR < 0.1. The clustermap was generated using seaborn version 0.10.0 (https://seaborn.pydata.org/). The linkage on the rows was generated based on the distance between HPO IDs in the HPO tree.
Analysis on WES epilepsy-autism multiplex network
All analyses were repeated on a multiplex network generated purely from the epilepsy and autism genes of the most up-to-date WES study (to the best of our knowledge) for either disorder, in order to validate our results on a relatively unbiased gene set (Fig. 1A). In both the gene and phenotype enrichment analyses, we found the same trend as in the previous analysis of the multiplex network built from all epilepsy- and autism-associated genes (which we will refer to as the larger multiplex network). That is, there is one module most specific to epilepsy and one most specific to autism, and these are the only modules significantly enriched in common genes (FDR = 3.90e−06 and FDR = 0.0186, respectively) (Fig. 6A). Moreover, the epilepsy-focused module, module 2, consisting of 30 genes, is also enriched relative to other modules in HC common genes (FDR = 2.64e−08) as well as WES common genes (FDR = 0.0113), i.e., genes present in both the epilepsy and autism WES gene lists. In the WES multiplex network, module 2 was the only module enriched in Epilepsy 1 genes (FDR = 5.74e−14), and it was the most strongly enriched in epilepsy phenotypes relative to other modules (Fig. 6B). Module 2 was also enriched in Autism 1 genes (FDR = 0.0119), WES autism genes (FDR = 0.0149), schizophrenia genes (FDR = 1.17e−04), BD genes (FDR = 7.55e−04), and BEGs (FDR = 3.01e−08). The most significantly enriched GO biological process for the module is ion transmembrane transport (GO: 0034220, FDR = 1.20e−17, 22/30 genes in the module). Twenty-six of the thirty genes in module 2 also exist in the larger multiplex network; of these, 20/26 (77%) are in module 3, 4/26 (15%) in module 6, and 1 in each of modules 1 and 13. Thus, module 2 in the WES multiplex network corresponds to module 3 of the larger multiplex network, sharing the same GO biological process enrichment, genes, and phenotypes (Fig. 7A,C). Table S3 in the supplementary Excel file contains additional information on the GO and HPO enrichments of each WES module as well as the subgroups of epilepsy and autism genes in each module.
Enrichment analysis on modules in the multiplex network generated with WES epilepsy and autism genes. The enrichment analysis of (A) different gene groups and (B) epilepsy and autism phenotypes over the 13 largest modules (those with at least 5 genes) in the multiplex network generated using only WES epilepsy and autism genes. (A) The hypergeometric test was used to determine the p-value for enrichment in each gene group. The false discovery rate (FDR) is reported since multiple gene groups were tested. The background of the hypergeometric test is the 294 genes in the network. COMMON GENES (WES) = genes in both the epilepsy and autism WES gene lists, COMMON GENES (HC) = genes in both the Epilepsy 1 and Autism 1 subgroups (high confidence), COMMON GENES (ALL) = all genes in both an epilepsy subgroup and an autism subgroup, BD = bipolar disorder, ID = intellectual disability, BE GENES = genes with significantly higher expression in brain tissue vs control tissue. (B) The first cluster of HPO terms represents autism phenotypes and the rest represent epilepsy phenotypes. Only HPO IDs with gene-HPO relationships in the Phen2Gene knowledgebase are shown. The p-value was determined by computing the mean gene-phenotype association score for each HPO ID over the genes in the module and comparing it to the mean of 10,000 trials using n genes, where n is the size of the module, randomly chosen from the 294 genes in the WES multiplex network. The FDR is reported since multiple HPO IDs were tested. For both (A) and (B), "***" denotes FDR < 0.01, "**" denotes FDR < 0.05, and "*" denotes FDR < 0.1; for (B), "****" denotes FDR < 0.0001. The heatmap and cluster map were generated using seaborn version 0.10.0 (https://seaborn.pydata.org/). The linkage on the rows of the cluster map was generated based on the distance between HPO IDs in the HPO tree.
Comparison of prioritized modules in the WES multiplex network and larger epilepsy-autism multiplex network. The top 10 most significant biological process Gene Ontology terms by FDR are shown for (A) Module 3 and (B) Module 6 of the larger multiplex network and their corresponding modules (C) Module 2 and (D) Module 7 of the WES multiplex network.
Along with module 2 in the WES multiplex network, module 7, consisting of 12 genes, was also enriched in common genes (FDR = 0.0186). It was also enriched in Autism 1 (FDR = 0.0186) and Epilepsy 3 (FDR = 4.62e−03) genes, and it was the most strongly enriched in autism phenotypes relative to other modules (Fig. 6B). The most significantly enriched GO biological process for the module was modulation of synaptic transmission (GO: 0050804, FDR = 0.0816, 4/12 genes in the module). Eight of the twelve genes in module 7 also exist in the larger multiplex network: 3/8 (38%) in each of modules 6 and 4, and 1 in each of modules 1 and 8. Module 7 in the WES multiplex network corresponds most closely to module 6 of the larger multiplex network because it shares a similar GO biological process enrichment (related to synaptic signaling) and similar autism gene and phenotype enrichments (Fig. 7B,D). Even with the limited number of genes in the WES multiplex network (294 genes) and the small module sizes, we obtain a similar enrichment pattern; namely, modules 2 and 7 in the WES multiplex network correspond to modules 3 and 6, respectively, of the larger multiplex network. These results support that modules 3 and 6 of the larger multiplex network, while primarily associated with epilepsy and autism respectively, contain a significant number of common genes, and that their representative biological processes (ion transmembrane transport and synaptic signaling, respectively) are relevant to both epilepsy and autism etiology.
Prioritizing candidate epilepsy and autism genes
We can also prioritize candidate genes using the module characterizations. Module 3 of the multiplex network and module 2 of the WES network showed significant enrichment in Epilepsy 1 genes (genes directly causing epilepsy) and epilepsy phenotypes compared to other modules in their networks. The overlap of these two modules consists of 20 genes, of which ANK2, CACNA1E, CACNA2D3, and GRIA2 are the only autism-specific genes, meaning that they were not identified as epilepsy genes in Wang et al. (2017) (Fig. 1C)10. It can be hypothesized that these genes are also associated with epilepsy because they fall in a high-confidence epilepsy module that is also enriched in common genes (genes associated with both epilepsy and autism). The only other autism-specific gene in module 2 of the WES network is DLG4, which belongs to module 6 of the larger multiplex network. Module 6 is also enriched in common genes, as well as some epilepsy phenotypes, relative to other modules, so it can also be hypothesized that DLG4, although only labelled as an Autism 1 gene, is also associated with epilepsy.
To validate these novel predictions, we performed a literature review to identify evidence supporting the potential role of these genes in epilepsy. ANK2, which encodes the ankyrin-B protein, a member of the ankyrin family, was predicted as a novel epilepsy-related gene by a recent network-based study using a random walk with restart algorithm32. A recent study showed that an ANK2 variant is associated with seizure, possibly through its interactions with the voltage-gated CaV2.1 calcium channel33. De novo pathogenic variants of CACNA1E, which encodes the α1-subunit of the voltage-gated CaV2.3 channel, were recently identified as a cause of developmental and epileptic encephalopathy34,35. De novo variants of GRIA2, which encodes the GluA2 subunit of AMPA-type ionotropic glutamate receptors, have been shown to cause neurodevelopmental disorders36. A recent study demonstrated that an engineered mutation in GRIA2 caused seizure vulnerability as well as learning and memory impairments37. DLG4, which encodes a scaffold protein in the postsynaptic region, was predicted as a candidate epilepsy gene in a random walk-based module prediction study because it appeared frequently and exclusively in modules with epilepsy genes22. Furthermore, DLG4 was reported as a candidate gene in the epilepsy WES study38. There is no literature specifically supporting a role for CACNA2D3 in causing seizures or epilepsy, but it encodes a member of the α2δ subunit family of voltage-gated calcium channels, which have a role in epilepsy and antiepileptic drug pharmacology39,40. Therefore, there is literature supporting the role of the five proposed candidate genes in epilepsy.
Moreover, common genes collectively have a significantly higher centrality in the network than epilepsy- or autism-specific genes (Fig. S4). Therefore, within our two prioritized modules, modules 3 and 6, the genes can be further prioritized by their degree in the PPI and phenotype networks as well as their betweenness centrality (Tables 2 and 3). Because module 3 is an epilepsy-focused module, autism-specific genes in the module may also be associated with epilepsy. Similarly, while module 6 is an autism-focused module, epilepsy-specific genes in the module may also be associated with autism. The genes in these two modules should be further examined to understand the shared genetic etiology of epilepsy and autism.
Table 2 Highest centrality genes in module 3.
In our study, we used a multiplex network of epilepsy- and autism-associated genes to elucidate the relationship between these two frequently co-occurring disorders. The multiplex network contains a gene-PPI layer and a gene-phenotype layer. PPI networks have recently become widely used to understand the molecular basis of human diseases28. PPI networks can be used to discover new disease genes, study characteristics of the disease gene network, identify disease-related and functional sub-networks, and help classify diseases using network properties41. While it is often assumed that PPI modules overlap with disease phenotype modules, we demonstrated this overlap by comparing the gene-PPI layer to the gene-phenotype layer of the multilayer network of expert-compiled and well-established epilepsy- and autism-associated genes (Fig. 2). While there is a significant overlap between PPI modules and phenotype modules, combining both layers in a multiplex network formally factors in both functional similarities, through PPI interactions, and similarities in the phenotypic manifestations of genes, which we wanted to capture in the multiplex modules. Furthermore, important genes such as SHANK3 and NLGNY, which could not be found in the STRING PPI database, were sorted into appropriate modules using their relationships in the gene-phenotype layer of the multiplex network.
Because the global information of the entire multiplex network of 1707 genes is too general, we used multiplex community detection to cluster highly interacting and similar genes together into modules. We showed that genes within a module interact with each other in the same biological processes and have specific gene and phenotype enrichments (Figs. 4, 5, Tables S2, S3). In particular, we found that modules 3 and 6 are significant in the epilepsy-autism multiplex network because of their enrichment, relative to other modules, in common genes, BEGs, high-confidence epilepsy and autism genes, and epilepsy and autism phenotypes. Similar modules showed the same patterns of enrichment in a multiplex network constructed using epilepsy and autism genes solely from WES studies.
Module 3 in the epilepsy-autism multiplex network represents genes involved in ion transmembrane transport; many of the genes in this module encode subunits of ion channels. Several monogenic epilepsies are associated with mutations in genes encoding ion channels10,42,43,44, and ion channel dysfunction is also linked to susceptibility to autism, as well as to bipolar disorder, schizophrenia, and other neuropsychiatric disorders45, which explains the enrichment of these genes in module 3 and in the comparable module 2 of the WES multiplex network. This module also contains genes encoding GABA receptors and nicotinic acetylcholine receptors, which are known to be related to epilepsy, autism, and other brain disorders46,47. Module 6 contains genes involved in synaptic signaling, a shared pathway between epilepsy and autism that has been supported by several studies20,22. This module contains genes encoding ionotropic and metabotropic glutamate receptors, and families of genes involved in the regulation and maintenance of synapses, such as DLGAP, NRXN, and NLGN, which are all known to be related to brain disorders48,49,50,51. Furthermore, module 6 was enriched in genes involved in intellectual disability, the severity of which is related to epilepsy risk in autism52. From modules 3 and 6 we prioritize ANK2, CACNA1E, CACNA2D3, GRIA2, and DLG4, genes not included in the list of epilepsy genes in Wang et al. (2017)10, as candidate epilepsy genes because of their overlap with the epilepsy-focused module 2 of the WES network. Moreover, we find that common genes have greater centrality in both the PPI and phenotype networks, so we list the highest-centrality genes in modules 3 and 6 in Tables 2 and 3, respectively. We recommend these genes for further review as potential common genes.
We also wish to discuss several limitations of the current study. First, our analysis, including the epilepsy and autism subgroups, depends on previously compiled lists of disease genes from human experts, and is therefore biased towards well-studied genes and probably towards genes with more known interaction partners in the PPI or phenotype network. In other words, genes with high degree may be studied more because they are related to disease, creating bias in the number of connections they have53. This should be considered when interpreting the results, since less-studied genes could still be important contributors to both diseases. It also highlights the utility of the module-based analysis, which groups genes with similar features together so that lesser-known genes can be characterized. These bias limitations can be reduced as the datasets are further validated and improved in the future.
Another limitation comes from our generalization of genetic etiologies to discrete genes in the multiplex network. This generalization does not account for how different mutations within a gene can result in different phenotypes54. For example, variations in SCN1A, one of the most commonly studied epilepsy genes, can result in a range of epilepsy syndromes55, from generalized epilepsy with febrile seizures plus (GEFS+) to Dravet syndrome (DS), which is associated with intellectual disability and autism56. Moreover, due to incomplete penetrance and variable expressivity54, similar mutations may result in varying phenotypes across individuals57. In addition, unlike epilepsy, which manifests as characteristic seizures, the phenotypic spectrum of autism makes its diagnosis especially challenging58, which is important to keep in mind when studying the disorder. In the future, we should incorporate additional genotype–phenotype analyses at the individual patient level. These analyses will help validate our current model of modules and pathways involved in both epilepsy and autism.
Beyond attempting to overcome these limitations, there are also other ways to expand on this research. It has been shown that many neurological disorders have a common genetic etiology54. Therefore, beyond epilepsy and autism, other neurological disorders such as depression, anxiety, obsessive compulsive disorder (OCD), and attention deficit hyperactivity disorder (ADHD) should also be explored in relation to epilepsy and autism once a reasonable number (> 100) of high-confidence genes is available for these disorders. Finally, there is a known deep genetic relationship between neurodevelopmental disorders and cancer59, and given that the Cancer Gene Census already documents several hundred cancer-relevant genes, it would be of interest to perform a similar analysis on neurodevelopmental disorders and cancer (a preliminary analysis shows that 105/723 of the genes in the Cancer Gene Census are in the SFARI gene list, supporting the striking genetic connection between these two distinct conditions). In summary, while we acknowledge the exploratory nature of the current study, the approaches presented in the manuscript enable these future research directions and may generate novel insights into the shared genetic etiology of multiple well-studied diseases.
Data resource and description
Epilepsy- and autism-associated genes
We collected 977 genes, including gene clusters, associated with epilepsy from the expert-compiled list of Wang et al. (2017)10. These genes were manually curated from multiple genetic databases and represent genes directly related to epilepsy or indirectly leading to epilepsy through influence on the central nervous system or other systems; the subgroups of epilepsy genes are defined in Table 1. These 977 epilepsy-associated genes were mapped to 999 genes in the multiplex network; the number of genes increased because gene clusters were separated into individual genes. We also collected 913 autism-associated genes from the Simons Foundation Autism Research Initiative (SFARI) Gene (access date: January 5, 2020), a community-driven knowledgebase of autism spectrum disorder60. SFARI Gene records the evidence for, and the strength of, the genetic association of each gene; subgroups of autism genes are defined in Table 1. These 913 autism-associated genes were also mapped to the multiplex network. In total, there were 1707 epilepsy- and autism-associated genes.
Whole exome sequence data from other independent studies
In additional analyses, we also used the most up-to-date whole-exome sequencing (WES) data we could find for each of autism and epilepsy, in order to test whether our results also hold on a less biased gene set38,61. The 102 autism genes with FDR ≤ 0.1 from the autism WES study (see Table S2 in Satterstrom et al., 2020) were used as the autism gene set. The 200 most significant epilepsy genes from the gene burden test in the epilepsy WES study (see Table S17 in Feng et al., 2019) were used as the epilepsy gene set.
Genes associated with other brain disorders
The gene lists for schizophrenia, bipolar disorder, and intellectual disability were retrieved from Wang et al. (2018)62. The schizophrenia gene list originally comes from the SzGene database63 and a GWAS study64. The bipolar disorder gene list originally comes from the BDgene database65. The intellectual disability gene list originally comes from BrainSpan and is documented in a previous publication by the same authors66.
Network generation
Protein–protein interaction (PPI) network
We mapped all 999 epilepsy- and 913 autism-associated genes to the STRING PPI database (version 11)67 for Homo sapiens to generate the epilepsy-autism PPI network. The interactions in STRING comprise known and predicted interactions, including direct physical and indirect functional associations derived from experiments, predictions, and knowledgebases. A node with degree zero was created for genes that did not exist in STRING, since they may have non-zero degree in the phenotype network. A total of 1707 epilepsy- and autism-associated genes were represented as nodes in the PPI network. Interactions from the STRING database were used as edges between the nodes, initially weighted by the confidence score assigned by STRING. The edges were then thresholded at a weight of 700, representing high-confidence connections67, so that only edges with a weight of at least 700 were retained, and the resulting PPI network had binary edge weights.
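A minimal sketch of this construction is below. The file name and column names follow STRING's public download format; STRING protein identifiers must first be mapped to gene symbols (omitted here), and the three-gene list is a hypothetical stand-in for the 1707 mapped genes.

    import pandas as pd
    import networkx as nx

    # STRING v11 protein links for Homo sapiens: space-separated file with
    # columns protein1, protein2, combined_score (scores range 0-1000).
    links = pd.read_csv("9606.protein.links.v11.0.txt", sep=" ")
    high_conf = links[links["combined_score"] >= 700]  # high-confidence edges only

    genes = ["SCN1A", "GRIA2", "DLG4"]  # stand-in for the 1707 mapped genes
    ppi = nx.Graph()
    ppi.add_nodes_from(genes)  # genes absent from STRING stay as degree-zero nodes
    # assumes protein IDs in high_conf were already mapped to gene symbols
    ppi.add_edges_from(zip(high_conf["protein1"], high_conf["protein2"]))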
Phenotype network
The phenotype network was generated based on a phenotypic similarity score between pairs of genes, representing how similar they are in their phenotype associations. For each of the 1707 epilepsy- and autism-associated genes, all phenotype associations were retrieved from the Phen2Gene knowledgebase, a comprehensive and standardized database of phenotype-gene associations that standardizes phenotypes using the Human Phenotype Ontology (HPO)30,31. Only HPO IDs under the parent "Phenotypic abnormality" (HP:0000118) were used, in order to retrieve the most relevant phenotypic information. The Phen2Gene knowledgebase contains a score for each HPO-gene relationship, representing the strength of the association; the top 1000 and top 500 scoring genes for each HPO ID were considered to generate the phenotype network in the larger multiplex network and the WES multiplex network, respectively (Table S1). For each gene, a corresponding phenotype vector was created whose length was the total number of HPO IDs in the Phen2Gene database. Each vector element represents an HPO ID, and its value is the phenotype-gene association score weighted by the skewness of the HPO ID (described in the original Phen2Gene paper). The phenotypic similarity score between two genes is then the cosine similarity between their corresponding pair of phenotype vectors. An edge between two genes was added to the phenotype network if their phenotypic similarity score was above a certain threshold. The threshold was determined by randomly shuffling the two phenotype vectors 1000 times, sorting the resulting phenotypic similarity scores, and choosing the 10th largest value (representing a p-value of 0.01). A significance level of 0.01 was chosen in order to uncover the modular nature of the phenotype network by keeping only the most important edges (Table S1). The resulting phenotype network had binary edge weights.
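A small sketch of this procedure follows, assuming a dense gene-by-HPO score matrix; the quadratic loop over gene pairs is written for clarity rather than speed, and the function name is illustrative.

    import numpy as np

    def phenotype_network(P, n_perm=1000, alpha=0.01, seed=0):
        """Binary phenotype network from a gene x HPO score matrix P, where
        P[i, j] is the skewness-weighted Phen2Gene association score."""
        rng = np.random.default_rng(seed)
        norms = np.linalg.norm(P, axis=1, keepdims=True)
        norms[norms == 0] = 1.0                 # guard genes with no annotations
        U = P / norms                           # unit-length phenotype vectors
        n = len(U)
        A = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                sim = U[i] @ U[j]               # cosine similarity
                # permutation null: shuffle both vectors, recompute similarity
                null = np.sort([rng.permutation(U[i]) @ rng.permutation(U[j])
                                for _ in range(n_perm)])
                # threshold = 10th largest of 1000 trials, i.e. p = 0.01
                if sim > null[-max(1, int(alpha * n_perm))]:
                    A[i, j] = A[j, i] = 1       # add unweighted edge
        return A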
Multiplex network and clustering algorithm
The multiplex network represents a multilayer network where the same nodes exist in each layer; the network encodes both the PPI relationships and phenotype relationships between the genes in the network. The gene-PPI-phenotype multiplex network was created by stacking the PPI network and gene-phenotype network layers, generated as detailed in the previous sections, such that each node in one layer is connected to itself in the other layer.
The Louvain algorithm is a modularity maximization approach that is commonly used to detect modules in a network and has been shown to perform well on biological networks68. For the individual PPI and phenotype layers, the Louvain algorithm was used to maximize the modularity, H, defined by:
$$H= \frac{1}{2m}{\sum }_{c}({e}_{c}-\gamma \frac{{K}_{c}^{2}}{2m})$$
Here, \({e}_{c}\) is the total number of edges in community \(c\), \(m\) is the total number of edges in the network, and \({K}_{c}\) is the sum of the degrees of the nodes in community \(c\). Maximizing the modularity, which the Louvain algorithm is designed to do, therefore amounts to maximizing the difference between the actual and the expected number of edges within each community. In the equation, \(\gamma\) is the resolution parameter, which controls the size of the communities. The Louvain algorithm was applied 1000 times with different random seeds at a range of resolutions, and the partition with the globally optimal modularity was chosen (Fig. S3). The Louvain algorithm extends naturally to a multiplex network; in this case the overall modularity, which the algorithm maximizes, is the sum of the modularities of the individual layers, each weighted by a constant:
$$H={{w}_{ppi}H}_{ppi}+{{w}_{phenotype}H}_{phenotype}$$
We set both layers to equal weights so that the PPI and phenotype layers contribute equally. The louvain-igraph Python package was used to run the Louvain algorithm (https://louvain-igraph.readthedocs.io/).
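A minimal sketch of the multiplex optimisation with louvain-igraph is shown below; the API follows the package documentation, while the restart loop is simplified and the scan over resolution values is omitted.

    import igraph as ig
    import louvain  # the louvain-igraph package

    def multiplex_louvain(g_ppi, g_phen, gamma=1.0, n_restarts=1000):
        """g_ppi and g_phen are igraph Graphs over the same vertex set,
        one per layer of the multiplex network."""
        best_quality, best_membership = None, None
        for seed in range(n_restarts):
            louvain.set_rng_seed(seed)
            parts = [louvain.RBConfigurationVertexPartition(
                         g, resolution_parameter=gamma)
                     for g in (g_ppi, g_phen)]
            # equal layer weights: H = 1 * H_ppi + 1 * H_phenotype
            louvain.Optimiser().optimise_partition_multiplex(
                parts, layer_weights=[1, 1])
            quality = sum(p.quality() for p in parts)
            if best_quality is None or quality > best_quality:
                best_quality = quality
                best_membership = list(parts[0].membership)  # shared across layers
        return best_membership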
Gene Ontology enrichment analysis
Gene Ontology (GO) is a tool that helps unify understanding of the biological functions of genes and proteins in eukaryotes69. After module detection, the genes within each module were collectively analyzed using GO for their biological processes, defined as a biological objective to which the genes in the module contribute. The Database for Annotation, Visualization and Integrated Discovery (DAVID) was used to retrieve GO terms and their enrichment using a Fisher's exact test70. DAVID also reports the FDR for each enrichment.
Differentially expressed gene analysis in brain tissues
To further evaluate whether epilepsy- and autism-associated genes are correlated with gene expression in human brain tissue, we downloaded Genotype-Tissue Expression (GTEx) RNA-Seq data from UCSC Xena71. As there are 1141 samples from post-mortem, multi-region brain tissue, we randomly selected a comparable number of control samples (n = 1011) from whole blood, muscle, and nerve tissue. We performed differentially expressed gene analysis comparing brain and control tissues using the limma-voom method via the edgeR package72. The pattern of the identified brain-enriched genes (BEGs) is shown in Fig. S5.
Statistical analyses and network properties
To measure the enrichment of the different subgroups of epilepsy-associated genes, autism-associated genes, and common genes (associated with both disorders) in each module, gene enrichment analyses73 were performed for each gene group, assuming a hypergeometric distribution. This test measures the enrichment of a given group of genes in a module compared to what would be expected under a random distribution of genes among the modules, which is the null hypothesis. The p-values were calculated using the hypergeom function from scipy.stats version 1.4.1. False discovery rate correction was applied using the multitest.fdrcorrection function from statsmodels.stats version 0.12.0 (https://www.statsmodels.org/stable/index.html). To measure the enrichment of different HPO IDs in each module, an empirical p-value was determined by computing the mean gene-phenotype association score for each HPO ID over the genes in the module and comparing it to the mean of 10,000 trials using n genes randomly chosen from the background (the Phen2Gene knowledgebase), where n is the size of the module. The phenotype enrichment was also calculated using annotated gene-HPO relationships from hpo.jax.org; in this case, the hypergeometric test was used to determine the p-value, corrected using the false discovery rate as in the gene enrichment tests.
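A compact sketch of the gene-group test and FDR correction, using the packages and versions named above; the function and variable names are illustrative.

    from scipy.stats import hypergeom
    from statsmodels.stats.multitest import fdrcorrection

    def enrichment_p(module_genes, group_genes, background_genes):
        """One-sided hypergeometric p-value for enrichment of a gene group
        (e.g. Epilepsy 1 genes) within a module."""
        M = len(background_genes)                      # e.g. the 1707 network genes
        n = len(set(group_genes) & set(background_genes))
        N = len(module_genes)                          # number of draws = module size
        k = len(set(module_genes) & set(group_genes))  # observed overlap
        return hypergeom.sf(k - 1, M, n, N)            # P(X >= k)

    # correct across all tested gene groups, e.g.:
    # rejected, fdr = fdrcorrection([p1, p2, p3], alpha=0.05)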
Several network statistics were used in this paper. The Community Discovery Library (CDlib)74 version 0.1.9 was used for modularity scoring. The Newman-Girvan modularity score represents the difference between the number of intra-module edges and the expected number of such edges under a null model75. The python-igraph package version 0.8.3 (https://igraph.org/python/) was used to construct the multiplex network. NetworkX version 2.5 (https://networkx.org/) was used to construct the individual network layers, calculate the degree of the nodes (the number of adjacent nodes connected by an edge), generate random networks with a given degree distribution, and calculate the shortest-path betweenness centrality of each node. The betweenness centrality of a node measures the number of shortest paths within the network that go through that node76. The formula for the betweenness centrality of a node, \(v\), is as follows76:
$$B\left(v\right)={\sum }_{s,t \in V}\frac{\sigma (s,t|v)}{\sigma (s,t)}$$
where \(V\) is the set of nodes in the network, \(\sigma (s,t)\) is the number of shortest paths between \(s\) and \(t\), and \(\sigma (s,t|v)\) is the number of those shortest paths passing through node \(v\). If \(s=t\), \(\sigma \left(s,t\right)=1\) and if \(v\in \{s,t\}\), \(\sigma \left(s,t|v\right)=0\).
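For instance, ranking the genes of a module by these two measures can be sketched as follows; the edges and module membership here are purely illustrative, not results from the paper.

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("SCN1A", "SCN2A"), ("SCN2A", "GRIA2"),
        ("GRIA2", "DLG4"), ("DLG4", "CACNA1E"),
    ])
    bc = nx.betweenness_centrality(G)  # Brandes' shortest-path betweenness
    deg = dict(G.degree())
    module_genes = ["SCN2A", "GRIA2", "DLG4"]
    ranked = sorted(module_genes, key=lambda g: (bc[g], deg[g]), reverse=True)
    print(ranked)  # highest-centrality genes first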
It is well known that autism spectrum disorders (ASDs) and epilepsy commonly co-occur, but the underlying genetic connection between the two disorders requires further research, which, once better understood, will facilitate the implementation of precision medicine for both diseases. In this study, we detect modules in a multilayer network of epilepsy- and autism-associated genes, representative of subgroups of genes related through involvement in similar biological processes and sharing similar genotypic and phenotypic features. The protein–protein interaction (PPI) layer of the multiplex network is complementary to the gene-phenotype layer, and integrating the two in a multiplex network allows the identification of genetic modules that are highly connected through PPI interactions and share similar phenotypic associations. We identified two modules enriched in common genes (genes associated with both epilepsy and autism), representing shared biological processes disrupted in the two disorders. The first module, representing ion transmembrane transport, is more epilepsy-focused in terms of genotypic and phenotypic enrichments, while the second module is more autism-focused and represents synaptic signaling and synapse regulation and maintenance. Two similar modules were identified in a multiplex network constructed using epilepsy and autism genes from WES studies. We prioritize the following candidate epilepsy genes, which are found in the epilepsy-focused modules of both the larger multiplex network and the WES network: ANK2, CACNA1E, CACNA2D3, and GRIA2. Another candidate epilepsy gene is DLG4, which is in the epilepsy-focused module of the WES network and the autism-focused module of the larger multiplex network. These genes, although associated with ASD in the SFARI database, were not listed as epilepsy candidates in Wang et al. (2017)10. The two modules are also enriched in genes upregulated in brain tissue, bipolar disorder-associated genes, and intellectual disability genes, so they could explain comorbidities of epilepsy and autism with other neuropsychiatric and neurodevelopmental disorders. The two modules warrant further investigation in the future, as do the less epilepsy- and autism-specific modules that may have more indirect relationships with the disorders. The computational and analytical approaches used in our study may also be applied in similar future studies of the genetic connections between different human diseases.
All the code and data sets are organized in a computational workflow available at https://github.com/WGLab/epilepsy-autism-multiplex-network for reproducible research.
Poduri, A. & Lowenstein, D. Epilepsy genetics—past, present, and future. Curr. Opin. Genet. Dev. 21, 325–332 (2011).
Persico, A. M. & Napolioni, V. Autism genetics. Behav. Brain Res. 251, 95–112 (2013).
Vorstman, J. A. S. et al. Autism genetics: opportunities and challenges for clinical translation. Nat. Rev. Genet. 18, 362–376 (2017).
Ramaswami, G. & Geschwind, D. H. Genetics of autism spectrum disorder. Handb. Clin. Neurol. 147, 321–329 (2018).
Jeste, S. S. & Geschwind, D. H. Disentangling the heterogeneity of autism spectrum disorder through genetic findings. Nat. Rev. Neurol. 10, 74–81 (2014).
Woodbury-Smith, M. & Scherer, S. W. Progress in the genetics of autism spectrum disorder. Dev. Med. Child Neurol. 60, 445–451 (2018).
De Rubeis, S. & Buxbaum, J. D. Recent advances in the genetics of autism spectrum disorder. Curr. Neurol. Neurosci. Rep. 15, 36 (2015).
Myers, C. T. & Mefford, H. C. Advancing epilepsy genetics in the genomic era. Genome Med. 7, 91 (2015).
Myers, K. A., Johnstone, D. L. & Dyment, D. A. Epilepsy genetics: current knowledge, applications, and future directions. Clin. Genet. 95, 95–111 (2019).
Wang, J. et al. Epilepsy-associated genes. Seizure 44, 11–20 (2017).
Hildebrand, M. S. et al. Recent advances in the molecular genetics of epilepsy. J. Med. Genet. 50, 271–279 (2013).
Rutter, M. Concepts of autism: a review of research. J. Child Psychol. Psychiatry 9, 1–25 (1968).
Sandin, S. et al. The heritability of autism spectrum disorder. JAMA 318, 1182–1184 (2017).
Tick, B., Bolton, P., Happe, F., Rutter, M. & Rijsdijk, F. Heritability of autism spectrum disorders: a meta-analysis of twin studies. J. Child Psychol. Psychiatry 57, 585–595 (2016).
Colvert, E. et al. Heritability of autism spectrum disorder in a UK population-based twin sample. JAMA Psychiatry 72, 415–423 (2015).
Berg, A. T. & Plioplys, S. Epilepsy and autism: is there a special relationship?. Epilepsy Behav. 23, 193–198 (2012).
Tuchman, R. & Rapin, I. Epilepsy in autism. Lancet Neurol. 1, 352–358 (2002).
Tuchman, R. & Cuccaro, M. Epilepsy and autism: neurodevelopmental perspective. Curr. Neurol. Neurosci. Rep. 11, 428–434 (2011).
Tuchman, R. Seminars in Pediatric Neurology Vol. 24, 292–300 (Elsevier, Amsterdam, 2017).
Lee, B. H., Smith, T. & Paciorkowski, A. R. Autism spectrum disorder and epilepsy: disorders with a shared biology. Epilepsy Behav. 47, 191–201 (2015).
Bozzi, Y., Provenzano, G. & Casarosa, S. Neurobiological bases of autism-epilepsy comorbidity: a focus on excitation/inhibition imbalance. Eur. J. Neurosci. 47, 534–548 (2018).
Chow, J. et al. Dissecting the genetic basis of comorbid epilepsy phenotypes in neurodevelopmental disorders. Genome Med. 11, 1–14 (2019).
Gilman, S. R. et al. Rare de novo variants associated with autism implicate a large functional network of genes involved in formation and function of synapses. Neuron 70, 898–907 (2011).
Hormozdiari, F., Penn, O., Borenstein, E. & Eichler, E. E. The discovery of integrated gene networks for autism and related disorders. Genome Res. 25, 142–154 (2015).
Krishnan, A. et al. Genome-wide prediction and functional characterization of the genetic basis of autism spectrum disorder. Nat. Neurosci. 19, 1454–1462 (2016).
Liu, L. et al. DAWN: a framework to identify autism genes and subnetworks using gene expression and genetics. Mol. Autism 5, 22 (2014).
Parikshak, N. N. et al. Integrative functional genomic analyses implicate specific molecular pathways and circuits in autism. Cell 155, 1008–1021 (2013).
Barabasi, A. L., Gulbahce, N. & Loscalzo, J. Network medicine: a network-based approach to human disease. Nat. Rev. Genet. 12, 56–68 (2011).
Halu, A., De Domenico, M., Arenas, A. & Sharma, A. The multiplex network of human diseases. NPJ Syst. Biol. Appl. 5, 1–12 (2019).
Zhao, M. et al. Phen2Gene: rapid phenotype-driven gene prioritization for rare diseases. NAR Genomics Bioinform. 2, lqaa032 (2020).
Köhler, S. et al. Expansion of the Human Phenotype Ontology (HPO) knowledge base and resources. Nucl. Acids Res. 47, D1018–D1027 (2019).
Guo, W. et al. Identifying and analyzing novel epilepsy-related genes using random walk with restart algorithm. BioMed Res. Int. 2017, 6132436 (2017).
Choi, C. S. W. et al. Ankyrin B and Ankyrin B variants differentially modulate intracellular and surface Cav2.1 levels. Mol. Brain 12, 75 (2019).
Helbig, K. L. et al. De novo pathogenic variants in CACNA1E cause developmental and epileptic encephalopathy with contractures, macrocephaly, and dyskinesias. Am. J. Hum. Genet. 104, 562 (2019).
Carvill, G. L. Calcium channel dysfunction in epilepsy: gain of CACNA1E. Epilepsy Curr. 19, 199–201 (2019).
Salpietro, V. et al. AMPA receptor GluA2 subunit defects are a cause of neurodevelopmental disorders. Nat. Commun. 10, 3094 (2019).
Konen, L. M. et al. A new mouse line with reduced GluA2 Q/R site RNA editing exhibits loss of dendritic spines, hippocampal CA1-neuron loss, learning and memory impairments and NMDA receptor-independent seizure vulnerability. Mol. Brain 13, 27 (2020).
Feng, Y.-C.A. et al. Ultra-rare genetic variation in the epilepsies: a whole-exome sequencing study of 17,606 individuals. Am. J. Hum. Genet. 105, 267–282 (2019).
Dolphin, A. C. The α2δ subunits of voltage-gated calcium channels. Biochimica et Biophysica Acta (BBA) - Biomembranes 1828, 1541–1549 (2013).
Taylor, C. P., Angelotti, T. & Fauman, E. Pharmacology and mechanism of action of pregabalin: the calcium channel α2–δ (alpha2–delta) subunit as a target for antiepileptic drug discovery. Epilepsy Res. 73, 137–150 (2007).
Ideker, T. & Sharan, R. Protein networks in disease. Genome Res 18, 644–652 (2008).
Klassen, T. et al. Exome sequencing of ion channel genes reveals complex profiles confounding personal risk assessment in epilepsy. Cell 145, 1036–1048 (2011).
George, A. L. Jr. Inherited channelopathies associated with epilepsy. Epilepsy Curr. 4, 65–70 (2004).
Lerche, H., Jurkat-Rott, K. & Lehmann-Horn, F. Ion channels and epilepsy. Am. J. Med. Genet. 106, 146–159 (2001).
Schmunk, G. & Gargus, J. J. Channelopathy pathogenesis in autism spectrum disorders. Front. Genet. 4, 222 (2013).
Sgadò, P., Dunleavy, M., Genovesi, S., Provenzano, G. & Bozzi, Y. The role of GABAergic system in neurodevelopmental disorders: a focus on autism and epilepsy. Int. J. Physiol. Pathophysiol. Pharmacol. 3, 223 (2011).
Dani, J. A. & Bertrand, D. Nicotinic acetylcholine receptors and nicotinic cholinergic mechanisms of the central nervous system. Annu. Rev. Pharmacol. Toxicol. 47, 699–729 (2007).
Bowie, D. Ionotropic glutamate receptors & CNS disorders. CNS Neurol. Disord. 7, 129–143 (2008).
Crupi, R., Impellizzeri, D. & Cuzzocrea, S. Role of metabotropic glutamate receptors in neurological disorders. Front. Mol. Neurosci. 12, 20 (2019).
Rasmussen, A. H., Rasmussen, H. B. & Silahtaroglu, A. The DLGAP family: neuronal expression, function and role in brain disorders. Mol. Brain 10, 1–13 (2017).
Südhof, T. C. Neuroligins and neurexins link synaptic function to cognitive disease. Nature 455, 903–911 (2008).
Amiet, C. et al. Epilepsy in autism is associated with intellectual disability and gender: evidence from a meta-analysis. Biol. Psychiat. 64, 577–582 (2008).
Schaefer, M. H., Serrano, L. & Andrade-Navarro, M. A. Correcting for the study bias associated with protein–protein interaction measurements reveals differences between protein degree distributions from different cancer types. Front. Genet. 6, 260 (2015).
Mitchell, K. J. The genetics of neurodevelopmental disease. Curr. Opin. Neurobiol. 21, 197–203 (2011).
Escayg, A. & Goldin, A. L. Sodium channel SCN1A and epilepsy: mutations and mechanisms. Epilepsia 51, 1650–1658 (2010).
Li, B. M. et al. Autism in Dravet syndrome: prevalence, features, and relationship to the clinical characteristics of epilepsy and mental retardation. Epilepsy Behav. 21, 291–295 (2011).
Meisler, M. H. & Kearney, J. A. Sodium channel mutations in epilepsy and other neurological disorders. J. Clin. Investig. 115, 2010–2017 (2005).
Jones, R. M. & Lord, C. Diagnosing autism in neurobiological research studies. Behav. Brain Res. 251, 113–124 (2013).
Qi, H., Dong, C., Chung, W. K., Wang, K. & Shen, Y. Deep genetic connection between cancer and developmental disorders. Hum. Mutat. 37, 1042–1050 (2016).
Abrahams, B. S. et al. SFARI Gene 2.0: a community-driven knowledgebase for the autism spectrum disorders (ASDs). Mol. Autism 4, 36 (2013).
Satterstrom, F. K. et al. Large-scale exome sequencing study implicates both developmental and functional changes in the neurobiology of autism. Cell 180, 568–584 (2020).
Wang, P., Zhao, D., Lachman, H. M. & Zheng, D. Enriched expression of genes associated with autism spectrum disorders in human inhibitory neurons. Transl. Psychiatry 8, 13 (2018).
Allen, N. C. et al. Systematic meta-analyses and field synopsis of genetic association studies in schizophrenia: the SzGene database. Nat. Genet. 40, 827–834 (2008).
Schizophrenia Working Group of the Psychiatric Genomics Consortium. Biological insights from 108 schizophrenia-associated genetic loci. Nature 511, 421–427 (2014).
Chang, S. H. et al. BDgene: a genetic database for bipolar disorder and its overlap with schizophrenia and major depressive disorder. Biol. Psychiatry 74, 727–733 (2013).
Rockowitz, S. & Zheng, D. Significant expansion of the REST/NRSF cistrome in human versus mouse embryonic stem cells: potential implications for neural development. Nucl. Acids Res. 43, 5730–5743 (2015).
Szklarczyk, D. et al. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucl. Acids Res. 47, D607–D613 (2019).
Rahiminejad, S., Maurya, M. R. & Subramaniam, S. Topological and functional comparison of community detection algorithms in biological networks. BMC Bioinform. 20, 212 (2019).
Ashburner, M. et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat. Genet. 25, 25–29 (2000).
Dennis, G. Jr. et al. DAVID: database for annotation, visualization, and integrated discovery. Genome Biol. 4, P3 (2003).
Goldman, M., Craft, B., Brooks, A., Zhu, J. & Haussler, D. The UCSC Xena Platform for cancer genomics data visualization and interpretation. BioRxiv, 326470 (2018).
McCarthy, D. J., Chen, Y. & Smyth, G. K. Differential expression analysis of multifactor RNA-Seq experiments with respect to biological variation. Nucl. Acids Res. 40, 4288–4297 (2012).
Falcon, S. & Gentleman, R. Hypergeometric testing used for gene set enrichment analysis. In Bioconductor Case Studies (eds Hahne, F. et al.) 207–220 (Springer, New York, NY, 2008).
Rossetti, G., Milli, L. & Cazabet, R. CDLIB: a python library to extract, compare and evaluate communities from complex networks. Appl. Netw. Sci. 4, 52 (2019).
Newman, M. E. & Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 69, 026113 (2004).
Brandes, U. On variants of shortest-path betweenness centrality and their generic computation. Soc. Netw. 30, 136–145 (2008).
The authors wish to acknowledge members of the Wang lab for helpful comments on the study and the manuscript. This research originates from a network neuroscience course project, and we sincerely thank Dr. Danielle S. Bassett at Penn Engineering for her feedback on the network analysis performed as part of the course. The study is supported in part by NIH/NLM/NHGRI grant LM012895 and the CHOP Research Institute.
School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, 19104, USA
Jacqueline Peng
Raymond G. Perelman Center for Cellular and Molecular Therapeutics, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, USA
Jacqueline Peng, Yunyun Zhou & Kai Wang
Department of Pathology and Laboratory Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
Yunyun Zhou
J.P. conceived the study, designed the study, performed the computational experiments, analyzed the data, and wrote the manuscript. Y.Z. performed the analysis, wrote and edited the manuscript, and supervised the project. K.W. conceived the study, wrote and edited the manuscript, and supervised the project.
Correspondence to Kai Wang.
Peng, J., Zhou, Y. & Wang, K. Multiplex gene and phenotype network to characterize shared genetic pathways of epilepsy and autism. Sci Rep 11, 952 (2021). https://doi.org/10.1038/s41598-020-78654-y
Accepted: 25 November 2020
This article is cited by: Feng, Y., Zhang, C. & Deng, Y. Gene variations of glutamate metabolism pathway and epilepsy. Acta Epileptologica (2022); Royer-Bertrand, B., Jequier Gygax, M. & Superti-Furga, A. De novo variants in CACNA1E found in patients with intellectual disability, developmental regression and social cognition deficit but no seizures. Molecular Autism (2021). | CommonCrawl
Can Sulfuric acid or HCl be used with the addition of Citric acid for the control of iron and other scales in a reverse osmosis plant?
Using citric acid is very expensive. If the water source contains oxygen, has been exposed to oxidizers, or if ferric-based coagulants have been used, the iron will be in the ferric state and can typically be controlled by dosing sulfuric acid to reduce the pH to ~6 and dosing antiscalant.
If the water does not contain any dissolved oxygen, most of the iron will be in the ferrous state. Ferrous ions are extremely soluble and easily controlled by most antiscalants without acid.
However, in many cases, some dissolved oxygen will be present. It takes only 0.1 ppm of dissolved oxygen to oxidize 0.7 ppm of ferrous ions to the ferric state.
$$\mathrm{Fe^{2+}} + \frac{1}{4}\mathrm{O_2} + \mathrm{H^+} \longrightarrow \mathrm{Fe^{3+}} + \frac{1}{2}\mathrm{H_2O}$$
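As a quick check of the 0.1 ppm to 0.7 ppm figure quoted above, using standard molar masses (O2 ≈ 32 g/mol, Fe ≈ 55.85 g/mol) and the 4:1 Fe2+:O2 stoichiometry of the reaction:

$$\frac{0.1\ \mathrm{mg\ O_2/L}}{32\ \mathrm{g/mol}} \times 4 \times 55.85\ \mathrm{g/mol} \approx 0.70\ \mathrm{mg\ Fe^{2+}/L}$$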
When the water source is from a deep aquifer, we assume anaerobic conditions where all iron will be in the ferrous state (as long as the water goes directly to the RO with no holding tanks and no dosing of chlorine or other oxidizers). When the water source is from a surficial aquifer, oxygen will be present and iron can be assumed to be in the ferric state.
Ferric iron can be controlled to a certain extent using antiscalant without pH reduction, but the antiscalant demand becomes significant; antiscalants have a higher affinity for trivalent metal hydroxides than for other surfaces. They will therefore preferentially adsorb to colloidal ferric hydroxide while allowing calcium carbonate and other sparingly soluble salts to precipitate and form scale on the membrane surface.
Some smaller plants use greensand filters for iron control, and they are very effective; iron concentrations are typically reduced to below 0.1 ppm. But they represent a significant capital expense, require a large footprint, and need regular maintenance to operate efficiently. | CommonCrawl
Spectral Gap and Edge Excitations of $d$-dimensional PVBS models on half-spaces
Michael Bishop, Bruno Nachtergaele, Amanda Young
Abstract: We analyze a class of quantum spin models defined on half-spaces in the $d$-dimensional hypercubic lattice bounded by a hyperplane with inward unit normal vector $m\in\mathbb{R}^d$. The family of models was previously introduced as the single-species Product Vacua with Boundary States (PVBS) model, which is a spin-$1/2$ model with XXZ-type nearest-neighbor interactions depending on parameters $\lambda_j\in (0,\infty)$, one for each coordinate direction. For any given values of the parameters, we prove an upper bound on the spectral gap above the unique ground state of these models, which vanishes for exactly one direction of the normal vector $m$. For all other choices of $m$ we derive a positive lower bound on the spectral gap, except for the case $\lambda_1 =\cdots =\lambda_d=1$, which is known to have gapless excitations in the bulk.
Self monitoring of blood glucose - a survey of diabetes UK members with type 2 diabetes who use SMBG
Katharine D Barnard, Amanda J Young, Norman R Waugh
BMC Research Notes , 2010, DOI: 10.1186/1756-0500-3-318
Abstract: 554 participants completed the survey, of whom 289 (52.2%) were male. 20% of respondents were recently diagnosed (< 6 months). Frequency of SMBG varied, with 43% of participants testing between once and four times a day and 22% testing less than once a month or for occasional periods. 80% of respondents reported high satisfaction with SMBG, and reported feeling more 'in control' of their diabetes management using it. The most frequently reported use of SMBG was to make adjustments to food intake or confirm a hyperglycaemic episode. Women were significantly more likely to report feelings of guilt or self-chastisement associated with out-of-range readings (p < .001). SMBG was clearly of benefit to this group of confirmed users, who used the results to adjust diet, physical activity or medications. However many individuals (particularly women) reported feelings of anxiety and depression associated with its use. A recent review of evidence on self monitoring of blood glucose (SMBG) in Type 2 diabetes, done to inform the deliberations of a Department of Health (England) working group on SMBG, found that the published evidence for effectiveness and cost-effectiveness was weak, and its value not proven [1]. However the working group had heard from Diabetes UK that many people with type 2 diabetes are convinced that SMBG is of value. The report of the working group is published at http://www.diabetes.nhs.uk/document.php?o=1023 [2]. Some individuals with Type 2 diabetes report the ability to self-monitor blood glucose levels to be empowering, enabling them to feel more 'in control' of their diabetes and able to react to readings quickly, rather than having to wait for their routine HbA1c test [3]. Possible benefits of SMBG include immediate confirmation of hypoglycaemia or hyperglycaemia; an increase in motivation to stimulate greater self-care; and data with which patients or healthcare teams could adjust treatment regimens [3]. However, a recurring theme in the recent rev
PD5: A General Purpose Library for Primer Design Software
Michael C. Riley, Wayne Aubrey, Michael Young, Amanda Clare
PLOS ONE , 2013, DOI: 10.1371/journal.pone.0080156
Abstract: Background Complex PCR applications for large genome-scale projects require fast, reliable and often highly sophisticated primer design software applications. Presently, such applications use pipelining methods to utilise many third party applications and this involves file parsing, interfacing and data conversion, which is slow and prone to error. A fully integrated suite of software tools for primer design would considerably improve the development time, the processing speed, and the reliability of bespoke primer design software applications. Results The PD5 software library is an open-source collection of classes and utilities, providing a complete collection of software building blocks for primer design and analysis. It is written in object-oriented C++ with an emphasis on classes suitable for efficient and rapid development of bespoke primer design programs. The modular design of the software library simplifies the development of specific applications and also integration with existing third party software where necessary. We demonstrate several applications created using this software library that have already proved to be effective, but we view the project as a dynamic environment for building primer design software and it is open for future development by the bioinformatics community. Therefore, the PD5 software library is published under the terms of the GNU General Public License, which guarantee access to source-code and allow redistribution and modification. Conclusions The PD5 software library is downloadable from Google Code and the accompanying Wiki includes instructions and examples: http://code.google.com/p/primer-design
Metabotropic Glutamate Receptors as Novel Therapeutic Targets on Visceral Sensory Pathways
L. Ashley Blackshaw, Amanda J. Page, Richard L. Young
Frontiers in Neuroscience , 2011, DOI: 10.3389/fnins.2011.00040
Abstract: Metabotropic glutamate receptors (mGluR) have a diverse range of structures and molecular coupling mechanisms. There are eight mGluR subtypes divided into three major groups. Group I (mGluR1 and 5) is excitatory; groups II (mGluR2 and 3) and III (mGluR 4, 6, and 7) are inhibitory. All mGluR are found in the mammalian nervous system but some are absent from sensory neurons. The focus here is on mGluR in sensory pathways from the viscera, where they have been explored as therapeutic targets. Group I mGluR are activated by endogenous glutamate or constitutively active without agonist. Constitutive activity can be exploited by inverse agonists to reduce neuronal excitability without synaptic input. This is promising for reducing activation of nociceptive afferents and pain using mGluR5 negative allosteric modulators. Many inhibitory mGluR are also expressed in visceral afferents, many of which markedly reduce excitability. Their role in visceral pain remains to be determined, but they have shown promise in inhibition of the triggering of gastro-esophageal reflux, via an action on mechanosensory gastric afferents. The extent of reflux inhibition is limited, however, and may not reach a clinically useful level. On the other hand, negative modulation of mGluR5 has very potent actions on reflux inhibition, which has produced the most likely candidates so far as therapeutic drugs. These act probably outside the central nervous system, and may therefore provide a generous therapeutic window. There are many unanswered questions about mGluR along visceral afferent pathways, the answers to which may reveal many more therapeutic candidates.
Product Vacua and Boundary State Models in d Dimensions
Sven Bachmann, Eman Hamza, Bruno Nachtergaele, Amanda Young
Statistics , 2014, DOI: 10.1007/s10955-015-1260-7
Abstract: We introduce and analyze a class of quantum spin models defined on d-dimensional lattices Lambda subset of Z^d, which we call `Product Vacua with Boundary States' (PVBS). We characterize their ground state spaces on arbitrary finite volumes and study the thermodynamic limit. Using the martingale method, we prove that the models have a gapped excitation spectrum on Z^d except for critical values of the parameters. For special values of the parameters we show that the excitation spectrum is gapless. We demonstrate the sensitivity of the spectrum to the existence and orientation of boundaries. This sensitivity can be explained by the presence or absence of edge excitations. In particular, we study a PVBS models on a slanted half-plane and show that it has gapless edge states but a gapped excitation spectrum in the bulk.
The Role of Maternalism in Contemporary Paid Domestic Work [PDF]
Amanda Moras
Sociology Mind (SM) , 2013, DOI: 10.4236/sm.2013.33033
Various studies of domestic work have identified close personal relationships between domestic workers and employers as a key instrument in the exploitation of domestic workers, allowing employers to solicit unpaid services as well as a sense of superiority (Rollins, 1985; Romero, 2002; Glenn, 1992; Hondagneu-Sotelo, 2001). Likewise, other scholars have pointed out that close employee-employer relationships may actually empower domestic workers, increasing job leverage (Thorton-Dill, 1994). Ultimately, these lines are blurry and ever-changing as employers continuously redefine employee expectations. Drawing from a larger study involving thirty interviews with white upper-middle-class women who currently employ domestic workers (mostly housecleaners), this paper explores employers' interactions with domestic workers. Through these interviews, this research elaborates on how employers and employees interact, how employers feel about these interactions, and to what extent these interactions are informed by the widely reported maternalistic tendencies of the past, while also considering the consequences of this.
Epistemological Limits to Scientific Prediction: The Problem of Uncertainty [PDF]
Amanda Guillan
Open Journal of Philosophy (OJPP) , 2014, DOI: 10.4236/ojpp.2014.44053
Abstract: A key issue regarding the reliability of scientific prediction is uncertainty, which also affects its possibility as scientific knowledge. Thus, uncertainty is directly related to the epistemological limits of prediction in science. Within this context, this paper considers the obstacles to scientific predictions that are related to uncertainty. The analysis is made according to the twofold character of the limits of science, which is characterized in terms of the "barriers" and the "confines." In addition, the study takes into account the presence of internal and external factors related to the epistemological limits of science. Following these lines of research, the analysis is focused on two steps. First, there is a characterization of the coordinates of Nicholas Rescher's approach, which is particularly important regarding the epistemological limits to scientific prediction. Second, there is a study of uncertainty as an epistemological obstacle to predictability. Thereafter, the consequences for the future are pointed out.
Artful Deception, Languaging, and Learning—The Brain on Seeing Itself [PDF]
Amanda Preston
Despite having named ourselves Homo sapiens—a designation contingent on word/reason (logos) as our chosen identifier—recent evidence suggests language is only a small fraction of the story. Human beings would be more aptly named Homo videns—seeing man—if percentage of cortex area per modality determined the labeling of an organism. Instead, the sentential ontology of language philosophers and linguists persists in spite of the growing body of cognitive research challenging the language instinct as our most defining characteristic. What is becoming clearer is that language is palimpsestic. It is like a marked transparency over visuospatial maps, which are wired to sensorimotor maps. The left lateralized interpreter uses language to communicably narrativize an apparent unity, but people are not the only fictionalizing animals. This examination looks to cognitive and psychological studies to suggest that a prelinguistic instinct to make sense of unrelated information is a biological consequence of intersections among pattern matching, symbolic thinking, aesthetics, and emotive tagging, which is accessible by language, but not a product thereof. Language, rather, is just an outer surface. Rather than thinking man, playing man, or tool-making man, we would be better described as storytelling animals (narrativism). Like other social mammals, we run simulation heuristics to predict causal chains, object/event frequency, value association, and problem solving. The post hoc product is episodic fiction. Language merely serves to magnify what Friedrich Nietzsche rightfully identified as an art of dissimulation—lying. In short, the moral of the story is that we are making it all up as we go along.
Developing a matrix to identify and prioritise research recommendations in HIV Prevention
Sydney Anstee, Alison Price, Amanda Young, Katharine Barnard, Bob Coates, Simon Fraser, Rebecca Moran
BMC Public Health, 2011, DOI: 10.1186/1471-2458-11-381
Abstract: Categories for prevention and risk groups were developed for HIV prevention in consultation with external experts. These were used as axes on a matrix tool to map evidence. Systematic searches for publications on HIV prevention were undertaken using electronic databases for primary and secondary research undertaken mainly in UK, USA, Canada, Australia and New Zealand, 2006-9. Each publication was screened for inclusion then coded. The risk groups and prevention areas in each paper were counted: several publications addressed multiple risk groups. The counts were exported to the matrix and clearly illustrate the concentrations and gaps of literature in HIV prevention. 716 systematic reviews, randomised control trials and other primary research met the inclusion criteria for HIV prevention. The matrix identified several under-researched areas in HIV prevention. This is the first categorisation system for HIV prevention and the matrix is a novel tool for evidence mapping. Some important yet under-researched areas have been identified in HIV prevention evidence: identifying the undiagnosed population; international adaptation; education; intervention combinations; transgender; sex-workers; heterosexuals and older age groups. Other research recommendations: develop the classification system further and investigate transferability of the matrix to other prevention areas; evidence syntheses may be appropriate in areas dense with research; have studies with positive findings been translated to practice? The authors of this study invite research suggestions relating to the evidence gaps identified within remits of Public Health or any appropriate NETSCC programme. Follow the 'Suggest Research' links from: http://www.netscc.ac.uk/. Enter - HIVProject - in optional ID for HTA or in first information box for other programmes. HIV/AIDS persists as a major global health priority with the number of people living with HIV continuing to increase [1]. A report from the Global HIV Pre
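The matrix itself is conceptually just a cross-tabulation of coded publications. A minimal sketch of how such an evidence-mapping matrix might be tabulated, assuming illustrative category names (the study's actual prevention areas and risk groups came out of its expert consultation and are not reproduced here):

```python
from collections import Counter

# Hypothetical axis categories, stand-ins for the study's own taxonomy.
PREVENTION_AREAS = ["testing", "education", "behavioural", "biomedical"]
RISK_GROUPS = ["MSM", "sex workers", "heterosexuals", "older adults"]

def build_matrix(publications):
    """Count publications per (risk group, prevention area) cell.

    `publications` is an iterable of dicts with 'risk_groups' and
    'prevention_areas' lists; one paper may increment several cells,
    as in the study, where several publications addressed multiple groups.
    """
    counts = Counter()
    for pub in publications:
        for group in pub["risk_groups"]:
            for area in pub["prevention_areas"]:
                counts[(group, area)] += 1
    return counts

papers = [
    {"risk_groups": ["MSM"], "prevention_areas": ["testing", "education"]},
    {"risk_groups": ["sex workers", "older adults"], "prevention_areas": ["biomedical"]},
]
matrix = build_matrix(papers)
for group in RISK_GROUPS:
    row = [matrix.get((group, area), 0) for area in PREVENTION_AREAS]
    print(f"{group:>14}: {row}")  # zero or low cells flag under-researched areas
```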
Can color vision variation explain sex differences in invertebrate foraging by capuchin monkeys?
Amanda D. MELIN, Linda M. FEDIGAN, Hilary C. YOUNG, Shoji KAWAMURA
Current Zoology, 2010,
Abstract: Invertebrates are the main source of protein for many small-bodied monkeys. Prey vary in size, mobility, degree of protective covering, and use of the forest, i.e. canopy height, and whether they are exposed or embed themselves in substrates. Sex-differentiation in foraging patterns is well documented for some monkey species and recent studies find that color vision phenotype can also affect invertebrate foraging. Since vision phenotype is polymorphic and sex-linked in most New World monkeys - males have dichromatic vision and females have either dichromatic or trichromatic vision - this raises the possibility that sex differences are linked to visual ecology. We tested predicted sex differences for invertebrate foraging in white-faced capuchins Cebus capucinus and conducted 12 months of study on four free-ranging groups between January 2007 and September 2008. We found both sex and color vision effects. Sex: Males spent more time foraging for invertebrates on the ground. Females spent more time consuming embedded, colonial invertebrates, ate relatively more "soft" sedentary invertebrates, and devoted more of their activity budget to invertebrate foraging. Color Vision: Dichromatic monkeys had a higher capture efficiency of exposed invertebrates and spent less time visually foraging. Trichromats ate relatively more "hard" sedentary invertebrates. We conclude that some variation in invertebrate foraging reflects differences between the sexes that may be due to disparities in size, strength, reproductive demands or niche preferences. However, other intraspecific variation in invertebrate foraging that might be mistakenly attributed to sex differences actually reflects differences in color vision [Current Zoology 56 (3): 300–312, 2010]
Water flow probabilistic predictions based on a rainfall–runoff simulator: a two-regime model with variable selection
Marie Courbariaux, Pierre Barbillon (ORCID: orcid.org/0000-0002-7766-7693) & Éric Parent
Journal of Agricultural, Biological and Environmental Statistics, volume 22, pages 194–219 (2017)
Probabilistic forecasting aims at producing a predictive distribution of the quantity of interest instead of a single best-guess pointwise estimate. With regard to water flow forecasts, the two main sources of uncertainty stem from unknown future rainfall and temperature (input error, i.e., meteorological uncertainty) and from the inadequacy of the deterministic simulator mimicking the rainfall–runoff (RR) transformation (hydrological uncertainty or RR error). These two sources of uncertainty can be dealt with separately and only the latter will be considered here. Only hydrological uncertainty is at stake when recorded meteorological data (instead of meteorological forecasts) are used as inputs to feed the RR simulator (RRS) for probabilistic predictions. The predictive performance of the RRS may strongly depend on the hydrological regimes: rapid flood variations induce large errors of anticipation but a series of dry events will translate into a much smoother sequence of river levels due to the easily predictable behavior of the soil reservoir emptying. Consequently, a model with several regimes adapted to different error structures appears as a solution to cope with the issue of nonstationary predictive variance. The river regime is modeled as a latent variable, the distribution of which is based on additional outputs of the RRS to be selected. Inference is performed by the EM algorithm with both steps leading to explicit analytic expressions. Asymptotic confidence regions for the estimates are provided within the same EM framework. Model selection is also performed, including the length of the model memory as well as the choice of explanatory variables for the latent regimes. The model is applied to a series of water flow forecasts routinely issued by two hydroelectricity producers in France and in Québec and compared with their present operational forecasting methods.
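The two-regime idea — a latent state switching between two Gaussian error structures, fit by EM with closed-form steps — can be illustrated with a deliberately simplified sketch. This is not the authors' model (which drives the latent regime with a probit regression on selected RRS outputs and includes autoregressive memory); it is a minimal two-component EM on forecast errors under those simplifying assumptions:

```python
import numpy as np

def em_two_regime(y, n_iter=200):
    """Minimal EM for a two-component Gaussian mixture on forecast errors y.

    Toy stand-in for the paper's model: regime membership is i.i.d. with a
    free mixing weight here, rather than probit-linked to simulator outputs.
    """
    mu = np.array([y.mean() - y.std(), y.mean() + y.std()])
    var = np.array([y.var(), y.var()])
    pi = 0.5
    for _ in range(n_iter):
        # E-step: posterior probability tau_t of belonging to regime 1
        # (the 1/sqrt(2*pi) constants cancel in the ratio, so they are omitted)
        d0 = (1 - pi) * np.exp(-(y - mu[0]) ** 2 / (2 * var[0])) / np.sqrt(var[0])
        d1 = pi * np.exp(-(y - mu[1]) ** 2 / (2 * var[1])) / np.sqrt(var[1])
        tau = d1 / (d0 + d1)
        # M-step: closed-form updates (in the paper too, both steps are explicit)
        pi = tau.mean()
        mu = np.array([np.average(y, weights=1 - tau), np.average(y, weights=tau)])
        var = np.array([np.average((y - mu[0]) ** 2, weights=1 - tau),
                        np.average((y - mu[1]) ** 2, weights=tau)])
    return pi, mu, var

rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0, 0.2, 800),    # calm regime: small errors
                         rng.normal(0, 1.5, 200)])   # flood regime: large errors
print(em_two_regime(errors))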
Ailliot, P. and Monbet, V. (2012). Markov-switching autoregressive models for wind time series. Environmental Modelling & Software, 30:92–101.
Albert, J. H. and Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88(422):669–679.
Andreassian, V., Bergstrom, S., Chahinian, N., Duan, Q., Gusev, Y., Littlewood, I., Mathevet, T., Michel, C., Montanari, A., Moretti, G., et al. (2006). Catalogue of the models used in MOPEX 2004/2005. IAHS publication, 307:41.
Bates, B. C. and Campbell, E. P. (2001). A Markov chain Monte Carlo scheme for parameter estimation and inference in conceptual rainfall-runoff modeling. Water Resources Research, 37(4):937–947.
Box, G. and Jenkins, G. (1970). Time Series Analysis: Forecasting and Control. Holden–Day, San Francisco, Ca.
Box, G. E. and Cox, D. R. (1964). An analysis of transformations. Journal of the Royal Statistical Society. Series B (Methodological), pages 211–252.
Chib, S. (1996). Calculating posterior distributions and modal estimates in Markov mixture models. Journal of Econometrics, 75(1):79–97.
Collet, J., Épiard, X., and Coudray, P. (2009). Simulating hydraulic inflows using PCA and ARMAX. The European Physical Journal-Special Topics, 174(1):125–134.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (methodological), pages 1–38.
Engeland, K., Renard, B., Steinsland, I., and Kolberg, S. (2010). Evaluation of statistical models for forecast errors from the HBV model. Journal of Hydrology, 384(1):142–155.
Evin, G., Kavetski, D., Thyer, M., and Kuczera, G. (2013). Pitfalls and improvements in the joint inference of heteroscedasticity and autocorrelation in hydrological model calibration. Water Resources Research, 49(7):4518–4524.
Evin, G., Thyer, M., Kavetski, D., McInerney, D., and Kuczera, G. (2014). Comparison of joint versus postprocessor approaches for hydrological uncertainty estimation accounting for error autocorrelation and heteroscedasticity. Water Resources Research, 50(3):2350–2375.
Fortin, V. (2000). Le modèle météo-apport HSAMI: historique, théorie et application. Institut de recherche d'Hydro-Québec, Varennes.
Furrer, E. M., Jacques, C., and Favre, A.-C. (2006). Short term discharge prediction using a Markovian regime switching model. Technical report, INRS-ETE.
Gailhard, J. (2014). Algorithme de recalage associé à MORDOR diagnostic et proposition d'améliorations. Note Technique Interne H-44200965-2014-000075, EDF-DTG.
Gelfand, A. E. and Smith, A. F. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410):398–409.
Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378.
Gneiting, T., Raftery, A. E., Westveld, A. H., and Goldman, T. (2005). Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Monthly Weather Review, 133(5):1098–1118.
Hemri, S., Fundel, F., and Zappa, M. (2013). Simultaneous calibration of ensemble river flow predictions over an entire range of lead times. Water Resources Research, 49(10):6744–6755.
Hemri, S., Lisniak, D., and Klein, B. (2015). Multivariate postprocessing techniques for probabilistic hydrological forecasting. Water Resources Research, 51(9):7436–7451.
Hersbach, H. (2000). Decomposition of the continuous ranked probability score for ensemble prediction systems. Weather and Forecasting, 15(5):559–570.
Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions, vol. 1–2. New York: John Wiley & Sons.
Krzysztofowicz, R. (2002). Bayesian system for probabilistic river stage forecasting. Journal of Hydrology, 268(1):16–40.
Kuczera, G. (1983). Improved parameter inference in catchment models: 1. evaluating parameter uncertainty. Water Resources Research, 19(5):1151–1162.
Li, M., Wang, Q., Bennett, J., and Robertson, D. (2015). A strategy to overcome adverse effects of autoregressive updating of streamflow forecasts. Hydrology and Earth System Sciences, 19(1):1–15.
Louis, T. A. (1982). Finding the observed information matrix when using the EM algorithm. Journal of the Royal Statistical Society. Series B, 44(2):226–233.
Lu, Z.-Q. and Berliner, L. M. (1999). Markov switching time series models with application to a daily runoff series. Water Resources Research, 35(2):523–534.
Matheson, J. E. and Winkler, R. L. (1976). Scoring rules for continuous probability distributions. Management Science, 22(10):1087–1096.
Mathevet, T. (2010). Erreur empirique de modèle. Note Technique Interne D4165/NT/2010-00395-A, EDF-DTG.
Morawietz, M., Xu, C.-Y., Gottschalk, L., and Tallaksen, L. M. (2011). Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system. Journal of Hydrology, 407(1):58–72.
Perreault, L., Garçon, R., and Gaudet, J. (2007). Analyse de séquences de variables aléatoires hydrologiques à l'aide de modèles de changement de régime exploitant des variables atmosphériques. La Houille Blanche (6):111–123.
Pianosi, F. and Raso, L. (2012). Dynamic modeling of predictive uncertainty by regression on absolute errors. Water Resources Research, 48(3).
Raftery, A. E., Gneiting, T., Balabdaoui, F., and Polakowski, M. (2005). Using Bayesian model averaging to calibrate forecast ensembles. Monthly Weather Review, 133(5).
Schaefli, B., Talamba, D. B., and Musy, A. (2007). Quantifying hydrological modeling errors through a mixture of normal distributions. Journal of Hydrology, 332(3):303–315.
Schoups, G. and Vrugt, J. A. (2010). A formal likelihood function for parameter and predictive inference of hydrologic models with correlated, heteroscedastic, and non-Gaussian errors. Water Resources Research, 46(10).
Schwarz, G. (1978). Estimating the Dimension of a Model. The Annals of Statistics, 6(2):461–464.
Sorooshian, S. and Dracup, J. A. (1980). Stochastic parameter estimation procedures for hydrologic rainfall-runoff models: Correlated and heteroscedastic error cases. Water Resources Research, 16(2):430–442.
Thyer, M., Kuczera, G., and Wang, Q. (2002). Quantifying parameter uncertainty in stochastic models using the Box-Cox transformation. Journal of Hydrology, 265(1):246–257.
Todini, E. (2008). A model conditional processor to assess predictive uncertainty in flood forecasting. International Journal of River Basin Management, 6(2):123–137.
Vrugt, J. A. and Robinson, B. A. (2007). Treatment of uncertainty using ensemble methods: Comparison of sequential data assimilation and Bayesian model averaging. Water Resources Research, 43(1).
Wang, Q., Shrestha, D. L., Robertson, D., and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, 48(5).
This work was supported by Électricité de France and by Hydro-Québec [research Grant Number 694R] through the thesis of M. Courbariaux. We would like to thank Anne-Catherine Favre, Joël Gailhard and Luc Perreault for their unfailing help and constructive comments on earlier drafts of the article. The forecasting and development teams at EDF-DTG and Hydro-Québec have provided the necessary material and case studies as well as much valuable advice; we thank in particular Catherine Guay, Isabelle Chartier and Marie Minville from IREQ, Rémy Garçon, Matthieu Le-Lay and Federico Garavaglia from EDF-DTG. We also thank Joan Sobota for English proofreading. We finally thank the Associate Editor and the two reviewers for their comments and questions, which helped us improve the paper.
UMR MIA-Paris, AgroParisTech, INRA, Université Paris-Saclay, 75005, Paris, France
Marie Courbariaux, Pierre Barbillon & Éric Parent
Correspondence to Pierre Barbillon.
Appendix 1: Operational predictive method
EDF's operational predictive method consists of 3 independent modules: a deterministic model, an error model and an empirical copula.
Deterministic model (Gailhard 2014) The deterministic model in use at EDF is an autoregressive model combined with exponential smoothing. The strength of the autocorrelation is supposed to increase with the rate of water flow coming from the deep reservoirs of the watershed.
Error model (Mathevet 2010) The error model is a heteroscedastic conditional normal model derived for each forecasting lead time h (after normalization):
$$\begin{aligned} \left( Y_{t+h}|X_{t+h}=x\right) =b_{h}\left( x\right) +x+\sigma _{h}\left( x\right) \varepsilon ,\;\;\varepsilon \sim \mathcal {N}\left( 0,1\right) , \end{aligned}$$
where \(b_{h}\) and \(\sigma _{h}\) are tabulated functions of x.
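A minimal simulation of this error structure, assuming illustrative forms for the bias and spread functions b_h and sigma_h (in the operational method these are tabulated from archived forecast errors per lead time, not the toy forms used here):

```python
import numpy as np

def simulate_forecast_error(x, h, rng):
    """Draw Y_{t+h} | X_{t+h} = x under the heteroscedastic normal error model.

    b_h and sigma_h below are hypothetical stand-ins; EDF's method looks
    them up in tables built from historical forecasts.
    """
    b_h = -0.01 * h * x               # assumed bias growing with lead time
    sigma_h = 0.05 * np.sqrt(h) * x   # assumed spread growing with lead time
    return b_h + x + sigma_h * rng.standard_normal()

rng = np.random.default_rng(1)
samples = [simulate_forecast_error(x=100.0, h=24, rng=rng) for _ in range(5)]
print(samples)  # five predictive draws of the normalized flow at lead time 24
```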
Empirical copula One finally resorts to an empirical copula to obtain samples from a space and time multivariate distribution, starting from samples of the marginal (lead time by lead time) distributions.
Appendix 2: Fisher information matrix
$$\begin{aligned} G_{\gamma _k}(\mathbf {Y},\mathbf {Z})&=\sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}}\left( \mathbf {U}_t/\sigma _k^2 \cdot Y_t +0 \cdot Y_t^2-\mathbf {U}_t\mathbf {U}_t^T\varvec{\gamma }_k/\sigma ^2_k \right) \\&=\sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}}\left( \mathbf {U}_t/\sigma _k^2\cdot Y_t-\mathbf {U}_t\mathbf {U}_t^T\varvec{\gamma }_k/\sigma ^2_k \right) ,\\ G_{\sigma ^2_k}(\mathbf {Y},\mathbf {Z})&=\sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}}\left( \frac{\left( Y_t-\varvec{\gamma }_k^T\mathbf {U}_t \right) ^2}{2\sigma ^4_k}-\frac{1}{2\sigma _k^2}\right) ,\\ G_{\mathbf {B}}(\mathbf {Y},\mathbf {Z})&= \sum _{t>t_{\min }} (Z_t-\mathbf {B}^T\mathbf {V}_t)\mathbf {V}_t. \end{aligned}$$
For any k, \(k'\not =k\),
$$\begin{aligned} \frac{\partial }{\partial \varvec{\gamma }_k}G_{\varvec{\gamma }_k}(\mathbf {Y},\mathbf {Z})= & {} -\sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}} \mathbf {U}_t\mathbf {U}_t^T/\sigma ^2_k,\\ \frac{\partial }{\partial \sigma _k^2}G_{\sigma ^2_k}(\mathbf {Y},\mathbf {Z})= & {} \sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}} \left( \frac{-\left( Y_t-\varvec{\gamma }_k^T\mathbf {U}_t \right) ^2}{\sigma ^6_k}+\frac{1}{2\sigma _k^4}\right) ,\\ \frac{\partial }{\partial \varvec{\gamma }_k}G_{\sigma ^2_k}(\mathbf {Y},\mathbf {Z})= & {} \sum _{t>t_{\min }} \mathbb {I}_{\{S_{t}=k\}}\left( \frac{-\mathbf {U}_t \left( Y_t-\varvec{\gamma }_k^T\mathbf {U}_t \right) }{\sigma ^4_k}\right) ,\\ \frac{\partial }{\partial \theta _{k'}}G_{\theta _k}(\mathbf {Y},\mathbf {Z})= & {} 0,\\ \frac{\partial }{\partial \mathbf {B}}G_{\mathbf {B}}(\mathbf {Y},\mathbf {Z})= & {} -\sum _{t>t_{\min }}\mathbf {V}_t\mathbf {V}_t^T,\\ \frac{\partial }{\partial \theta _k}G_{\mathbf {B}}(\mathbf {Y},\mathbf {Z})= & {} 0. \end{aligned}$$
Then, the first term in the Louis decomposition can be computed, since \(\mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}|\mathbf {Y};(\varvec{\theta },\mathbf {B}))=\tau _{kt}\) was computed above.
For the second term in the Louis decomposition, we notice that:
$$\begin{aligned} \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}^2|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}|\mathbf {Y};\varvec{\theta },\mathbf {B})=\tau _{kt},\\ \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}\mathbb {I}_{\{S_{t}=k'\}}|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} 0 \text { for } k\not =k',\\ \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}\mathbb {I}_{\{S_{t'}=k\}}|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}|\mathbf {Y};\varvec{\theta },\mathbf {B})\mathbb {E}(\mathbb {I}_{\{S_{t'}=k\}}|\mathbf {Y};\varvec{\theta },\mathbf {B})\text { by independence}. \end{aligned}$$
We also need to compute:
$$\begin{aligned} \mathbb {E}(Z_t|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} \sum _k \tau _{kt}E_{kt}\quad \text {and}\\ \mathbb {E}\left( (Z_t-\mathbf {B}^T\mathbf {V}_t)^2|\mathbf {Y};\varvec{\theta },\mathbf {B}\right)= & {} \text {var}(Z_t|\mathbf {Y};\varvec{\theta },\mathbf {B})=\sum _k \tau _{kt}\varsigma _{kt}, \end{aligned}$$
where
$$\begin{aligned} \varsigma _{1t}= & {} 1-\frac{\phi (-\mathbf {B}^T \mathbf {V}_t)}{\varPhi (-\mathbf {B}^T\mathbf {V}_t)}\left( \frac{\phi (-\mathbf {B}^T\mathbf {V}_t)}{\varPhi (-\mathbf {B}^T\mathbf {V}_t)}-\mathbf {B}^T\mathbf {V}_t\right) ,\\ \varsigma _{0t}= & {} 1-\frac{\phi (-\mathbf {B}^T \mathbf {V}_t)}{1-\varPhi (-\mathbf {B}^T\mathbf {V}_t)}\left( \frac{\phi (-\mathbf {B}^T\mathbf {V}_t)}{1-\varPhi (-\mathbf {B}^T\mathbf {V}_t)}+\mathbf {B}^T\mathbf {V}_t\right) . \end{aligned}$$
Again, we rely on the independence between the \(Z_t\)s.
The remaining terms are easily evaluated:
$$\begin{aligned} \mathbb {E}(\mathbb {I}_{\{S_{t'}=k\}}Z_t|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} \tau _{kt'}\mathbb {E}(Z_t|\mathbf {Y};\varvec{\theta },\mathbf {B})\text { by independence},\\ \mathbb {E}(\mathbb {I}_{\{S_{t}=k\}}Z_t|\mathbf {Y};\varvec{\theta },\mathbf {B})= & {} \tau _{kt}E_{kt}. \end{aligned}$$
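The \(\varsigma\) terms above are the conditional variances of the probit latent variable \(Z_t \sim \mathcal {N}(\mathbf {B}^T\mathbf {V}_t,1)\) truncated at zero; in particular \(\varsigma _{0t}\) matches \(\text {var}(Z_t \mid Z_t>0)\). A quick Monte Carlo check of that expression, a minimal sketch in which the value 0.7 chosen for \(\mathbf {B}^T\mathbf {V}_t\) is arbitrary:

```python
import numpy as np
from scipy.stats import norm

def varsigma0(m):
    """Closed-form variance of Z ~ N(m, 1) given Z > 0, with m = B'V_t,
    written exactly as in the formula above: lam = phi(-m)/(1 - Phi(-m))."""
    lam = norm.pdf(-m) / (1.0 - norm.cdf(-m))
    return 1.0 - lam * (lam + m)

m = 0.7                                  # illustrative value of B'V_t
rng = np.random.default_rng(2)
z = m + rng.standard_normal(2_000_000)
print(varsigma0(m), z[z > 0].var())      # analytic vs Monte Carlo: ~0.542 both
```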
Courbariaux, M., Barbillon, P. & Parent, É. Water flow probabilistic predictions based on a rainfall–runoff simulator: a two-regime model with variable selection. JABES 22, 194–219 (2017). https://doi.org/10.1007/s13253-017-0278-5
EM algorithm
Probit model
Model uncertainty
Probabilistic forecasts
Rainfall–runoff model
Reviews / Commentaries / Position Statements | January 01 2001
Management of Hyperglycemic Crises in Patients With Diabetes
Abbas E. Kitabchi, PHD, MD; Guillermo E. Umpierrez, MD; Mary Beth Murphy, RN, MS, CDE, MBA; Eugene J. Barrett, MD, PHD; Robert A. Kreisberg, MD; John I. Malone, MD; Barry M. Wall, MD
From the Division of Endocrinology (A.E.K., G.E.U., M.B.M.), University of Tennessee, and the Department of Nephrology (B.M.W.), Veterans Administration Hospital, Memphis, Tennessee; the Division of Endocrinology (E.J.B.), University of Virginia, Charlottesville, Virginia; the College of Medicine (R.A.K.), University of South Alabama, Mobile, Alabama; and the Department of Pediatrics (J.I.M.), University of South Florida, Tampa, Florida.
Address correspondence and reprint requests to Abbas E. Kitabchi, PhD, MD, University of Tennessee, Memphis, Division of Endocrinology, 951 Court Ave., Room 335M, Memphis, TN 38163. E-mail: [email protected].
Diabetes Care 2001;24(1):131–153
https://doi.org/10.2337/diacare.24.1.131
Diabetic ketoacidosis (DKA) and hyperosmolar hyperglycemic state (HHS) are two of the most serious acute complications of diabetes. These hyperglycemic emergencies continue to be important causes of morbidity and mortality among patients with diabetes in spite of major advances in the understanding of their pathogenesis and more uniform agreement about their diagnosis and treatment. The annual incidence rate for DKA estimated from population-based studies ranges from 4.6 to 8 episodes per 1,000 patients with diabetes (1,2), and in more recent epidemiological studies in the U.S., it was estimated that hospitalizations for DKA during the past two decades are increasing (3). Currently, DKA appears in 4-9% of all hospital discharge summaries among patients with diabetes (4,5). The incidence of HHS is difficult to determine because of the lack of population-based studies and the multiple combined illnesses often found in these patients. In general, it is estimated that the rate of hospital admissions due to HHS is lower than the rate due to DKA and accounts for <1% of all primary diabetic admissions (4,5,6).
Treatment of patients with DKA and HHS uses significant health care resources, which increases health care costs. In 1983, the cost of hospitalization for DKA in Rhode Island for 1 year was estimated to be $225 million (2). It was recently reported that treatment of DKA episodes represents more than one of every four health care dollars spent on direct medical care for adult patients with type 1 diabetes, and one of every two dollars in those patients experiencing multiple episodes of ketoacidosis (7). Based on an annual average of ∼100,000 hospitalizations for DKA in the U.S. (4) and estimated annual mean medical care charges of ∼$13,000 per patient experiencing a DKA episode (7), the annual hospital cost for patients with DKA may exceed $1 billion per year.
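The closing estimate follows directly from the two figures just cited:

\[
100{,}000\ \text{episodes/year}\times \$13{,}000\ \text{per episode}\approx \$1.3\ \text{billion/year},
\]

consistent with the stated floor of $1 billion per year.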
Mortality rates, which are <5% in DKA and ∼15% in HHS (4,5,6,8,9,10,11,12,13), increase substantially with aging and the presence of concomitant life-threatening illness. Similar outcomes of treatment of DKA have been noted in both community and teaching hospitals (14,15,16), and outcomes have not been altered by whether the managing physician is a family physician, general internist, house officer with attending supervision, or endocrinologist, so long as standard written therapeutic guidelines are followed (17,18).
This technical review aims to present updated recommendations for management of patients with hyperglycemic crises based on the pathophysiological basis of these conditions.
DKA consists of the biochemical triad of hyperglycemia, ketonemia, and acidemia (Fig. 1). As indicated, each of these features by itself can be caused by other metabolic conditions (19). Although it has been difficult to classify the degree and severity of DKA, we propose a working classification that may be useful for management of such a condition. Table 1 provides an empirical classification for DKA and HHS, with the caveat that the severity of illness will be influenced by the presence of concomitant intercurrent illnesses.
Fig. 1. The triad of DKA (hyperglycemia, acidemia, and ketonemia) and other conditions with which the individual components are associated. From Kitabchi and Wall (19).
The terms "hyperglycemic hyperosmolar nonketotic coma" and"hyperglycemic hyperosmolar nonketotic state" have been replaced with the term "hyperglycemic hyperosmolar state" (HHS)(20) to reflect the facts that 1) alterations of sensoria may often be present without coma and 2) the hyperosmolar hyperglycemic state may consist of moderate to variable degrees of clinical ketosis as determined by the nitroprusside method. As indicated, the degree of hyperglycemia in DKA is quite variable and may not be a determinant of the severity of DKA. Serum osmolality has been shown to correlate significantly with mental status in DKA and HHS(5,6,20,21,22,23)and is the most important determinant of mental status, as demonstrated by several studies. Table 2 provides estimates of typical deficits of water and electrolytes in DKA and HHS(20,24,25).
Although it has been repeatedly shown that infection is a common precipitating event in DKA and HHS in this country and abroad (4,12), recent studies suggest that omission of insulin or undertreatment with insulin may be the most important precipitating factor in urban African-American populations (5,26). Table 3 summarizes various studies (2,5,6,27,28,29,30) describing precipitating events for DKA. It is important to note that up to 20% of patients may present in the emergency room with either DKA or HHS without a previous diagnosis of diabetes (Table 3). In the African-American population, DKA has been increasingly noted in newly diagnosed obese type 2 diabetic patients (5,26,31). Therefore, the concept that the presence of DKA in type 2 diabetes is a rare occurrence is incorrect.
The most common types of infections are pneumonia and urinary tract infection, accounting for 30-50% of cases (Table 4). Other acute medical illnesses as precipitating causes include alcohol abuse, trauma, pulmonary embolism, and myocardial infarction, which can occur both in type 1 and 2 diabetes (6). Various drugs that alter carbohydrate metabolism, such as corticosteroids, pentamidine, sympathomimetic agents, and α- and β-adrenergic blockers, and excessive use of diuretics in the elderly may also precipitate the development of DKA and HHS.
The recent increased use of continuous subcutaneous insulin infusion pumps that use small amounts of short-acting insulin has been associated with an incidence of DKA that is significantly increased over the incidence seen with conventional methods of multiple daily insulin injections, in spite of the fact that most of the mechanical problems with insulin pumps have been resolved (6,32,33,34). In the Diabetes Control and Complications Trial, the incidence of DKA in patients on insulin pumps was about twofold higher than that in the multiple-injection group over a comparable time period (35). This may be due to the exclusive use of short-acting insulin in the pump, which if interrupted leaves no reservoir of insulin for blood glucose control.
Psychological factors and poor compliance, leading to omission of insulin therapy, are important precipitating factors for recurrent ketoacidosis. In young female patients with type 1 diabetes, psychological problems complicated by eating disorders may be contributing factors in up to 20% of cases of recurrent ketoacidosis (36,37). Factors that may lead to insulin omission in younger patients include fear of weight gain with good metabolic control, fear of hypoglycemia, rebellion against authority, and stress related to chronic disease (36). Noncompliance with insulin therapy has been found to be the leading precipitating cause for DKA in urban African-Americans and medically indigent patients (5,26). In addition, a recent study showed that diabetic patients without health insurance or with Medicaid alone had hospitalization rates for DKA that were two to three times higher than the rate in diabetic individuals with private insurance (38). In addition to the above-mentioned precipitating causes of DKA and HHS, there are numerous additional medical procedures and medications that may precipitate HHS. Some of these drugs trigger the development of hyperglycemic crises by causing a reversible deficiency in insulin action or insulin secretion (e.g., diuretics, β-adrenergic blockers, and dilantin), whereas other conditions cause hyperglycemic crises by inducing insulin resistance (e.g., hypercortisolism, acromegaly, and thyrotoxicosis). Some of the major causes of HHS are included in Table 4 (20).
Although the pathogenesis of DKA is better understood than that of HHS, the basic underlying mechanism for both disorders is a reduction in the net effective concentration of circulating insulin, coupled with a concomitant elevation of counterregulatory stress hormones (glucagon, catecholamines, cortisol, and growth hormone). Thus, DKA and HHS are extreme manifestations of impaired carbohydrate regulation that can occur in diabetes. Although many patients manifest overlapping metabolic clinical pictures, each condition can also occur in relatively pure form. In patients with DKA, the deficiency in insulin can be absolute, or it can be insufficient relative to an excess of counterregulatory hormones. In HHS, there is a residual amount of insulin secretion that minimizes ketosis but does not control hyperglycemia. This leads to severe dehydration and impaired renal function, leading to decreased excretion of glucose. These factors coupled with the presence of a stressful condition result in more severe hyperglycemia than that seen in DKA. In addition, inadequate fluid intake contributes to hyperosmolarity without ketosis, the hallmark of HHS. These pathogenic topics will be discussed under various subheadings.
Carbohydrate metabolism
When insulin is deficient (absolute or relative), hyperglycemia develops as a result of three processes: increased gluconeogenesis, accelerated glycogenolysis, and impaired glucose utilization by peripheral tissues (39,40,41,42,43,44). Increased hepatic glucose production results from the high availability of gluconeogenic precursors, such as amino acids (alanine and glutamine; as a result of accelerated proteolysis and decreased protein synthesis) (45), lactate (as a result of increased muscle glycogenolysis), and glycerol (as a result of increased lipolysis), and from the increased activity of gluconeogenic enzymes. These include PEPCK, fructose-1,6-biphosphatase, pyruvate carboxylase, and glucose-6-phosphatase, which are further stimulated by increased levels of stress hormones in DKA and HHS (46,47,48,49,50). From a quantitative standpoint, increased glucose production by the liver and kidney represents the major pathogenic disturbance responsible for hyperglycemia in these patients, and gluconeogenesis plays a greater metabolic role than glycogenolysis (46,47,48,49,50,51). Although the detailed biochemical mechanisms for gluconeogenesis are well established, the molecular basis and the role of counterregulatory hormones in DKA are the subject of debate; very few studies have attempted to establish a temporal relationship between the increase in the level of counterregulatory hormones and the metabolic alterations in DKA (52). However, studies of insulin withdrawal in previously controlled patients with type 1 diabetes indicate that a combination of increased catecholamines and glucagon (and a decreased level of free insulin) in a well-hydrated individual may be the initial event (41,43,53,54,55,56). Furthermore, in the absence of dehydration, vomiting, or other stress situations, ketosis is usually mild, while glucose levels increase with simultaneous increases in serum potassium (56).
Animal studies have shown that catecholamines stimulate glycogen phosphorylase via β-receptor stimulation and subsequent production of cAMP-dependent protein kinase. Decreased insulin in the presence of an ambient level of glucagon, which is usually higher in diabetic than in nondiabetic individuals, leads to a high glucagon-to-insulin ratio, which inhibits production of an important metabolic regulator: fructose-2,6-biphosphate. Reduction of this intermediate stimulates the activity of fructose-1,6-biphosphatase (an enzyme that converts fructose-1,6-biphosphate to fructose-6-phosphate) and inhibits phosphofructokinase, the rate-limiting enzyme in the glycolytic pathway (57). Gluconeogenesis is further enhanced through stimulation of PEPCK by the increased ratio of glucagon to insulin in the presence of increased cortisol in DKA (57,58,59). In addition, the rapid decrease in the level of available insulin also leads to decreased glycogen synthase. These interactions can be summarized as follows:
\[\begin{array}{c}{\uparrow}\mathrm{glucagon/insulin}+{\uparrow}\mathrm{catecholamines}{\rightarrow}\\{\uparrow}\mathrm{cAMP}{\rightarrow}{\uparrow}\mathrm{cAMP-dependent\ protein}\\\mathrm{kinase}{\rightarrow}{\downarrow}\mathrm{fructose-2,6-biphosphate}{\rightarrow}\\{\downarrow}\mathrm{glycolysis\ and}{\ }{\uparrow}\mathrm{gluconeogenesis\ and}\\{\downarrow}\mathrm{glycogen\ synthase}\end{array}\]
The final step of glucose production occurs by conversion of glucose-6-phosphate to glucose, which is catalyzed by another rate-limiting enzyme of gluconeogenesis, hepatic glucose-6-phosphatase, which is stimulated by increased catabolic hormones and decreased insulin levels. These metabolic alterations are depicted in Fig. 2. Major substrates for gluconeogenesis are lactate, glycerol, alanine (in the liver), and glutamine (in the kidney). Alanine and glutamine are provided by the process of excess proteolysis and decreased protein synthesis, which occurs as a result of increased catabolic hormones and decreased insulin (45,60). In DKA and HHS, hyperglycemia causes an osmotic diuresis due to glycosuria, resulting in loss of water and electrolytes, hypovolemia, dehydration, and decreased glomerular filtration rate, which further increase the severity of hyperglycemia (see below). Although increased hepatic gluconeogenesis is the main mechanism of hyperglycemia in severe ketoacidosis, recent studies have shown a significant portion of gluconeogenesis may be accomplished via the kidney (51). Decreased insulin availability and partial insulin resistance, which exist in DKA and HHS by different mechanisms (see below), also contribute to decreased peripheral glucose utilization and add to the overall hyperglycemic state in both conditions.
Fig. 2. Proposed biochemical changes that occur during DKA leading to increased gluconeogenesis and lipolysis and decreased glycolysis. Note that lipolysis occurs mainly in adipose tissue. Other events occur primarily in the liver (except some gluconeogenesis in the kidney). Lighter arrows indicate inhibited pathways in DKA. F-6-P, fructose-6-phosphate; G-(X)-P, glucose-(X)-phosphate; HK, hexokinase; HMP, hexose monophosphate; PC, pyruvate carboxylase; PFK, phosphofructokinase; PEP, phosphoenolpyruvate; PK, pyruvate kinase; TCA, tricarboxylic acid; TG, triglycerides. From Kitabchi et al. (6).
Lipid and ketone metabolism
The increased production of ketones in DKA is the result of a combination of insulin deficiency and increased concentrations of counterregulatory hormones, particularly epinephrine, which lead to the activation of hormone-sensitive lipase in adipose tissue (61,62,63,64). The increased activity of tissue lipase causes a breakdown of triglyceride into glycerol and free fatty acids (FFAs). Although glycerol is used as a substrate for gluconeogenesis in the liver and the kidney, the massive release of FFAs assumes pathophysiological predominance in the liver, the FFAs serving as precursors of the ketoacids in DKA (44,63). In the liver, FFAs are oxidized to ketone bodies, a process predominantly stimulated by glucagon. Increased concentration of glucagon in DKA reduces the hepatic levels of malonyl-CoA by blocking the conversion of pyruvate to acetyl-CoA through inhibition of acetyl-CoA carboxylase, the first rate-limiting enzyme in de novo fatty acid synthesis (63,64,65,66). Malonyl-CoA inhibits carnitine palmitoyl-transferase (CPT)-I, the rate-limiting enzyme for transesterification of fatty acyl-CoA to fatty acyl-carnitine, allowing oxidation of fatty acids to ketone bodies. CPT-I is required for movement of FFA into the mitochondria, where fatty acid oxidation takes place. The increased fatty acyl-CoA and CPT-I activity in DKA leads to increased ketogenesis (67,68). In addition to increased production of ketone bodies, there is evidence that clearance of ketones is decreased in patients with DKA (69,70,71). This decrease may be due to low insulin concentration, increased glucocorticoid level, and decreased glucose utilization by peripheral tissues (72).
The role of individual counterregulatory hormones in the process of ketogenesis is reviewed below. Some of the first studies demonstrating net ketogenesis by the human liver in patients with DKA were done nearly 50 years ago (39). By combining measurements of arterial and hepatic venous ketone concentrations and estimation of splanchnic blood flow in patients with DKA, the liver was demonstrated to produce large amounts of ketones, and insulin treatment was demonstrated to reduce ketone production promptly. These findings were subsequently confirmed and extended with improved analytical techniques (73). To our knowledge, rates of ketogenesis have not been measured in hyperosmolar nonketotic patients using either organ balance or isotopic methods. Subsequent work using tracer methods (41,74) has demonstrated that even brief withdrawal of insulin from type 1 diabetic patients results in prompt development of ketosis. Insulin withdrawal from diabetic patients, however, leads to complex changes in circulating concentrations of many stress hormones. As a result, it is difficult to dissect the relative contributions of insulin deficiency and stress hormone excess in the regulation of ketogenesis. This is well illustrated in studies examining glucagon action. Numerous in vitro and some in vivo studies have demonstrated a potent role for glucagon in the stimulation of ketogenesis. However, some of these studies have used very high glucagon concentrations, and their physiological significance has been questioned. In a recent study in which blood glucose concentrations were carefully controlled (to eliminate suppressive effects of hyperglycemia on lipolysis), a lipolytic effect of glucagon was demonstrated (75). Another human study (76) demonstrated modest increases in ketogenesis when plasma glucagon was increased in insulin-deficient subjects. In contrast with the somewhat equivocal actions of physiological or near-physiological concentrations of glucagon, cortisol appears to have a more predictable stimulatory action on ketogenesis (77,78). This may result from both effects on peripheral lipolysis and increased supply of FFAs, as well as from direct hepatic effects.
Growth hormone may also play a prominent role in ketogenesis. Even modest physiological doses of growth hormone can markedly increase circulating levels of FFAs and ketone bodies (79,80). Because these changes with growth hormone administration are observed within 60 min, increased ketogenesis appears to be the result of the action of growth hormone itself rather than locally generated IGF-1. It has been reported that in patients with type 1 diabetes, the administration of growth hormone leads to significant increases in FFAs, ketone bodies, and glucose concentrations (81).
Adrenergic stimulation can also increase lipolysis and hepatic ketogenesis. Epinephrine secretion by the adrenal medulla is markedly enhanced in DKA (Table 5). In vitro, epinephrine has a marked effect to increase lipolysis in adipocytes. In vivo, epinephrine can increase plasma concentrations of FFAs, at least when insulin deficiency is present. In addition, epinephrine facilitates hepatic ketogenesis directly (82,83). Norepinephrine at concentrations that approximate those seen in the synaptic cleft stimulates lipolysis by adipocytes and enhances ketogenesis (84,85).
In addition to the individual effects of stress hormones, infusion of combinations of counterregulatory hormones has been observed to have synergistic effects when compared with those seen with single hormone infusions (86,87). Indeed, in the setting of fixed levels of insulin, infusing mixtures of stress hormones to reach high physiological/severe stress levels can precipitate marked increases in lipolysis and ketogenesis (44,67). Spontaneous DKA is characterized by simultaneous elevations of multiple insulin-antagonizing (counterregulatory) hormones (6,88,89,90) in the face of reduced insulin, which brings about the altered metabolic profiles seen in DKA. Thus, DKA is analogous to a fasting state, where ketosis is accompanied by elevations of counterregulatory hormones and reduction of insulin but to a lesser degree than in DKA. The condition in DKA has been referred to as a "superfasted" state (91).
Having suggested that stress hormones either singly or in combination are major contributors to ketogenesis and the development of the acidotic state in DKA, the question arises whether HHS differs from DKA with regard to stress hormone secretion. There are surprisingly few data regarding this issue. Reduced concentrations of FFAs, cortisol, and growth hormone (92) and reduced levels of glucagon have been demonstrated in HHS relative to DKA (93). In another study, the concentrations of glucagon, cortisol, growth hormone, epinephrine, and norepinephrine were measured in patients presenting with acute decompensation of their diabetes (94). Some subjects were hyperglycemic with little or no ketosis, whereas others were frankly ketoacidotic. In this study, no clear-cut differences between hormonal levels in DKA and those in HHS could be identified. However, there were significant positive correlations between degree of ketonemia and plasma concentrations of growth hormone and FFAs, and there was a negative correlation with serum C-peptide. Glucagon and cortisol concentrations correlated well with plasma glucose, but not with degree of ketonemia. This study presents correlative data, but does not establish causal relationships between hormonal levels and alterations of metabolic pathways; hence, it does not settle the controversy of hormonal status in DKA and HHS. In another study, 12 HHS and 22 DKA patients showed no differences with regard to FFAs, cortisol, or glucagon (95). This work is of special interest because it demonstrated that in HHS, both basal and stimulated C-peptide levels were five- to sevenfold higher than those in the DKA group. These data are depicted in Table 5 and are contrasted with data from other authors.
The scarcity of data available in HHS prevents firm conclusions as to whether or not differences in stress hormone profiles contribute to the less prominent ketosis in that setting. Available data are consistent with multiple contributing factors, with the most consistent differences being lower growth hormone and higher insulin in HHS than in DKA (Table 5) (92,95). The higher insulin levels (demonstrated by high basal and stimulated C-peptide) in HHS provide enough insulin to inhibit lipolysis in HHS (since it takes less insulin for antilipolysis than for peripheral glucose uptake [67,96,97,98]) but not enough for optimal carbohydrate metabolism. Although Table 5 shows similar levels of FFAs in HHS and DKA, plasma FFAs may not be reflective of portal vein FFA levels, which in turn regulate ketogenesis. It is important to emphasize that studies performed before 1980, which showed similar blood levels of insulin in DKA and HHS (99), used assays that were not free from interference from proinsulin. Because patients with DKA and HHS present with an overlapping syndrome, the differences between DKA and HHS become matters of degree, not fundamental pathogenetic differences. However, it is important to remember that hyperosmolarity of severe DKA, which occurs in about one-third of DKA patients (23), is secondary to fluid losses due to osmotic diuresis and to variable degrees of impaired fluid intake due to nausea and vomiting; the hyperosmolarity in HHS patients is due to more prolonged osmotic diuresis and to inability to take fluid. This can be secondary either to mental retardation (in certain cases in children) or to chronic debilitation in elderly patients who are unaware of or unable to take adequate fluid (216,217).
Water and electrolyte metabolism
The development of dehydration and sodium depletion in DKA and HHS is the result of increased urinary output and electrolyte losses (25,100,101). Hyperglycemia leads to osmotic diuresis in both DKA and HHS. In DKA, urinary ketoanion excretion on a molar basis is generally less than half that of glucose. Ketoanion excretion, which obligates urinary cation excretion as sodium, potassium, and ammonium salts, also contributes to a solute diuresis. The extent of dehydration, however, is typically greater in HHS than in DKA. At first, this seems paradoxical because patients with DKA experience the dual osmotic load of ketones and glucose. The more severe dehydration in HHS, despite the lack of severe ketonuria, may be attributable to the more gradual onset and longer duration of metabolic decompensation (102) and partially to the fact that patients presenting with HHS typically have an impaired fluid intake. Other factors that may contribute to excessive volume losses include diuretic use, fever, diarrhea, and nausea and vomiting. The more severe dehydration, together with the older average age of patients with HHS and the presence of other comorbidities, almost certainly accounts for the higher mortality of HHS (102). In addition, osmotic diuresis promotes the net loss of multiple minerals and electrolytes (Na, K, Ca, Mg, Cl, and PO4). Although some of these can be replaced rapidly during treatment (Na, K, and Cl), others require days or weeks to restore losses and achieve balance (25,100,101).
The severe derangement of water and electrolytes in DKA and HHS is the result of insulin deficiency, hyperglycemia, and hyperketonemia (in DKA). In DKA and HHS, insulin deficiency per se may also contribute to renal losses of water and electrolytes because insulin stimulates salt and water reabsorption in the proximal and distal nephron and phosphate reabsorption in the proximal tubule (100,101,103). During severe hyperglycemia, the renal threshold of glucose (∼200 mg/dl) and ketones is exceeded; therefore, urinary excretion of glucose in DKA and HHS may be as much as 200 g/day, and urinary excretion of ketones in DKA may be ∼20-30 g/day, with total osmolar load of ∼2,000 mOsm (103). The osmotic effects of glucosuria result in impairment of NaCl and H2O reabsorption in the proximal tubule and loop of Henle (100). The ketoacids formed during DKA (β-hydroxybutyric and acetoacetic) are strong acids that fully dissociate at physiological pH. Thus, ketonuria obligates excretion of positively charged cations (Na, K, NH4+). The hydrogen ions are titrated by plasma bicarbonate, resulting in metabolic acidosis. The retention of ketoanions leads to an increase in the plasma anion gap.
The losses of electrolytes and water in DKA and HHS are summarized in Tables 1 and 2. During HHS and DKA, intracellular dehydration occurs as hyperglycemia and water loss lead to increased plasma tonicity, leading to a shift of water out of cells. This shift of water is also associated with a shift of potassium out of cells into the extracellular space. Potassium shifts are further enhanced by the presence of acidosis and the breakdown of intracellular protein secondary to insulin deficiency (104). Furthermore, entry of potassium into cells is impaired in the presence of insulinopenia. Marked renal potassium losses occur as a result of osmotic diuresis and ketonuria. Progressive volume depletion leads to decreased glomerular filtration rate and greater retention of glucose and ketoanions in plasma. Thus, patients with a better history of food, salt, and fluid intake prior to and during DKA have better preservation of kidney function, greater ketonuria, lower ketonemia, and lower anion gap and are less hyperosmolar. These patients may, therefore, present with greater degrees of hyperchloremic metabolic acidosis (105). On the other hand, diabetic patients with a history of diminished fluid and solute intake during the development of acute metabolic decompensation, plus loss of fluid through nausea and vomiting, typically present with greater degrees of volume depletion, increased hyperosmolarity, and impaired renal function and greater retention of glucose and ketoanions in plasma. The greater retention of plasma ketoanions is reflected in a greater increment in the plasma anion gap. Such patients may present with greater alteration of sensoria, which is more commonly found in HHS than DKA (8,102). However, in HHS, as mentioned above, the inability to take fluid (often in elderly patients) plus other pathogenic mechanisms leads to greater hyperosmolarity. These pathogenic pathways and their relationship to clinical conditions of DKA and HHS are depicted in Fig. 3.
Fig. 3. Pathogenesis of DKA and HHS.
During treatment of DKA with insulin, hydrogen ions are consumed as ketoanion metabolism is facilitated. This contributes to regeneration of bicarbonate, correction of metabolic acidosis, and decrease in plasma anion gap. The urinary loss of ketoanions, as sodium and potassium salts, therefore represents the loss of potential bicarbonate (106), which is gradually recovered within a few days or weeks (107).
Insulin resistance in hyperglycemic crises
Soon after insulin therapy became available, the administration of 10 U insulin every 2 h was reported to be effective for the treatment of DKA (108,109). In subsequent decades, however, large doses of insulin were recommended because two early studies suggested that larger doses of insulin were more effective (110,111). In the 1950s and 1960s, two prospective randomized studies compared high-, moderate-, and intermediate-dose insulin therapy in the treatment of DKA. The results showed no difference in response to therapy regardless of insulin dose (112,113). In the early 1970s, numerous studies demonstrated that "low-dose" or "physiological" (0.1 U · kg-1 · h-1) doses of insulin were effective in controlling DKA (114,115,116,117,118,119,120). None of these studies used randomized prospective protocols (121). Between 1976 and 1980, however, numerous prospective randomized studies in adults and children demonstrated the efficacy of lower or physiological doses of insulin by various routes of therapy, which, unlike the high-dose protocol, were associated with a lower incidence of hypokalemia and hypoglycemia (122,123,124,125,126,127,128,129). The average glucose decrement under such low-dose protocols was between 75 and 120 mg · dl-1 · h-1, which was very similar to the response to larger doses of insulin. Because of the similar metabolic response to high or low doses of insulin, it was questioned whether DKA patients were significantly more insulin resistant than well-controlled type 1 diabetic patients (18,56).
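To make the "physiological" dose concrete, a minimal sketch of the arithmetic behind a 0.1 U · kg-1 · h-1 infusion and the glucose trajectory implied by the decrement quoted above (an illustration of the numbers in the text only, not a treatment protocol; the mid-range 100 mg/dl/h fall is an assumption):

```python
def low_dose_infusion_u_per_h(weight_kg, rate_u_per_kg_h=0.1):
    """Hourly insulin infusion at the 'physiological' low-dose rate."""
    return weight_kg * rate_u_per_kg_h

def hours_to_target(glucose_mg_dl, target_mg_dl=250, fall_mg_dl_h=100.0):
    """Rough time to target using the 75-120 mg/dl/h decrement quoted above
    (a mid-range 100 mg/dl/h is assumed here)."""
    return max(0.0, (glucose_mg_dl - target_mg_dl) / fall_mg_dl_h)

print(low_dose_infusion_u_per_h(70))   # 7.0 U/h for a hypothetical 70-kg patient
print(hours_to_target(650))            # ~4 h from 650 to 250 mg/dl
```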
Several studies, however, have demonstrated that when insulin's action on glucose disposal in diabetic subjects is compared with that in healthy control subjects, both DKA and HHS are associated with a significant amount of insulin resistance (130,131,132,133). One of the major reasons for the success of low-dose insulin is the fact that most of the protocols recommend that patients in DKA or HHS be aggressively hydrated before or during insulin therapy. The hyperosmolar state alone has been shown to cause insulin resistance both in vivo and in vitro (90,130). Hydration before insulin therapy has also been shown to decrease glucagon, cortisol, catecholamines, and aldosterone by at least threefold, whereas growth hormone, prolactin, and parathyroid hormone do not exhibit such changes (90). The blood glucose decrement during hydration is partially due to improvement in glomerular filtration rate and excretion of large amounts of glucose in the urine (90,134,135). Lack of blood glucose decrement may therefore indicate inadequate hydration or renal function impairment (13). Hydration therapy alone has been reported to partially correct pH and plasma bicarbonate in two studies (44,90), but in another study, pH and plasma bicarbonate were not corrected until insulin was added to the regimen (128). There are, in addition, very rare cases of DKA in which extraordinary insulin resistance is present, which results in multiple hospital admissions (23) or in which hundreds or even thousands of units of insulin are required before resolution of hyperglycemia (136).
History and physical examination
DKA and HHS are medical emergencies that require prompt recognition and treatment. The first approach to these patients consists of a rapid but careful history and physical examination with special attention to 1) patency of airway, 2) mental status, 3) cardiovascular and renal status, 4) sources of infection, and 5) state of hydration (6,137,138,139). These steps should allow determination of the degree of urgency and priority with which various laboratory results should be obtained so that treatment can start without delay. DKA usually develops rapidly, over a time span of <24 h, whereas HHS symptoms may occur more insidiously, with polyuria, polydipsia, and weight loss persisting for several days before admission. In patients with DKA, nausea and vomiting are common symptoms. Abdominal pain is occasionally seen in adults (and is commonly seen in children), sometimes mimicking an acute abdomen (140). Although the cause has not been elucidated, dehydration of the muscle tissue, delayed gastric emptying, and ileus induced by electrolyte disturbance and metabolic acidosis have been implicated as possible causes of abdominal pain. Acidosis, which can stimulate the medullary respiratory center, can cause rapid and deep respiration (Kussmaul breathing).
Physical examination reveals other findings, such as a fruity breath odor (similar to the odor of nail polish remover) as the result of volatile acetone and signs of dehydration, including loss of skin turgor, dry mucous membranes, tachycardia, and hypotension. Mental status can vary from full alertness to profound lethargy; however, <20% of patients with DKA or HHS are hospitalized with loss of consciousness (5,6,8,9,20,23,24). In HHS, mental obtundation and coma are more frequent because the majority of patients, by definition, are hyperosmolar (20,141). In some patients with HHS, focal neurological signs (hemiparesis or hemianopsia) and seizures may be the dominant clinical features (141,142,143,144). Although the most common precipitating event is infection, most patients are normothermic or even hypothermic at presentation, because of either skin vasodilation or low fuel-substrate availability.
The easiest and most urgent laboratory tests after a prompt history and physical examination are determination of blood glucose by finger stick and urinalysis with reagent strips to assess qualitative amounts of glucose, ketones, nitrite, and leukocyte esterase in the urine.
Laboratory evaluation
The initial laboratory evaluation of a patient with suspected DKA or HHS should include immediate determination of arterial blood gases, blood glucose, and blood urea nitrogen (BUN); determination of serum electrolytes, osmolality, creatinine, and ketones; urinalysis; and a complete blood count with differential. Bacterial cultures of urine, blood, and other tissues should be obtained, and appropriate antibiotics should be administered if infection is suspected. In children without heart, lung, or kidney disease, the initial evaluation may be modified, at the discretion of the physician, to include a venous pH in lieu of an arterial pH. The workup for sepsis may be omitted in children, unless warranted by initial evaluation, because the most common precipitating factor of DKA in this age-group is insulin omission.
Tables 1 and 2 summarize the biochemical criteria for diagnosis and empirical subclassification of DKA and HHS. The most widely used diagnostic criteria for DKA are blood glucose >250 mg/dl, arterial pH <7.3, serum bicarbonate <15 mEq/l, and moderate degree of ketonemia and/or ketonuria. Accumulation of ketoacids usually results in an increased anion gap metabolic acidosis. The plasma anion gap is calculated by subtracting the major measured anions (chloride and bicarbonate) from the major measured cation (sodium). Because potassium concentration may be altered by acid-base disturbances and by total-body stores, it is not routinely used in the calculation of anion gap(44,145). The normal anion gap has been historically reported to be 12 mEq/l, and values >14-15 mEq/l have been considered to indicate the presence of an increased anion gap metabolic acidosis(44,145). Most laboratories, however, currently measure sodium and chloride concentrations using ion-specific electrodes. The plasma chloride concentration typically measures 2-6 mEq/l higher with ion-specific electrodes than with prior methods; thus, the normal anion gap using the current methodology has been reported to be in the range of 7-9 mEq/l(146,147). Using these values, an anion gap of >10-12 mEq/l would indicate the presence of increased anion gap acidosis(146,147).
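To make the anion gap arithmetic concrete, the following is a minimal illustrative sketch in Python (the function name and the sample chemistries are ours, not part of the consensus criteria):

```python
def anion_gap(na, cl, hco3):
    """Plasma anion gap (mEq/l): major measured cation minus major measured anions."""
    return na - (cl + hco3)

# Hypothetical admission chemistries: Na 132, Cl 95, HCO3 10 mEq/l.
# The result, 27 mEq/l, is well above the >10-12 mEq/l threshold cited
# above for ion-specific electrodes, indicating an increased anion gap acidosis.
print(anion_gap(132, 95, 10))  # 27
```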
Although these criteria for DKA have served well for research purposes, they may be somewhat restrictive for clinical practice. For example, the majority of patients admitted with the diagnosis of DKA present with mild metabolic acidosis; however, they show elevations of both serum glucose and β-hydroxybutyrate concentration(5). Most of these patients with mild ketoacidosis are alert and could be managed in a general hospital ward. Milder cases of DKA in which the patient is alert and able to tolerate oral intake may be treated and observed in the emergency room for a few hours and then discharged when stable. Patients with severe ketoacidosis typically present with a bicarbonate level <10 mEq/l and/or a pH <7.0, have total serum osmolality >330 mOsm/kg, usually present with mental obtundation(23), and are more likely to develop complications than are those patients with mild or moderate forms of ketoacidosis. Therefore, a classification of the severity of DKA appears to be more clinically appropriate because it may help with patient disposition and choice of therapy (see TREATMENT). This classification must be coupled with an understanding of any concomitant conditions affecting the patient's prognosis and the need for intravenous therapy for hydration.
Assessment of ketonuria and ketonemia, the key diagnostic features of ketoacidosis, is usually performed by the nitroprusside reaction. However, the nitroprusside reaction provides only a semiquantitative estimation of acetoacetate and acetone levels. This assay underestimates the severity of ketoacidosis because it does not recognize the presence of β-hydroxybutyric acid, which is the main ketoacid in DKA(148). Therefore, if possible, direct measurement of β-hydroxybutyrate, which is now available in many hospital settings, is preferable in establishing the diagnosis of ketoacidosis(149,150).
Diagnostic criteria for HHS include plasma glucose concentration >600 mg/dl, serum total osmolality >330 mOsm/kg, and absence of severe ketoacidosis. However, the laboratory profiles of HHS in previous series have shown higher mean values of glucose (998 mg/dl) and osmolality (363 mOsm/kg), with BUN 65 mg/dl, HCO3 21.6 mEq/l, sodium 143 mEq/l, creatinine 2.9 mg/dl, and anion gap 23.4 mEq/l(100,151). By definition, patients with HHS have a serum pH ≥7.3, a serum bicarbonate >18 mEq/l, and mild ketonemia and ketonuria. Approximately 50% of the patients with HHS have an increased anion gap metabolic acidosis as the result of concomitant ketoacidosis and/or an increase in serum lactate levels(151). Table 6 provides methods for measurement of anion gap and serum total and effective osmolality from serum chemistries.
In some cases, the diagnosis of DKA can be confounded by the coexistence of other acid-base disorders. Arterial pH may be normal or even increased, depending on the degree of respiratory compensation and the presence of metabolic alkalosis from frequent vomiting or diuretic use(152). Similarly, blood glucose concentration may be normal or only minimally elevated (<300 mg/dl) in 15% of patients with DKA, such as in alcoholic subjects or patients receiving insulin. In addition, wide variability in the type of metabolic acidosis has been reported: 46% of patients admitted for DKA had high anion gap acidosis, 43% had mixed anion gap acidosis and hyperchloremic metabolic acidosis, and 11% had only hyperchloremic metabolic acidosis (105).
The majority of patients with hyperglycemic emergencies present with leukocytosis. Admission serum sodium concentration is usually low in DKA because of the osmotic flux of water from the intracellular to the extracellular space in the presence of hyperglycemia. To assess the severity of sodium and water deficits, serum sodium may be corrected by adding 1.6 mEq to the measured serum sodium for each 100 mg/dl of glucose above 100 mg/dl(153). Admission serum potassium concentration is usually elevated because of a shift of potassium from the intracellular to the extracellular space caused by acidemia, insulin deficiency, and hypertonicity. On the other hand, in HHS, the measured serum sodium concentration is usually normal or elevated because of severe dehydration. In this setting, the corrected serum sodium concentration would be very high. Admission serum phosphate level in DKA may be elevated despite total-body phosphate depletion.
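A worked example of the sodium correction just described, given purely as an illustrative sketch (the helper name and sample values are hypothetical):

```python
def corrected_sodium(measured_na, glucose):
    """Corrected serum Na (mEq/l): add 1.6 mEq/l per 100 mg/dl of glucose above 100."""
    return measured_na + 1.6 * max(glucose - 100, 0) / 100

# Hypothetical DKA chemistries: a measured Na of 130 mEq/l at glucose 600 mg/dl
# corrects to 138 mEq/l, unmasking the true severity of the water deficit.
print(corrected_sodium(130, 600))  # 138.0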
Pitfalls of laboratory diagnosis
In assessment of blood glucose and electrolytes in DKA, certain precautions need to be taken in interpreting results. Severe hyperlipidemia, which is occasionally seen in DKA, can reduce measured serum glucose(154) and sodium(155) levels, factitiously leading to pseudohypo- or normoglycemia and pseudohyponatremia, respectively, in laboratories still using volumetric testing or dilution of samples with ion-specific electrodes. This should be rectified by clearing lipemic blood before measuring glucose or sodium or by using undiluted samples with ion-specific electrodes. Creatinine, which is measured by a colorimetric method, may be falsely elevated as a result of acetoacetate interference with the method(156,157). Hyperamylasemia, which is frequently seen in DKA, may be the result of extrapancreatic secretion(158) and should be interpreted cautiously as a sign of pancreatitis. The usefulness of urinalysis lies only in the initial diagnosis of glycosuria and ketonuria and detection of urinary tract infection. For quantitative assessment of glucose or ketones, the urine test is unreliable, because urine glucose concentration has poor correlation with blood glucose levels(159,160) and the major urine ketone, β-hydroxybutyrate, cannot be measured by the standard nitroprusside method(148).
Not all patients with ketoacidosis have DKA. Patients with chronic ethanol abuse with a recent binge culminating in nausea, vomiting, and acute starvation may present with alcoholic ketoacidosis (AKA). In virtually all reported series of AKA, the elevation of total ketone body concentration (7-10 mmol/l) is comparable to that reported in patients with DKA(161,162). However, in in vitro studies, the altered redox cellular state in AKA caused by an increased ratio of NADH to NAD levels leads to a reduction of pyruvate and oxaloacetate, which results in impaired gluconeogenesis(163). Additionally, low levels of malonyl-CoA stimulate ketogenesis, and high catecholamine levels result in decreased insulin secretion and an increased ratio of glucagon to insulin. This sets the stage for a shift in the equilibrium reaction toward β-hydroxybutyrate production(163,164). Consequently, AKA patients usually present with normal or even low plasma glucose levels and much higher levels of β-hydroxybutyrate than of acetoacetate. The average β-hydroxybutyrate-to-acetoacetate ratio observed in AKA may be as high as 7-10:1, as opposed to the 3:1 ratio observed in DKA (165). The variable that differentiates diabetes-induced and alcohol-induced ketoacidosis is the concentration of blood glucose. Whereas DKA is characterized by hyperglycemia (plasma glucose >250 mg/dl), the presence of ketoacidosis without hyperglycemia in an alcoholic patient is virtually diagnostic of AKA. Additionally, AKA patients frequently have hypomagnesemia, hypokalemia, and hypophosphatemia, as well as hypocalcemia due to decreased PTH as a result of hypomagnesemia (165).
Some patients with decreased food intake (<500 kcal/day) for several days may present with mild ketoacidosis (starvation ketosis). However, a healthy subject is able to adapt to prolonged fasting by increasing the clearance of ketone bodies in peripheral tissues (brain and muscle) and by enhancing the kidneys' ability to excrete ammonium to compensate for the increased ketoacid production(91). Thus, patients with starvation ketosis rarely present with a serum bicarbonate concentration <18 mEq/l and do not exhibit hyperglycemia.
DKA must also be distinguished from other causes of high anion gap metabolic acidosis, including lactic acidosis, advanced chronic renal failure, and ingestion of such drugs as salicylate, methanol, ethylene glycol, and paraldehyde. Measuring blood lactate concentration easily establishes the diagnosis of lactic acidosis (>5 mmol/l) because DKA patients seldom demonstrate this level of serum lactate(122,127,128). However, an altered redox state may obscure ketoacidosis in diabetic patients with lactic acidosis (166). Salicylate overdose is suspected in the presence of a mixed acid-base disorder (primary respiratory alkalosis and increased anion gap metabolic acidosis) in the absence of increased ketone levels. Diagnosis is confirmed by a serum salicylate level >80-100 mg/dl. Methanol ingestion results in acidosis from the accumulation of formic acid and, to a lesser extent, lactic acid. Methanol intoxication develops within 24 h after ingestion, and patients usually present with abdominal pain secondary to gastritis or pancreatitis and visual disturbances that vary from blurred vision to blindness (optic neuritis). Diagnosis is confirmed by the presence of an elevated methanol level. Ethylene glycol (antifreeze) ingestion leads to excessive production of glycolic acid. The diagnosis of ethylene glycol ingestion is suggested by the presence of increased serum osmolality and high anion gap acidosis without ketonemia, as well as neurological and cardiovascular abnormalities (seizures and vascular collapse), and the presence of calcium oxalate and hippurate crystals in the urine. Because methanol and ethylene glycol are low-molecular weight alcohols, their presence in plasma may be indicated by an increased (>20 mOsm/kg) plasma osmolar gap, defined as the difference between measured and calculated plasma osmolality. Paraldehyde ingestion is indicated by its characteristic strong odor on the breath. Table 7 summarizes the differential diagnosis of the various states of coma with respect to acid-base balance and related laboratory findings(167).
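The osmolar gap screen for low-molecular weight alcohols can be illustrated as follows (a sketch built on the formulas given in this section; the sample values are invented):

```python
def osmolar_gap(measured_osm, na, glucose, bun):
    """Measured minus calculated plasma osmolality (mOsm/kg)."""
    calculated = 2 * na + glucose / 18 + bun / 2.8
    return measured_osm - calculated

# Hypothetical case: measured 320 mOsm/kg with Na 140 mEq/l, glucose 110 mg/dl,
# and BUN 14 mg/dl gives a gap of ~29 mOsm/kg; a gap >20 mOsm/kg suggests an
# unmeasured alcohol such as methanol or ethylene glycol.
print(round(osmolar_gap(320, 140, 110, 14), 1))  # 28.9
```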
Therapeutic goals
The therapeutic goals for treatment of hyperglycemic crises in diabetes consist of 1) improving circulatory volume and tissue perfusion, 2) decreasing serum glucose and plasma osmolality toward normal levels, 3) clearing the serum and urine of ketones at a steady rate, 4) correcting electrolyte imbalances, and 5) identifying and treating precipitating events (Tables 2 and 3).
As shown in Figs. 4 and 5, monitoring of serum glucose values must be done every 1-2 h during treatment. Serum electrolytes, phosphate, and venous pH must be assessed every 2-6 h, depending on the clinical response of the patient. Foremost, the precipitating factor must be identified and treated. See Table 7 for a review of the laboratory evaluation of metabolic causes of acidosis and coma. A flow sheet (Fig. 6) is invaluable for recording vital signs, volume and rate of fluid administration, insulin dosage, and urine output and for assessing the efficacy of medical therapy(6). Figures 4 and 5 represent a successful protocol used by the authors for the treatment of DKA and HHS in adult patients. There are some differences in the treatment of children with DKA, which are described throughout the following sections. A protocol for the management of the pediatric patient with DKA and HHS is shown in Fig. 7.
—Protocol for the management of adult patients with DKA. *DKA diagnostic criteria: blood glucose >250 mg/dl, arterial pH <7.3, bicarbonate <15 mEq/l, and moderate ketonuria or ketonemia. †After history and physical examination, obtain arterial blood gases, complete blood count with differential, urinalysis, blood glucose, BUN, electrolytes, chemistry profile, and creatinine levels STAT as well as an electrocardiogram. Obtain chest X ray and cultures as needed. ‡Serum Na should be corrected for hyperglycemia (for each 100 mg/dl glucose >100 mg/dl, add 1.6 mEq to sodium value for corrected serum sodium value).
—Protocol for the management of adult patients with HHS. *Diagnostic criteria: blood glucose >600 mg/dl, arterial pH >7.3, bicarbonate >15 mEq/l, effective serum osmolality >320 mOsm/kg H2O, and mild ketonuria or ketonemia. This protocol is for patients admitted with mental status change or severe dehydration who require admission to an intensive care unit. For less severe cases, see text for management guidelines. Effective serum osmolality calculation: 2[measured Na (mEq/l)] + glucose (mg/dl)/18. †After history and physical examination, obtain arterial blood gases, complete blood count with differential, urinalysis, plasma glucose, BUN, electrolytes, chemistry profile, and creatinine levels STAT as well as an electrocardiogram. Obtain chest X ray and cultures as needed. ‡Serum Na should be corrected for hyperglycemia (for each 100 mg/dl glucose >100 mg/dl, add 1.6 mEq to sodium value for corrected serum value).
—DKA/HHS flowsheet for the documentation of clinical parameters, fluid and electrolytes, laboratory values, insulin therapy, and urinary output. From Kitabchi et al.(6).
—Protocol for the management of pediatric patients (<20 years) with DKA or HHS. *DKA diagnostic criteria: blood glucose >250 mg/dl, venous pH <7.3, bicarbonate <15 mEq/l, and moderate ketonuria or ketonemia. †HHS diagnostic criteria: blood glucose >600 mg/dl, venous pH >7.3, bicarbonate >15 mEq/l, and altered mental status or severe dehydration. ‡After the initial history and physical examination, obtain blood glucose, venous blood gases, electrolytes, BUN, creatinine, calcium, phosphorus, and urinalysis STAT. §Usually 1.5 times the 24-h maintenance requirements (∼5 ml · kg-1 · h-1) will accomplish a smooth rehydration; do not exceed two times the maintenance requirement. ∥The potassium in solution should be 1/3 KPO4 and 2/3 KCl or K acetate.
Replacement of fluid and electrolytes
The severity of fluid and sodium deficits, as shown in Table 2, is determined primarily by duration of hyperglycemia, level of renal function, and patient's oral intake of solute and water(23,24,44,145,167,168,169,170,171,172,173,174,175). The severity of dehydration and volume depletion can be estimated by clinical examination (44) using the following guidelines, with the caveat that these criteria are less reliable in patients with neuropathy and impaired cardiovascular reflexes:
An orthostatic increase in pulse without change in blood pressure indicates ∼10% decrease in extracellular volume (i.e., ∼2 liters isotonic saline).
An orthostatic drop in blood pressure (>15/10 mmHg) indicates a 15-20% decrease in extracellular volume (i.e., 3-4 liters).
Supine hypotension indicates a decrease of >20% in extracellular fluid volume (i.e., >4 liters).
The use of isotonic versus hypotonic saline in treatment of DKA and HHS is still controversial, but there is uniform agreement that in both DKA and HHS, the first liter of hydrating solution should be normal saline (0.9% NaCl), given as quickly as possible within the 1st hour and followed by 500-1,000 ml/h of 0.45 or 0.9% NaCl (depending on the state of hydration and serum sodium) during the next 2 h. State of hydration can also be estimated by calculating total and effective plasma osmolality and by calculating corrected serum sodium concentration. Total plasma osmolality can be calculated by the following equation: 2(measured Na+) (mEq/l) + glucose (mg/dl)/18 + BUN (mg/dl)/2.8. Total osmolality, whether calculated or directly measured by freezing point depression, is not equivalent to tonicity, because only those solutes that are relatively restricted to the extracellular space are effective in causing osmotic flux of water from intracellular to extracellular space. Urea is an ineffective osmole; therefore, effective osmolality is defined as 2(measured Na+) (mEq/l) + glucose (mg/dl)/18(45,172). Corrected serum sodium concentrations of >140 mEq/l and calculated total osmolality of >340 mOsm/kg H2O are associated with large fluid deficits(20,23,167,168,169,170,171). Calculated total and effective osmolalities can be correlated with mental status, stupor and coma typically occurring with total and effective osmolalities of >340 and 320 mOsm/kg H2O, respectively(21,23,174). The presence of stupor or coma in the absence of such hyperosmolarity demands prompt consideration of other causes of altered mental status(145). Severe hypertonicity is also more frequently associated with large sodium deficits and hypovolemic shock(21,168,169,170,171,172,173,174).
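The two osmolality formulas above differ only in the urea term; a minimal sketch makes the distinction explicit (the sample chemistries are hypothetical):

```python
def total_osmolality(na, glucose, bun):
    """2(Na) + glucose/18 + BUN/2.8, in mOsm/kg."""
    return 2 * na + glucose / 18 + bun / 2.8

def effective_osmolality(na, glucose):
    """2(Na) + glucose/18; urea is omitted because it is an ineffective osmole."""
    return 2 * na + glucose / 18

# Na 145 mEq/l, glucose 900 mg/dl, BUN 60 mg/dl: total ~361, effective 340 --
# both above the ~340/320 mOsm/kg thresholds at which stupor and coma occur.
print(round(total_osmolality(145, 900, 60), 1), effective_osmolality(145, 900))
```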
The initial goal of rehydration therapy is repletion of extracellular fluid volume by intravenous administration of isotonic saline(175) to restore intravascular volume; this will decrease counterregulatory hormones and lower blood glucose (90), which should augment insulin sensitivity(130). The initial fluid of choice is isotonic saline (0.9% NaCl), even in HHS patients or DKA patients with marked hypertonicity, particularly in patients with evidence of severe sodium deficits manifested by hypotension, tachycardia, and oliguria. Isotonic saline is hypotonic relative to the patient's extracellular fluid and remains restricted to the extracellular fluid compartment(175). Administration of hypotonic saline, which is similar in composition to fluid lost during osmotic diuresis, leads to gradual replacement of deficits in both intracellular and extracellular compartments(175). The choice of replacement fluid and the rate of administration in HHS remain controversial. Some authorities advocate the use of hypotonic fluid from the outset if effective osmolality is >320 mOsm/kg H2O. Others advocate initial use of isotonic fluid. As outlined in Fig. 5, an initial liter of 0.9% NaCl over the 1st hour is followed by either 0.45 or 0.9% NaCl, depending on the corrected serum sodium and the hemodynamic status of the patient.
Dextrose should be added to replacement fluids when blood glucose concentrations are <250 mg/dl in DKA or <300 mg/dl in HHS. This can usually be accomplished with the administration of 5% dextrose; however, in rare cases, a 10% dextrose solution may be needed to maintain plasma glucose levels and clear ketonemia. This allows continued insulin administration until ketogenesis is controlled in DKA and avoids too rapid correction of hyperglycemia, which may be associated with development of cerebral edema (especially in children)(176). An additional important aspect of fluid replacement therapy in both DKA and HHS is the replacement of ongoing urinary losses. Failure to adjust fluid replacement for urinary losses leads to a delay in repair of sodium, potassium, and water deficits(21,170,176). Overhydration is a concern when treating children with DKA, adults with compromised renal or cardiac function, and elderly patients with incipient congestive heart failure. Once blood pressure stability is achieved with the use of 10-20 ml · kg-1 · h-1 0.9% NaCl for 1-2 h, one should become more conservative with hydrating fluid (Figs. 4 and 5). Reduction in glucose and ketone concentrations should result in concomitant resolution of the osmotic diuresis of DKA. The resulting decrease in urine volume should lead to a reduction in the rate of intravenous fluid replacement. This reduces the risk of retention of excess free water, which contributes to brain swelling and cerebral edema, particularly in children. The duration of intravenous fluid replacement in adults and children is ∼48 h, depending on the clinical response to therapy. However, in a child, once cardiovascular stability is achieved and vomiting has stopped, it is safer and as effective to pursue oral rehydration.
Insulin therapy
The use of low-dose insulin reemerged in the 1970s in the U.S. after a prospective randomized study using high doses of intravenous and subcutaneous insulin (total dose 263 ± 45 U) or low-dose insulin (total dose 46 ± 5 U) administered intramuscularly after aggressive hydration demonstrated similar outcomes in the two groups. Furthermore, significant reduction in hypokalemia and no hypoglycemia were demonstrated in the low-dose group (122). These findings were confirmed in many subsequent studies in both adults and children(23,123,124,125,126,127,128).
An important question raised during this period concerned the optimum route of insulin delivery (17). In one comparative study, 45 patients (15 in each of three groups) were randomly assigned to receive low-dose insulin intravenously, subcutaneously, or intramuscularly, with initial therapy consisting of 0.33 U/kg body wt, as either an intravenous bolus or subcutaneous or intramuscular injections, followed by 7 U/h regular insulin administered in the same manner(127). Outcome parameters were found to be similar in the three groups. However, during the first 2 h of therapy, the group receiving intravenous insulin showed a greater decline in plasma glucose and ketone bodies. In fact, the group that received subcutaneous or intramuscular injections showed an increase rather than a decrease in ketone bodies in the 1st hour. It was of interest that the 10% glucose decrement, which was defined as an acceptable response in the 1st hour of insulin therapy, was achieved in 90% of the intravenous group but only in 30-40% of the intramuscular and subcutaneous groups. These groups required second and third doses of insulin to produce an acceptable glucose decrement. Because 15 of the 45 patients had never taken insulin, it was possible to determine their level of immunoreactive insulin (IRI) during therapy. Insulin levels during 8 h of therapy were measured with the following results: 1) the intravenous insulin bolus gave rise within a few minutes to >3,000 μU/ml of IRI, and 2) a similar amount of insulin given subcutaneously or intramuscularly barely doubled the initial level of IRI to ∼20 μU/ml in ∼15-30 min, and it took ∼4 h before the plasma insulin level reached a plateau at a level of 100 μU/ml. In the intravenous protocol, IRI declined after the initial peak and plateaued at the same level as in the intramuscular and subcutaneous groups, i.e., ∼100 μU/ml in 4 h. The rate of decline in blood glucose and ketone bodies after the first 2 h remained comparable in all three groups(88). In a subsequent study, administration of half the initial dose of insulin as an intravenous bolus and the other half as either intramuscular or subcutaneous injections was shown to be as effective in lowering ketone bodies as administration of the entire insulin dose intravenously(128). Furthermore, it was shown that addition of albumin to the infusate was not necessary to prevent insulin adsorption into the tubes and containers.
It has been well established that insulin resistance is present in many type 1 (without DKA) and most type 2 diabetic patients(44). During severe DKA, there are additional confounding factors, such as stress (elevated counterregulatory hormones), ketone bodies, FFAs, hemoconcentration, electrolyte deficiencies(132), and particularly hyperosmolarity, that exaggerate the insulin-resistant state. However, replacement of fluid and electrolytes alone may diminish this insulin resistance by decreasing levels of counterregulatory hormones and hyperglycemia as well as by decreasing osmolarity, making the cells more responsive to insulin(90,130). Low-dose insulin therapy is therefore most effective when preceded or accompanied by initial fluid and electrolyte replacement.
In the present proposed protocol, we have used essentially the same insulin regimen for both DKA and HHS, but because of a greater level of mental obtundation in HHS, we have recommended only using the intravenous route for HHS (Figs. 4 and 5). The important point to emphasize in insulin treatment of patients with DKA and HHS is that insulin should be used after initial serum electrolyte values are obtained while the patient is being hydrated with 1 liter of 0.9% saline. Insulin therapy is then initiated with an intravenous bolus of 0.15 U/kg or 10 U regular insulin, followed by either intravenous infusion of insulin at a rate of 0.1 U · kg-1 · h-1 or subcutaneous or intramuscular injection of 7-10 U/h. However, in children, the initial dose may be 0.1 U/kg continuous infusion with or without an insulin bolus. Some pediatric endocrinologists do not use >3 U/h in children.
As noted earlier(26,127), the rates of absorption of regular insulin administered intramuscularly and subcutaneously are comparable, with the subcutaneous route being less painful. However, an intravenous route should be used exclusively in the case of hypovolemic shock due to poor tissue perfusion. As depicted in Figs. 4 and 5, the insulin rate is decreased to 0.05-0.1 U · kg-1 · h-1 when blood glucose reaches 250-300 mg/dl. A 5% or, rarely, a 10% solution of dextrose is added to the hydrating solution at this time to keep blood glucose at its respective level (by adjusting the insulin rate) until the patient has recovered from DKA (i.e., HCO3 >18 mEq/l, anion gap ≤12, and pH >7.3) or HHS (osmolality <315 mOsm/kg and patient is alert). Blood glucose monitoring every 60 min will indicate whether this is sufficient to produce a consistent reduction in blood glucose. If blood glucose fails to decrease at a rate of 50-70 mg · dl-1 · h-1, the patient's volume status should be reassessed to ensure adequate volume repletion. An additional factor that may contribute to the failure of blood glucose to decline is an error in preparation of the insulin infusion mixture, which should be redone with greater care for the appropriate inclusion of insulin into the infusion solution. If the infusion continues to be ineffective, the infusion rate should be increased until the desired glucose-lowering effect is produced.
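Purely as arithmetic on the adult dosing rules above, and not as a clinical tool (the function and the 70-kg example are ours), the weight-based rates work out as follows:

```python
def adult_insulin_rates(weight_kg):
    """Initial adult regimen from the protocol above: 0.15 U/kg IV bolus, then
    0.1 U/kg/h infusion; halve the rate once glucose reaches 250-300 mg/dl."""
    return {
        "iv_bolus_units": 0.15 * weight_kg,
        "infusion_units_per_h": 0.1 * weight_kg,
        "reduced_units_per_h": 0.05 * weight_kg,
    }

# A 70-kg adult: 10.5 U bolus, 7 U/h infusion, then 3.5 U/h after glucose falls,
# with an expected glucose decline of 50-70 mg/dl per hour.
print(adult_insulin_rates(70))
```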
Potassium therapy
Table 2 shows typical potassium deficits, which represent mainly intracellular losses, in both DKA and HHS. Extracellular hyperosmolarity, secondary to hyperglycemia, causes a shift of water and potassium from the intracellular to the extracellular space, resulting in normal or elevated serum potassium concentrations despite total-body potassium deficits of 500-700 mEq(44,177,178,179). This potassium shift is further enhanced by insulin deficiency and the presence of acidosis and accelerated breakdown of intracellular protein(180).
Excessive urinary potassium losses, which occur as a result of osmotic diuresis with increased delivery of fluid and sodium to potassium secretory sites in the distal nephron, are ultimately responsible for the development of potassium depletion(177,178,179,180). Secondary hyperaldosteronism and urinary ketoanion excretion, as potassium salts, further augment potassium losses.
During treatment of DKA and HHS with hydration and insulin, there is typically a rapid decline in plasma potassium concentration as potassium reenters the intracellular compartment. However, potassium replacement should not be initiated until the serum potassium concentration is <5.5 mEq/l. We recommend administering one-third of the potassium replacement as potassium phosphate to avoid excessive chloride administration and to prevent severe hypophosphatemia. Others use potassium acetate to avoid an excessive chloride load. Because hypokalemia is the most life-threatening electrolyte derangement occurring during treatment, in the rare patients (∼4-10%) presenting with hypokalemia (179), potassium replacement should be initiated before insulin therapy and insulin therapy held until plasma potassium levels are >3.3 mEq/l. Intravenous potassium administration should not generally exceed 40 mEq in the 1st hour; thereafter, 20-30 mEq/h is needed to maintain plasma potassium levels between 4 and 5 mEq/l. We recommend electrocardiogram monitoring during potassium therapy in patients presenting with hypokalemia or in patients with any abnormal rhythms other than sinus tachycardia.
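The potassium rules in the preceding two paragraphs can be condensed into a toy decision sketch (illustrative only; the thresholds are taken from the text, the function is ours):

```python
def potassium_plan(serum_k):
    """Summarize the K+ replacement logic described above (thresholds in mEq/l)."""
    if serum_k < 3.3:
        return "Hold insulin; replace potassium first (<=40 mEq in the 1st hour)."
    elif serum_k < 5.5:
        return "Add 20-30 mEq/h to fluids, targeting 4-5 mEq/l."
    else:
        return "Defer potassium until serum K+ falls below 5.5 mEq/l."

print(potassium_plan(3.0))  # hold insulin, replace potassium first
```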
Bicarbonate therapy
Most current reviews do not recommend the routine use of alkali therapy in DKA because DKA tends to correct with insulin therapy. Insulin administration inhibits ongoing lipolysis and ketoacid production and promotes ketoanion metabolism. Because protons are consumed during ketoanion metabolism, bicarbonate is regenerated, leading to partial correction of metabolic acidosis. Arguments that favor the use of alkali therapy are based on the assumption that severe metabolic acidosis is associated with intracellular acidosis, which could contribute to organ dysfunction, such as in the heart, liver, or brain. Such organ dysfunction could in turn result in increased morbidity and mortality. Potential adverse effects of alkali therapy include worsened hypokalemia, worsened intracellular acidosis due to increased carbon dioxide production, delay of ketoanion metabolism, and development of paradoxical central nervous system acidosis(181).
A retrospective review(182) has failed to identify changes in morbidity or mortality with sodium bicarbonate therapy. After reviewing the risks and benefits of bicarbonate therapy, one author concluded that the only clear indication for use of bicarbonate is life-threatening hyperkalemia (183). Another study showed that ketoanion metabolism was delayed in the presence of bicarbonate therapy, but no significant difference in response between the bicarbonate and no-bicarbonate groups was noted(184). A prospective randomized study examined the effect of bicarbonate versus no bicarbonate in two groups of DKA patients with similar degrees of acidemia (pH 6.9-7.14)(185). In some patients, initial cerebrospinal fluid (CSF) chemistry was measured and compared with initial plasma chemistry. It was of interest that HCO3 and pH in CSF were significantly higher than those in plasma of DKA patients. Conversely, ketones and glucose were higher in plasma than in CSF. However, CSF and plasma osmolalities were similar, indicating that the blood-brain barrier provided greater protection against acidosis for the brain(185). Furthermore, regression analysis of the levels of lactate, ketones, pH, bicarbonate, and glucose showed no significant difference in the two groups with regard to slopes of these variables during recovery from DKA. It was therefore concluded that administration of bicarbonate in DKA patients (with pH of 6.9-7.14) provided no measurable advantage either biochemically or clinically(181,185). However, because there were very few patients in the subclass with an admission pH of 6.9-7.0, additional studies are needed at this level of acidosis. No prospective randomized studies concerning the use of bicarbonate in DKA with arterial pH values <6.9 have been reported. In the absence of such studies, bicarbonate therapy in patients with pH <7.0 seems prudent. As outlined in Fig. 4, a pH of 6.9-7.0 warrants a dose of 50 mmol of intravenous bicarbonate; a larger dose is recommended for a venous pH of <6.9 because of the increased severity of acidosis. Bicarbonate should be administered as an isotonic solution, which can be prepared by addition of one ampoule of 7.5% NaHCO3 solution (50 mmol HCO3-) to 250 ml sterile H2O. Add 15 mEq of KCl for each ampoule of bicarbonate administered (if serum potassium is <5.5 mEq/l).
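As a check on the ampoule arithmetic, a short calculation of our own (the ~56-ml ampoule volume is inferred from the stated strength and bicarbonate content; it is not given in the text):

```python
MW_NAHCO3 = 84.0  # g/mol for sodium bicarbonate

def ampoule_mmol(percent, volume_ml):
    """mmol of HCO3- in an ampoule of the given strength (% w/v) and volume."""
    grams = percent / 100 * volume_ml
    return grams / MW_NAHCO3 * 1000

# A 7.5% ampoule containing 50 mmol HCO3- corresponds to roughly 56 ml;
# diluting it into 250 ml sterile water yields a roughly isotonic solution.
print(round(ampoule_mmol(7.5, 56)))  # ~50
```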
Regarding the use of bicarbonate in children with DKA, no prospective randomized study has been reported. Because good tissue perfusion created with the initial fluid bolus reduces the lactic acidosis of DKA and because organic acid production is reduced as the result of administered exogenous insulin, the metabolic acid load in DKA is reduced enough that it appears to be unnecessary to add buffer NaHCO3. Young people, who are at the least risk for cardiovascular failure, should not receive NaHCO3 in their rehydration fluids unless there is some clinical evidence of cardiac failure. Furthermore, in a recent retrospective study of 147 admissions of severe DKA in children with pH <7.15 (two with <6.9), the effect of bicarbonate or no bicarbonate was compared. This study concluded that there was no benefit of bicarbonate and that use of bicarbonate may be disadvantageous in severe pediatric DKA (186). There have been suggestions that administration of NaHCO3 in children with DKA may be associated with altered consciousness and headache, but no definitive causal relationship has been established. It must be stated, however, that a definitive study of the efficacy of bicarbonate versus no bicarbonate in DKA requires a larger number of patients to provide enough power for conclusive results. Until such a time, we recommend that adult patients with a pH of <6.9 receive 100 mmol isotonic bicarbonate with KCl and that those with a pH of 6.9-7.0 receive 50 mmol bicarbonate. In children, the use of bicarbonate must be based on the condition of the individual patient.
Phosphate therapy
Phosphate, along with potassium, shifts from the intracellular to the extracellular compartment in response to hyperglycemia and hyperosmolarity. Osmotic diuresis subsequently leads to enhanced urinary phosphate losses (Tables 1 and 2). Because of the shift of phosphate from the intracellular to the extracellular compartment, serum levels of phosphate at presentation with DKA or HHS are typically normal or increased(187,188). During insulin therapy, phosphate reenters the intracellular compartment, leading to mild to moderate reductions in serum phosphate concentrations. Adverse complications of hypophosphatemia are uncommon, occurring primarily in the setting of severe hypophosphatemia (phosphate <1 mg/dl).
Potential complications of severe hypophosphatemia include respiratory and skeletal muscle weakness, hemolytic anemia, and worsened cardiac systolic performance (189). Phosphate depletion may also contribute to decreased concentrations of 2,3-diphosphoglycerate, thus shifting the oxygen dissociation curve to the left and limiting tissue oxygen delivery(190). Controlled and randomized studies have not demonstrated clinical benefits from the routine use of phosphate replacement in DKA(187,188). Five days of PO4 therapy increased 2,3-diphosphoglycerate without a significant change in the oxygen dissociation curve and resulted in a significant decrease in serum ionized calcium(187). Similar studies have not been performed in patients with HHS.
Although routine phosphate replacement is unnecessary in DKA, replacement should be given to patients with serum phosphate concentrations <1.0 mg/dl and to patients with moderate hypophosphatemia and concomitant hypoxia, anemia, or cardiorespiratory compromise(189). Excessive administration of phosphate can lead to hypocalcemia with tetany and metastatic soft tissue calcifications(191). In HHS, because the duration of symptoms may be prolonged and because of comorbid conditions, the phosphate level may be lower than in DKA; therefore, it is prudent to monitor phosphate levels in these patients.
If phosphate replacement is needed, 20-30 mEq/l potassium phosphate can be added to replacement fluids and given over several hours. In such patients, because of the risk of hypocalcemia, serum calcium and phosphate levels must be monitored during phosphate infusion.
Immediate posthyperglycemic care
Low-dose insulin therapy provides a circulating insulin concentration of ∼60-100 μU/ml. However, because of the short half-life of intravenous regular insulin, sudden interruption of insulin infusion can lead to rapid lowering of insulin concentration, resulting in a relapse into DKA or HHS. Thus, numerous publications have emphasized the need for frequent monitoring during the posthyperglycemic period(6,19,44,56,137,139,192).
Patients with severe DKA and mental obtundation should be treated with continuous intravenous insulin or, if less severe, with hourly injection of subcutaneous insulin until ketoacidosis is resolved to maintain insulin levels at ∼100 μU/ml (124). Criteria for resolution of ketoacidosis include blood glucose <200 mg/dl, serum bicarbonate level ≥18 mEq/l, venous pH >7.3, and calculated anion gap ≤12 mEq/l. Once DKA is resolved, hydrating fluid is continued intravenously and subcutaneous regular insulin therapy is started every 4 h. An abrupt discontinuance of intravenous insulin coupled with a delayed onset of a subcutaneous insulin regimen may lead to worsened control; therefore, some overlap should occur between intravenous insulin therapy and initiation of the subcutaneous insulin regimen. When the patient is able to eat, a multiple daily injection schedule should be established that uses a combination of regular (short-acting) and intermediate- or long-acting insulin as needed to control plasma glucose. Patients with known diabetes may be given insulin at the dose they were receiving before the onset of DKA and further adjusted using a multiple daily injection regimen. In patients with newly diagnosed diabetes, the initial total insulin dose should be ∼0.6 U · kg-1 · day-1, divided into at least three doses in a mixed regimen including short- and long-acting insulin, until an optimal dose is established.
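The resolution criteria listed above lend themselves to a simple predicate (a sketch; the argument names are ours):

```python
def dka_resolved(glucose, hco3, venous_ph, anion_gap):
    """Resolution of ketoacidosis per the criteria above: glucose <200 mg/dl,
    bicarbonate >=18 mEq/l, venous pH >7.3, and anion gap <=12 mEq/l."""
    return glucose < 200 and hco3 >= 18 and venous_ph > 7.3 and anion_gap <= 12

print(dka_resolved(180, 19, 7.32, 11))  # True -> begin overlap with subcutaneous insulin
```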
Although serum β-hydroxybutyrate levels are usually <1.5 mmol/l at resolution of DKA, we do not recommend routine measurement of ketone levels during therapy. However, in some patients with prolonged metabolic acidosis, combined diabetic and lactic acidosis, or other mixed acid-base disorders, direct measurement of β-hydroxybutyrate levels may be helpful. During treatment of DKA, use of the nitroprusside test, which measures acetoacetate and acetone levels but not β-hydroxybutyrate, should be avoided because the fall in acetone and acetoacetate lags behind the resolution of DKA(6).
Hypoglycemia and hypokalemia
Before the advent of low-dose insulin protocols(193), these two complications were seen in as many as 25% of patients treated with large doses of insulin (122). Both complications were significantly reduced with lower-dose therapy(122). In spite of this, hypoglycemia still constitutes one of the potential complications of therapy, the incidence of which may be underreported(194). The use of dextrose-containing solutions when blood glucose reaches 250 mg/dl in DKA and a simultaneous reduction in the rate of insulin delivery should further reduce the incidence of hypoglycemia. Similarly, the addition of potassium to the hydrating solution and frequent monitoring of serum potassium during the early phases of DKA and HHS therapy should reduce the incidence of hypokalemia.
Cerebral edema
An asymptomatic increase in CSF pressure during treatment of DKA has been recognized for >25 years(195,196,197). Significant decreases in the size of the lateral ventricles, as determined by echoencephalogram, were noted in 9 out of 11 DKA patients during therapy(198,199). However, in another study, nine children in DKA were compared with regard to brain swelling before and after therapy, and it was concluded that brain swelling is usually present in DKA before treatment is begun(200). Symptomatic cerebral edema, which is extremely rare in adult HHS or DKA patients, has been reported to occur primarily in pediatric patients, particularly in those with newly diagnosed diabetes. No single factor has been identified that can be used to predict the development of cerebral edema(201,202). Lowering blood glucose in patients with HHS at a rate of 50-70 mg · dl-1 · h-1 and adding 5% dextrose to the hydrating solution when blood glucose is ∼300 mg/dl are prudent until more knowledge on the mechanism of cerebral edema is obtained(203,204). A 20-year review of cerebral edema in children with DKA from the Royal Children's Hospital in Melbourne, Australia, concluded that although no predictive factors for survival of cerebral edema were identified, protocols that use slow rates of rehydration with isotonic fluids should be recommended(205).
Several other reviews have found a correlation between the development of cerebral edema and higher rates of fluid administration, especially in the first hours of fluid resuscitation. The most current recommendation is to limit fluid administration in the first 4 h of therapy to <50 ml/kg isotonic solution(206,207).
Adult respiratory distress syndrome
A rare but potentially fatal complication of therapy is adult respiratory distress syndrome (ARDS)(208). During rehydration with fluid and electrolytes, an initially elevated colloid osmotic pressure is reduced to subnormal levels. This change is accompanied by a progressive decrease in arteriolar partial pressure of oxygen (Pao2) and an increase in alveolar-to-arteriolar oxygen (Aao2) gradient, which is usually normal at presentation in DKA(19,175,198). In a small subset of patients, this may progress to ARDS. By increasing left atrial pressure and decreasing colloid osmotic pressure, excessive crystalloid infusion favors edema formation in the lungs (even in the presence of normal cardiac function). Patients with an increased Aao2 gradient or those who have pulmonary rales on physical examination may be at an increased risk for development of this syndrome. Monitoring of Pao2 with pulse oximetry and monitoring of Aao2 gradient may assist in the management of such patients. Because crystalloid infusion may be the major factor, we advise that such patients have lower fluid intake, with addition of colloid administration for treatment of hypotension unresponsive to crystalloid replacement.
Hyperchloremic metabolic acidosis
Hyperchloremic normal anion gap metabolic acidosis is present in ∼10% of patients admitted with DKA; however, it is almost uniformly present after resolution of ketonemia(105,209,210). This acidosis has no adverse clinical effects and is gradually corrected over the subsequent 24-48 h by enhanced renal acid excretion. The severity of hyperchloremia can be exaggerated by excessive chloride administration(211) because 0.9% NaCl contains 154 mmol/l of both sodium and chloride, which is 54 mmol/l in excess of the 100 mmol/l of chloride in serum. Further causes of non-anion gap hyperchloremic acidosis include 1) loss of potential bicarbonate due to excretion of ketoanions as sodium and potassium salts; 2) decreased availability of bicarbonate in the proximal tubule, leading to greater chloride reabsorption; and 3) reduction of bicarbonate and other buffering capacity in other body compartments. In general, hyperchloremic metabolic acidosis is self-limiting with reduction of chloride load and judicious use of hydration solution(212,213). Serum bicarbonate that does not normalize with other metabolic parameters should alert the clinician to the need for more aggressive insulin therapy or further investigation.
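The chloride-load argument above is simple arithmetic (the 4-liter example is hypothetical; the two concentrations come from the text):

```python
SALINE_CL = 154  # mmol/l chloride in 0.9% NaCl
SERUM_CL = 100   # mmol/l, typical serum chloride

def excess_chloride(liters_infused):
    """Chloride (mmol) delivered in excess of the serum concentration."""
    return (SALINE_CL - SERUM_CL) * liters_infused

# Four liters of 0.9% NaCl delivers 216 mmol of chloride beyond what an equal
# volume of plasma would carry, enough to exaggerate hyperchloremic acidosis.
print(excess_chloride(4))  # 216
```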
Site of care
The process of health care reform demands cost-efficient modes of delivering optimal care. The choice of management site (intensive care unit, step-down unit, or general medical ward) therefore becomes a critical issue. Unfortunately, there are no randomized prospective studies that have evaluated the optimal site of care for either DKA or HHS. Given the lack of such studies, the decision concerning the site of care must be based on known clinical prognostic indicators and on the availability of hospital resources.
Recent studies that have emphasized the use of standardized written guidelines for therapy have demonstrated mortality rates <5% in DKA and ∼15% in HHS(6,9,10,12,13,14,15,16,214). The majority of deaths have occurred in patients >50 years of age because of concomitant life-threatening illnesses, suggesting that further major decreases in mortality rates may not be attainable based on treatment of DKA alone (9). As stated earlier, similar outcomes of treatment of DKA have been noted in both community and training hospitals, and outcomes have not been altered by whether the managing physician is a family physician, a general internist, a house officer with attending supervision, or an endocrinologist(14,15,16), so long as standard written therapeutic guidelines are followed.
The response to initial therapy, which would preferably be in the emergency ward, can be used as a guideline for choosing the most appropriate hospital site for further care. All patients with hypotension or oliguria refractory to initial rehydration and those patients with mental obtundation or coma with hyperosmolarity (effective osmolality >320 mOsm/kg H2O) should be considered for admission to step-down or intensive care units in order to receive continuous intravenous insulin therapy. In the absence of indications for hemodynamic monitoring, the majority of such patients can be managed in less expensive step-down units rather than intensive care units after the initial emergency room evaluation and care(19,215).
Options of site of care for DKA patients with less mental obtundation and no hypotension following initial rehydration are based primarily on the availability of hospital resources. Those patients who are mildly ketotic can be effectively managed on a general medical ward, assuming there are 1) sufficient nursing staff to allow frequent monitoring of vital signs and hourly administration of subcutaneous insulin and 2) on-site blood glucose monitoring equipment and rapid turn-around time for routine laboratory services. Continuous intravenous insulin therapy is not generally recommended for use in general medical wards unless appropriately trained personnel are available. DKA patients with a mild condition who are alert and able to tolerate oral intake may be treated in the emergency room and observed for a few hours before discharge.
Given the known high mortality rate of HHS, the frequent presence of serious concomitant illnesses, and the usually advanced age of HHS patients,it is reasonable that all such patients be admitted to either step-down or intensive care units.
Prevention
The two major precipitating factors in the development of DKA are inadequate insulin treatment (including noncompliance) and infection. In many cases, these events may be prevented by better access to medical care, including intensive patient education and effective communication with a health care provider during acute illnesses.
Goals in the prevention of hyperglycemic crises precipitated by either acute illness or stress have been outlined(216). These goals include controlling insulin deficiency, decreasing excess stress hormone secretion, avoiding prolonged fasting, and preventing severe dehydration. Therefore, an educational program should review sick-day management with specific information on administration of short-acting insulin, including frequency of insulin administration, blood glucose goals during illness, means to suppress fever and treat infection, and initiation of an easily digestible liquid diet containing carbohydrates and salt. Most importantly, the patient should never discontinue insulin and should solicit professional advice early in the course of the illness.
Success with such a program depends on frequent interaction between the patient and the health care provider and on the level of involvement that the patient or family member is willing to take to avoid hospitalization. The patient/family must be willing to keep an accurate record of blood glucose, urine ketones, insulin administration, temperature, respiratory and pulse rate, and body weight. Indicators for hospitalization include >5% loss of body weight, respiratory rate >36/min, intractable elevations in blood glucose, mental status change, uncontrolled fever, and unresolved nausea and vomiting. A group of investigators reported on the successful prevention of recurrent DKA (RDKA) in a pediatric population with the introduction of a hierarchical set of medical, educational, and psychosocial interventions in a lower socioeconomic group(217). Insulin omission was documented in 31 out of 44 patients (70%) with a history of RDKA and in 13 of the 44 with inadequate education (30%). After initiation of the program, the episodes of RDKA were reduced to 2.6 episodes per 100 patient-months, compared with the initial number of 25.2 episodes before the program (P < 0.0001). RDKA ceased with or without psychotherapy. The authors concluded that RDKA is causally related to a variety of social and economic problems, but its prevention requires recognition that its proximate cause in certain groups is omission of insulin. There is therefore a need for a support system to ensure adherence (217). In addition, an education program directed toward pediatricians and school educators that promoted awareness of the signs and symptoms of diabetes was shown to be effective in decreasing ketoacidosis at the onset of diabetes(218).
As previously mentioned, many of the admissions for HHS involve nursing home residents or elderly individuals who become dehydrated and are unaware of or unable to treat the increasingly dehydrated state. Better education of caregivers as well as patients regarding conditions, procedures, and medications that may worsen diabetes control, use of glucose monitoring, and signs and symptoms of new-onset diabetes could potentially decrease the incidence and severity of HHS.
Beyond the educational issues, recent reports on DKA in urban African-American type 1 diabetic patients showed that the major precipitating cause of DKA was discontinuance of insulin (67%). Reasons for stopping insulin included economic reasons (50%), lack of appetite (21%), behavioral reasons (14%), or lack of knowledge about how to manage sick days (14%)(26). Because the most common reason for interrupted insulin is economic in nature, changes in the health care delivery system and in the access patients have to care and medications may be the most effective means of preventing DKA in this population. The investigators showed that of 56 DKA admissions, only two patients tried to contact the diabetes unit for assistance(26). Similarly, a study of hyperglycemic crises in an urban black population demonstrated that socioeconomic barriers, such as a low literacy rate, limited financial resources, and limited access to health care, might explain the continuing high rates of admission for DKA in this group of patients(5).
Hospitalizations for DKA in the past two decades have increased in some areas and declined in others(3). Because repeated admissions for DKA are estimated to drain approximately one out of every two health care dollars spent on adult patients with type 1 diabetes, resources need to be redirected toward prevention by funding better access to care and educational programs that address a variety of ethnicity-related health care beliefs.
This paper was peer-reviewed, modified, and approved by the Professional Practice Committee, October 2000.
Abbreviations: Aao2, alveolar-to-arteriolar oxygen; AKA, alcoholic ketoacidosis; ARDS, adult respiratory distress syndrome; BUN, blood urea nitrogen; CPT, carnitine palmitoyl-transferase; CSF, cerebrospinal fluid; DKA, diabetic ketoacidosis; FFA, free fatty acid; HHS, hyperosmolar hyperglycemic state; IRI, immunoreactive insulin; Pao2, arteriolar partial pressure of oxygen; RDKA, recurrent DKA.
A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.
Johnson DD, Palumbo PJ, Chu C: Diabetic ketoacidosis in a community-based population. Mayo Clin Proc -88,
Faich GA, Fishbein HA, Ellis SE: The epidemiology of diabetic acidosis: a population-based study. -558,
Centers for Disease Control, Division of Diabetes Translations: Diabetes Surveillance, 1991. Washington, DC, U.S. Govt. Printing Office,
Fishbein HA, Palumbo PJ: Acute metabolic complications in diabetes. In Diabetes in America. National Diabetes Data Group, National Institutes of Health, -291 (NIH publ. no. 95-1468)
Umpierrez GE, Kelly JP, Navarrete JE, Casals MMC, Kitabchi AE: Hyperglycemic crises in urban blacks. Arch Intern Med
Kitabchi AE, Fisher JN, Murphy MB, Rumbak MJ: Diabetic ketoacidosis and the hyperglycemic hyperosmolar nonketotic state. In Joslin's Diabetes Mellitus. 13th ed. Kahn CR, Weir GC, Eds. Philadelphia, Lea & Febiger,
Javor KA, Kotsanos JG, McDonald RC, Baron AD, Kesterson JG, Tierney WM: Diabetic ketoacidosis charges relative to medical charges of adult patients with type I diabetes.
Wachtel TJ, Tetu-Mouradjian LM, Goldman DL, Ellis SE, O'Sullivan PS: Hyperosmolality and acidosis in diabetes mellitus: a three-year experience in Rhode Island. J Gen Intern Med
Carroll P, Matz R: Uncontrolled diabetes mellitus in adults: experience in treating diabetic ketoacidosis and hyperosmolar coma with low-dose insulin and uniform treatment regimen.
Hamblin PS, Topliss DJ, Chosich N, Lording DW, Stockigt JR: Deaths associated with diabetic ketoacidosis and hyperosmolar coma, 1973-1988. Med J Aust
Basu A, Close CF, Jenkins D, Krentz AJ, Nattrass M, Wright AD: Persisting mortality in diabetic ketoacidosis.
Ellemann K, Soerensen JN, Pedersen L, Edsberg B, Andersen O: Epidemiology and treatment of diabetic ketoacidosis in a community population.
Clements RS, Vourganti B: Fatal diabetic ketoacidosis: major causes and approaches to their prevention.
Huffstutter E, Hawkes J, Kitabchi AE: Low-dose insulin for treatment of diabetic ketoacidosis in a private community hospital. South Med J
Gouin PE, Gossain VV, Rovner DR: Diabetic ketoacidosis: outcome in a community hospital.
Hamburger S, Barjenbruch P, Soffer A: Treatment of diabetic ketoacidosis by internists and family physicians: a comparative study. J Fam Pract
Kitabchi AE, Sacks HS, Fisher JN: Clinical trials in diabetic ketoacidosis. In Methods in Diabetes Research. Larner J, Ed. New York, John Wiley,
Kitabchi AE, Materi R, Murphy MB: Optimal insulin delivery in diabetic ketoacidosis (DKA) and hyperglycemic hyperosmolar nonketotic coma (HHNC):
Kitabchi AE, Wall BM: Diabetic ketoacidosis. Med Clin North Am
Ennis ED, Stahl EJVB, Kreisberg RA: The hyperosmolar hyperglycemic syndrome. Diabetes Rev
Arieff AI, Carrol H: Nonketotic hyperosmolar coma with hyperglycemia: clinical features, pathophysiology, renal function, acid-base balance, plasma-cerebrospinal fluid equilibria, and the effects of therapy in 37 cases.
Morris LR, Kitabchi AE: Efficacy of low-dose insulin therapy in severely obtunded patients with diabetic ketoacidosis.
Kitabchi AE, Fisher JN: Insulin therapy of diabetic ketoacidosis: physiologic versus pharmacologic doses of insulin and their routes of administration. In Handbook of Diabetes Mellitus. Brownlee M, Ed. New York, Garland ATPM,
Kreisberg RA: Diabetic ketoacidosis: new concepts and trends in pathogenesis and treatment. Ann Intern Med
Atchley DW, Loeb RF, Richards DW, Benedict EM, Driscoll ME: A detailed study of electrolyte balances following withdrawal and reestablishment of insulin therapy.
Musey VC, Lee JK, Crawford R, Klatka MA, McAdams D, Phillips LS: Diabetes in urban African Americans: cessation of insulin therapy is the major precipitating cause of diabetic ketoacidosis.
Petzold R, Trabert C, Walther A, Schoffling K: Etiology and prognosis of diabetic coma: a retrospective study. Verh Dtsch Ges Inn Med
Soler NG, Bennett MA, FitzGerald MG, Malins JM: Intensive care in the management of diabetic ketoacidosis.
Panzram G: Epidemiology of diabetic coma. Schweiz Med Wochenschr
Berger W, Keller U, Vorster D: Mortality from diabetic coma at the Basle Cantonal Hospital during 2 consecutive observation periods 1968-1973 and 1973-1978, using conventional insulin therapy and treatment with low-dose insulin. -1824,
Umpierrez GE, Casals MMC, Gebhart SSP, Mixon PS, Clark WS, Phillips LS: Diabetic ketoacidosis in obese African-Americans.
Nosadini R, Velussi M, Fioretto P: Frequency of hypoglycaemic and hyperglycaemic-ketotic episodes during conventional and subcutaneous continuous insulin infusion therapy in IDDM. Diabet Nutr Metab
Teutsch SM, Herman WH, Dwyer DM, Lane JM: Mortality among diabetic patients using continuous subcutaneous insulin-infusion pumps.
Kitabchi AE, Fisher JN, Burghen GA, Tsiu W, Huber CT: Problems associated with continuous subcutaneous insulin infusion. Horm Metab Res Suppl
The DCCT Research Group: Implementation of treatment protocols in the Diabetes Control and Complications Trial.
Polonsky WH, Anderson BJ, Lohrer PA, Aponte JE, Jacobson AM, Cole CF: Insulin omission in women with IDDM.
Rydall AC, Rodin GM, Olmsted MP, Devenyi RG, Daneman RG: Disordered eating behavior and microvascular complications in young women with insulindependent diabetes mellitus.
Weissman JS, Gatsonis C, Epstein AM: Rates of avoidable hospitalization by insurance status in Massachusetts and Maryland.
Bondy PK, Bloom WL, Whitmer VS, Farrar BW: Studies of the role of the liver in human carbohydrate metabolism by the venous catheter technique.
Felig P, Sherwin RS, Soman V, Wahren J, Hendler R, Sacca L, Eigler N, Goldberg D, Walesky M: Hormonal interactions in the regulation of blood glucose.
Recent Prog Horm Res
Miles JM, Rizza RA, Haymond MW, Gerich JE: Effects of acute insulin deficiency on glucose and ketone body turnover in man: evidence for the primacy overproduction of glucose and ketone bodies in the genesis of diabetic ketoacidosis.
Luzi L, Barrett EJ, Groop LC, Ferrannini E, DeFronzo RA: Metabolic effects of lowdose insulin therapy on glucose metabolism in diabetic ketoacidosis.
Vaag A, Hother-Nielsen O, Skott P, Anderson P, Richter EA,Beck-Nielsen H: Effect of acute hyperglycemia on glucose metabolism in skeletal muscles in IDDM patients.
DeFronzo RA, Matsuda M, Barret E: Diabetic ketoacidosis: a combined metabolic-nephrologic approach to therapy.
Felig P, Wahren J: Influence of endogenous insulin secretion on splanchnic glucose and amino acid metabolism in man.
Foster DW, McGarry JD: The metabolic derangements and treatment of diabetic ketoacidosis.
Siperstein MD: Diabetic ketoacidosis and hyperosmolar coma.
Endocrinol Metab Clin North Am
Van der Werve G, Jeanrenaud B: Liver glycogen metabolism: an overview.
Exton JH: Mechanisms of hormonal regulation of hepatic glucose metabolism.
Hue L: Gluconeogenesis and its regulation.
Meyer C, Stumvoll M, Nadkarni V, Dostou J, Mitrakou A, Gerich J:Abnormal renal and hepatic glucose metabolism in type 2 diabetes mellitus.
Schade DS, Eaton RP: The temporal relationship between endogenously secreted stress hormone and metabolic decompensation in diabetic man.
Alberti KGMM, Christensen NJ, Iversen J, Orskov H: Role of glucagon and other hormones in development of diabetic ketoacidosis.
Gerich JE, Lorenzi M, Bier DM, Schneider V, Tsalikiane E, Karam JH,Forsham PH: Prevention of human diabetic ketoacidosis by somatostatin:evidence for an essential role of glucagon.
Muller WA, Faloona GR, Unger RH: Hyperglucagonemia in diabetic ketoacidosis: its prevalence and significance.
Kitabchi AE: Low-dose insulin therapy in diabetic ketoacidosis:fact or fiction?
Pilkis SJ, El-Maghrabi MR, Claus TH: Fructose-2,6-biphosphate in control of hepatic gluconeogenesis.
Granner D, Pilkis S: The genes of hepatic glucose metabolism.
O'Brien RM, Granner DK: PEPCK gene as model of inhibitory effects of insulin on gene transcription.
Wasserman DH, Vranic M: Interaction between insulin and counterregulatory hormones in control of substrate utilization in health and diabetes during exercise.
Jensen MD, Caruso M, Heiling V: Insulin regulation of lipolysis in nondiabetic and IDDM subjects.
Arner P, Kriegholm E, Engfeldt P, Bolinder J: Adrenergic regulation of lipolysis in situ at rest and during exercise.
McGarry JD: Lilly Lecture 1978: new perspectives in the regulation of ketogenesis.
Nurjhan N, Consoli A, Gerich J: Increased lipolysis and its consequences on gluco-neogenesis in non-insulin-dependent diabetes mellitus.
Gerich JE, Lorenzi M, Bier DM, Tsalikian E, Schneider V, Karam JH,Forsham PH: Effects of physiologic levels of glucagon and growth hormone on human carbohydrate and lipid metabolism: studies involving administration of exogenous hormone during suppression of endogenous hormone secretion with somatostatin.
Cook GA, King MT, Veech RL: Ketogenesis and malonyl coenzyme A content of isolated rat hepatocytes.
McGarry JD, Woeltje KF, Kuwajima M, Foster DW: Regulation of ketogenesis and the renaissance of carnitine palmitoyl transferase.
Zammit VA: Regulation of ketone body metabolism. a cellular perspective.
Ruderman NB, Goodman MN: Inhibition of muscle acetoacetate utilization during diabetic ketoacidosis.
Reichard GA Jr, Skutches CL, Hoeldtke RD, Owen OE: Acetone metabolism in humans during diabetic ketoacidosis.
Balasse EO, Fery F: Ketone body production and disposal: effects of fasting, diabetes, and exercise.
Nosadini R, Avogaro A, Doria A, Fioretto P, Trevisan R, Morocutti A: Ketone body metabolism: a physiological and clinical overview.
Owen OE, Block BSB, Patel M, Boden G, McDonough M, Kreulen T,Shuman CR, Reichard GA: Human splanchnic metabolism during diabetic ketoacidosis.
Miles JM, Haymond M, Nissen SL, Gerich GE: Effects of free fatty acid availability, glucagon excess and insulin deficiency on ketone body production in postabsorptive man.
Carlson MG, Snead WL, Campbell PJ: Regulation of free fatty acid metabolism by glucagon.
Beylot M, Picard S, Chambrier C, Vidal H, Laville M, Cohen R,Cotisson A, Mornes R: Effect of physiological concentrations of insulin and glucagon on the relationship between nonesterified fatty acid availability and ketone body production in humans.
Johnston DG, Gill A, Orskov H, Batstone GF, Alberti KGMM: Metabolic effects of cortisol in man: studies with somatostatin.
Goldstein RE, Wasserman DH, Reed GW, Lacy DB, Abumrad NN,Cherrington AD: The effects of acute hypercortisolemia onβ-hydroxybutyrate and metabolism during insulin deficiency.
Horm Metab Res
Moeller N, Schmitz O, Moeller J, Porksen N, Jorgensen JOL:Dose-response studies on metabolic effects of a growth hormone pulse in humans.
Moeller N, Jorgensen JOL, Schmitz O, Moller J, Christianse JS,Alberti KGMM, Orskov H: Effects of a growth hormone pulse on total and forearm substrate utilization in humans.
-E91,
Press M, Tamborlane WV, Sherwin RS: Importance of raised growth hormone levels in mediating the metabolic derangements of diabetes.
Avogaro A, Cryer PE, Bier DE: Epinephrine's ketogenic effect in humans is mediated principally by lipolysis.
-E260,
Avagaro A, Gnudi I, Valerio A, Maran A, Miola M, Opportuno A,Tiengo A, Bier DE: Effects of different plasma glucose concentrations on lipolytic and ketogenic responsiveness to epinephrine in type 1 (insulin dependent) diabetic subjects.
Connolly CC, Steiner KE, Stevenson RW, Neal DW, Williams PE,Alberti KGMM, Cherrington AD: Regulation of lipolysis and ketogenesis by norepinephrine in conscious dogs.
Keller U, Gerger PPG, Stauffacher W: Stimulatory effect of norepinephrine on ketogenesis in normal and insulin deficient humans.
Sherwin RS, Shamoon HS, Hendler R, Sacca L, Eigler N, Walesky M:Epinephrine and the regulation of glucose metabolism: effect of diabetes and hormonal interactions.
Shamoon H, Hendler R, Sherwin RS: Synergistic interactions among antiinsulin hormones in the pathogenesis of stress hyperglycemia in humans.
Kitabchi AE, Young RT, Sacks HS, Morris L: Diabetic ketoacidosis:reappraisal of therapeutic approach.
Ann Rev Med
Schade DS, Eaton RP: Pathogenesis of diabetic ketoacidosis: a reappraisal.
Waldhausl W, Kleinberger G, Korn A, Dudcza R, Bratusch-Marrain P,Nowatny P: Severe hyperglycemia: effects of rehydration on endocrine derangements and blood glucose concentration.
Cahill GF: Starvation in man.
Gerich JE, Martin MM, Recant LL: Clinical and metabolic characteristics of hyperosmolar nonketotic coma.
Lindsey CA, Faloona GR, Unger RH: Plasma glucagon in nonketotic hyperosmolar coma.
Malchoff CD, Pohl SL, Kaiser DL, Carey RA: Determinants of glucose and ketoacid concentrations in acutely hyperglycemic diabetic patients.
Chupin M, Charbonnel B, Chupin F: C-peptide blood levels in ketoacidosis and in hyperosmolar non-ketotic diabetic coma.
Acta Diabetol
Yu SS, Kitabchi AE: Biological activity of proinsulin and related polypeptides in the fat tissue.
Schade DS, Eaton RP: Dose response to insulin in man: differential effects on glucose and ketone body regulation.
Groop LC, Bonadonna RC, Del Prato S, Ratheiser K, Zyck K, DeFronzo RA: Effect of insulin on oxidative and nonoxidative pathways of glucose and FFA metabolism in NIDDM: evidence for multiple sites of insulin resistance.
Vinik A, Seftel H, Joffe BI: Metabolic findings in hyperosmolar,non-ketotic diabetic stupor.
Howard RL, Bichet DG, Shrier RW: Hypernatremic and polyuric states. In
The Kidney: Physiology and Pathophysiology
. Seldin D, Giebisch G, Eds. New York, Raven,
DeFronzo RA, Cooke CR, Andres R, Faloona GR, Davis PJ: The effect of insulin on renal handling of sodium, potassium, calcium and phosphate in man.
Wachtel TJ, Silliman RA, Lamberton P: Predisposing factors for the diabetic hyperosmolar state.
DeFronzo RA, Goldberg M, Agus ZS: The effects of glucose and insulin on renal electrolyte transport.
Castellino P, Luzi L, Haymond M, Simonson D, DeFronzo RA: Effect of insulin and plasma amino acid concentrations on leucine turnover in man.
Adrogué HJ, Wilson H, Boyd AE, Suki WN, Eknpyan G: Plasma acid-base patterns in diabetic ketoacidosis.
Halperin ML, Cheema-Dhadli S: Renal and hepatic aspects of ketoacidosis: a quantitative analysis based on energy turnover.
Sacks H, Rabkin R, Kitabchi AE: Reversible hyperinsulinuria in diabetic ketoacidosis in man.
Foster NB: The treatment of diabetic coma with insulin.
Am J Med Sci
Katsch G: Insulin be Handlung des diabetischen Koma.
Dtsch Gesundheitwes
Root HF: The use of insulin and the abuse of glucose in the treatment of diabetic coma.
Black AB, Malins JM: Diabetic ketosis: a comparison of results of orthodox and intensive methods of treatment based on 170 consecutive cases.
Smith K, Martin HE: Response of diabetic coma to various insulin dosages.
Shaw CE Jr, Hurwitz GE, Schmukler M, Brager SH, Bessman SP: A clinical and laboratory study of insulin dosage in diabetic acidosis:comparison with small and large doses.
Menzel R, Zander E, Jutzi E: Treatment of diabetic coma with low-dose injections of insulin.
Sšnksen PH, Srivastava MC, Tompkins CV, Nabarro JDN: Growth-Hormone and cortisol responses to insulin infusion in patients with diabetes mellitus.
Alberti KGMM, Hockaday TDR, Turner RC: Small doses of intramuscular insulin in the treatment of diabetic "coma."
Genuth SM: Constant intravenous insulin infusion in diabetic ketoacidosis.
Kidson W, Casey J, Kraegen E, Lazarus L: Treatment of severe diabetes mellitus by insulin infusion.
Br Med J
Semple PF, White C, Manderson WG: Continuous intravenous infusion of small doses of insulin in treatment of diabetic ketoacidosis.
Soler NG, Wright AD, FitzGerald MG, Malins JM: Comparative study of different insulin regimens in management of diabetic ketoacidosis.
Alberti KGMM: Comparison of different insulin regimens in diabetic ketoacidosis (Letter).
Kitabchi AE, Ayyagari V, Guerra SMO, Medical House Staff: The efficacy of low dose versus conventional therapy of insulin for treatment of diabetic ketoacidosis.
Ann Intern Med
Heber D, Molitch ME, Sperling MA: Lowdose continuous insulin therapy for diabetic ketoacidosis; prospective comparison with"conventional" insulin therapy.
Piters KM, Kumar D, Pei E, Bessman AN: Comparison of continuous and intermittent intravenous insulin therapies for diabetic ketoacidosis.
Edwards GA, Kohaut EC, Wehring B, Hill LL: Effectiveness of low-dose continuous intravenous insulin infusion in diabetic ketoacidosis: a prospective comparative study.
J Pediatr
Burghen GA, Etteldorf JN, Fisher JN, Kitabchi AE: Comparison of high-dose and low-dose insulin by continuous intravenous infusion in the treatment of diabetic ketoacidosis in children.
Fisher JN, Shahshahani MN, Kitabchi AE: Diabetic ketoacidosis:low-dose insulin therapy by various routes.
Sacks HS, Shahshahani M, Kitabchi AE, Fisher JN, Young RT: Similar responsiveness of diabetic ketoacidosis to low-dose insulin by intramuscular injection and albumin-free infusion.
Kitabchi AE, Burghen G: Treatment of acidosis in children and adults. In
Diabetes Mellitus and obesity
. Brodoff BN,Bleicher SH, Eds. Baltimore, MD, Williams and Wilkins,
Bratusch-Marrain PR, Komajati M, Waldhausal W: The effect of hyperosmolarity on glucose metabolism.
Pract Cardiol
Ginsburg HN: Investigation of insulin resistance during diabetic ketoacidosis: role of counterregulatory substances and effect of insulin.
Barrett EJ, DeFronzo RA, Bevilacqua S, Ferrannini E: Insulin resistance in diabetic ketoacidosis.
Rosenthal NR, Barrett EJ: An assessment of insulin action in hyperosmolar hyperglycemic nonketotic diabetic patients.
Owen OE, Licht JH, Sapir DG: Renal function and effects of partial rehydration during diabetic ketoacidosis.
West ML, Marsden PA, Singer GG, Halperin ML: Quantitative analysis of glucose loss during acute therapy for hyperglycemic, hyperosmolar syndrome.
Blazar BR, Whitley CB, Kitabchi AE, Tsai MY, Santiago J, White N,Stentz FB, Brown DM: In vivo chloroquine-induced inhibition of insulin degradation in a diabetic patient with severe insulin resistance.
Marshall SM, Alberti KGGM: Diabetic ketoacidosis.
Diabetes Ann
Kitabchi AE, Murphy MB: Diabetic ketoacidosis and hyperosmolar hyperglycemic nonketotic coma.
Ennis ED, Stahl EJ, Kreisberg RA: Diabetic ketoacidosis. In
Diabetes Mellitus: Theory and Practice
. 5th ed. Porte D Jr, Sherwin RS, Eds. Amsterdam, Elsevier,
Malone ML, Gennis V, Goodwin JS: Characteristics of diabetic ketoacidosis in older versus younger adults.
J Am Geriatr Soc
Lober D: Nonketotic hypertonicity in diabetes mellitus.
Guisado R, Arieff AI: Neurologic manifestations of diabetic comas:correlation with biochemical alterations in the brain.
Maccario M: Neurological dysfunction associated with nonketotic hyperglycemia.
Arch Neurol
Harden CL, Rosenbaum DH, Daras M: Hyperglycemia presenting with occipital seizures.
Umpierrez GE, Khajavi M, Kitabchi AE: Diabetic ketoacidosis and hyperglycemic hyperosmolar nonketotic syndrome.
Winter MD, Pearson R, Gabow PA, Schultz AL, Lepoff RB: The fall of the serum anion gap.
Sadjadi SA: A new range for the anion gap (Letter).
Stephens JM, Sulway MJ, Watkins PJ: Relationship of blood acetoacetate and β-hydroxybutyrate in diabetes.
Koch DD, Feldbruegge DH: Optimized kinetic method for automated determination of β-hydroxybutyrate.
Clin Res
Umpierrez GE, Watts NB, Phillips LS: Clinical utility ofβ-hydroxybutyrate determined by reflectance meter in the management of diabetic ketoacidosis.
Matz R: Hyperosmolar nonacidotic diabetes (HNAD). In
Paulson WD, Gadallah MF: Diagnosis of mixed acid-base disorders in diabetic ketoacidosis.
Katz MA: Hyperglycemia-induced hyponatremia: calculation of expected sodium depression.
Rumbak MJ, Hughes TA, Kitabchi AE: Pseudonormoglycaemia in diabetic ketoacidosis with elevated triglycerides.
Am J Emerg Med
Kaminska ES, Pourmotabbed G: Spurious laboratory values in diabetic ketoacidosis and hyperlipidaemia.
Assadi FK, John EG, Fornell L, Rosenthal IM: Falsely elevated serum creatinine concentration in ketoacidosis.
Gerard SK, Khayam-Bashi H: Characterization of creatinine error in ketotic patients: a prospective comparison of alkaline picrate methods with an enzymatic method.
Am J Clin Pathol
Vinicor F, Lehrner LM, Karn RC, Merritt AD: Hyperamylasemia in diabetic ketoacidosis: sources and significance.
Morris LR, McGee JA, Kitabchi AE: Correlation between plasma and urine glucose in diabetes.
Malone JI, Rosenbloom AL, Gracia A, Weber F: The role of urine sugar in diabetic management.
Am J Dis Child
Fulop M, Ben-Ezra J, Bock J: Alcoholic ketosis.
Halperin ML, Hammeke M, Josse RG, Jungas RL: Metabolic acidosis in the alcoholic: a pathophysiologic approach.
Krebs HT: The effects of ethanol on the metabolic activities of the liver.
Adv Enzyme Regul
Lefèvre A, Adler H, Lieber S:Effect of ethanol on ketone metabolism.
Kreisberg RA: Acid-base and electrolyte disturbances in the alcoholic.
Probl Crit Care
Marliss EB, Ohman JL Jr, Aoki TT, Kozak GP: Altered redox state obscuring ketoacidosis in diabetic patients with lactic acidosis.
Morris LE, Kitabchi AE: Coma in the diabetic. In
Diabetes Mellitus: Problems in Management
. Schnatz JD,Ed. Menlo Park, CA, Addison-Wesley,
Pinies JA, Cairo G, Gaztambide S, Vazquez JA: Course and prognosis of 132 patients with diabetic nonketotic hyperosmolar state.
Butler AM, Talbot NB, Curnett CH: Metabolic studies in diabetic coma.
Trans Assoc Am Physicians
Martin HE, Smith K, Wilson ML: The fluid and electrolyte therapy of severe diabetic acidosis and ketosis: a study of twenty-nine episodes(twenty-six patients).
Nabarro JDN, Spencer AG, Stowers JM: Metabolic studies in severe diabetic ketosis.
Q J Med
Feig PU, McCurdy DK: The hypertonic state.
Fulop M, Rosenblatt A, Kreitzer SM, Gerstenhabner B: Hyperosmolar nature of diabetic coma.
Fulop M, Tannenbaum H, Dreyer N: Ketotic hyperosmolar coma.
Hillman K: Fluid resuscitation in diabetic emergencies: our appraisal.
Intensive Care Med
-8,
Arieff AI: Cerebral edema complicating nonketotic hyperosmolar coma.
Miner Electrolyte Metab
Adrogué HJ, Lederer ED, Suki WN,Eknoyan G: Determinants of plasma potassium levels in diabetic ketoacidosis.
Beigelman PM: Severe diabetic ketoacidosis (diabetic coma): 482 episodes in 257 patients: experience of three years.
Abramson E, Arky R: Diabetic acidosis with initial hypokalemia:therapeutic implications.
Beigelman PM: Potassium in severe diabetic ketoacidosis(Editorial).
Barnes HV, Cohen RD, Kitabchi AE, Murphy MB, Gitnick G, Barnes HV,Duffy TP, Lewis TP, Winterbauer RH: When is bicarbonate appropriate in treating metabolic acidosis including diabetic ketoacidosis? In
Debates in Medicine
. Gitnick G, Barnes HV, Duffy TP,Gitnick G, Barnes HV, Duffy TP, Lewis RP, Winterbauer RH, Eds. Chicago,Yearbook,
Lever E, Jaspan JB: Sodium bicarbonate therapy in severe diabetic ketoacidosis.
Matz R: Diabetic acidosis: rationale for not using bicarbonate.
N Y State J Med
Hale PJ, Crase J, Nattrass M: Metabolic effects of bicarbonate in the treatment of diabetic ketoacidosis.
Br Med Bull
Morris LR, Murphy MB, Kitabchi AE: Bicarbonate therapy in severe diabetic ketoacidosis.
Green SM, Rothrock SG, Ho JD, Gallant RD, Borger R, Thomas TL,Zimmerman GJ: Failure of adjunctive bicarbonate to improve outcome in severe pediatric diabetic ketoacidosis.
Ann Emerg Med
Fisher JN, Kitabchi AE: A randomized study of phosphate therapy in the treatment of diabetic ketoacidosis.
Wilson HK, Keuer SP, Lea AS, Boyd AE, Eknoyan G: Phosphate therapy in diabetic ketoacidosis.
Kreisberg RA: Phosphorus deficiency and hypophosphatemia.
Hosp Pract
Gibby OM, Veale KEA, Hayes TM, Jones JG, Wardrop CAJ: Oxygen availability from the blood and the effect of phosphate replacement on erythrocyte 2-3 diphosphoglycerate and hemoglobin-oxygen affinity in diabetic ketoacidosis.
Zipf WB, Bacon GF, Spencer ML, Kelch RP, Hopwood NJ, Hawker CD:Hypocalcemia, hypomagnesemia, and transient hypoparathyroidism during therapy with potassium phosphate in diabetic ketoacidosis.
Marshall SM, Alberti KGMM: Management of hyperglycemic emergencies.
Proc R Coll Physician Edinb
Marshall SM, Walker M, Alberti KGMM: Diabetic ketoacidosis and hyperglycaemic non-ketotic coma. In
International Text-book of Diabetes Mellitus
. 2nd ed. Alberti KGMM, Zimmet P, DeFronzo RA,Eds. New York, Wiley,
Malone ML, Klos SE, Gennis VM, Goodwin JS: Frequent hypoglycaemic episodes in the treatment of patients with diabetic ketoacidosis.
Lucas CP, Grant N, Baily WJ, Reaven GM: Diabetic coma without ketoacidosis.
Clements RS Jr, Blumenthal SA, Morrison AD, Winegrad AI: Increased cerebrospinal-fluid pressure during treatment of diabetic ketosis.
-661:
Sament S, Schwartz MD: Severe diabetic stupor without ketosis.
S Afr Med J
Fein IA, Rackow EC, Sprung CL, Grodman R: Relation of colloid osmotic pressure to arterial hypoxemia and cerebral edema during crystalloid volume loading of patients with diabetic ketoacidosis.
Krane EJ, Rockoff MA, Wallman JK, Wolfsdorf JI: Subclinical brain swelling in children during treatment of diabetic ketoacidosis.
Hoffman WH, Steinhart CM, Gammal TE, Steele S, Cyadrado AR, Morse PK: Cranial CT in children and adolescents with diabetic ketoacidosis.
AJNR
Duck SC, Wyatt DT: Factors associated with brain herniation in the treatment of diabetic ketoacidosis.
Rosenbloom AL: Intracerebral crises during treatment of diabetic ketoacidosis.
Harris GD, Fiordalisi I, Harris WL, Mosovich LL, Finberg L:Minimizing the risk of brain herniation during treatment of diabetic ketoacidemia: a retrospective and prospective study.
Ellis EN: Concepts of fluid therapy in diabetic ketoacidosis and hyperosmolar hyperglycemic nonketotic coma.
Pediatr Clin North Am
Mel JM, Werther GA: Incidence and outcome of diabetic cerebral edema in childhood: are there predictors?
J Paediatr Child Health
Finberg L: Why do patients with diabetic ketoacidosis have cerebral swelling, and why does treatment sometimes make it worse?
Pediatr Adolescent Med
Mahoney CP, Vlcek BW, DelAguila M: Risk factors for developing brain herniation during diabetic ketoacidosis.
Pediatr Neurol
Carrol P, Matz R: Adult respiratory distress syndrome complicating severely uncontrolled diabetes mellitus: report of nine cases and a review of the literature.
Oh MS, Banerji MA, Carrol HJ: The mechanism of hyperchloremic acidosis during the recovery phase of diabetic ketoacidosis.
Oh MS, Carroll HJ, Uribarri J: Mechanism of normochloremic and hyperchloremic acidosis in diabetic ketoacidosis.
Nephron
Adrogué HJ, Eknoyan G, Suki NN:Diabetic ketoacidosis: role of the kidney in the acid-base homeostasis reevaluation.
Kidney Int
Adrogué HJ, Barrero J, Eknoyan G:Salutary effects of modes fluid replacement in the treatment of adults with diabetic ketoacidosis: use in patients without extreme volume deficit.
Wall BM, Jones GV, Kaminska E, Fisher JN, Kitabchi AE, Cooke CR:Causes of hyperchloremic acidosis during treatment of diabetic ketoacidosis(Abstract).
May ME, Young C, King J: Resource utilization in treatment of diabetic ketoacidosis in adults.
Bonadio WA: Pediatric diabetic ketoacidosis: pathophysiology and potential for outpatient management of selected children.
Pediatr Emerg Care
Schade DS, Eaton RP: Diabetic ketoacidosis: pathogenesis,prevention and therapy.
Clin Endocrinol Metab
Golden MP, Herrold AJ, Orr DP: An approach to prevention of recurrent diabetic ketoacidosis in the pediatric population.
Vanelli M, Chiari G, Ghizzoni L, Costi G, Giacalone T, Chiarelli F:Effectiveness of a prevention program for diabetic ketoacidosis in children.
Zadik Z, Kayne R, Kappy M, Plotnick LP, Kowarski AA: Increased integrated concentration of norepinephrine, aldosterone, and growth hormone in patients with uncontrolled juvenile diabetes mellitus.
Bjellerup P, Kaliner A, Kollind M: GLC determination of serum-ethylene glycol interferences in ketotic patients.
J Toxicol Clin Toxicol
by the American Diabetes Association,Inc. | CommonCrawl |
Generalized Jordan triple derivations associated with Hochschild 2–cocycles of rings
O. H. Ezzat and H. Nabiel
Journal of the Egyptian Mathematical Society, volume 27, Article number 4 (2019)
In the present work, we introduce the notion of a generalized Jordan triple derivation associated with a Hochschild 2–cocycle, and we prove results which imply under some conditions that every generalized Jordan triple derivation associated with a Hochschild 2–cocycle of a prime ring with characteristic different from 2 is a generalized derivation associated with a Hochschild 2–cocycle.
Let R denote an associative ring with center Z(R). A ring R is said to have characteristic n if n is the least positive integer such that nx=0 for all x∈R, and R is said to be n-torsion free if nx=0, x∈R, implies x=0. An additive subgroup L of R is called a Lie ideal of R if [u,r]∈L for all u∈L, r∈R. A Lie ideal L is said to be a square-closed Lie ideal of R if u²∈L for all u∈L. An R-bimodule M is a left and right R-module such that x(my)=(xm)y for all m∈M and x,y∈R. Recall that a ring R is called prime if xRy=(0) implies that either x=0 or y=0, and R is called semiprime if xRx=(0) implies x=0. An additive mapping d:R→R is called a derivation if d(xy)=d(x)y+xd(y) for all x,y∈R, and d is called a Jordan derivation in case d(x²)=d(x)x+xd(x) for all x∈R. Moreover, d is called a Jordan triple derivation if d(xyx)=d(x)yx+xd(y)x+xyd(x) for all x,y∈R. Every derivation is clearly both a Jordan derivation and a Jordan triple derivation, but the converse is in general not true. A classical result of Herstein [1] asserts that any Jordan derivation of a prime ring with characteristic different from 2 is a derivation. In [2], Brešar has proved Herstein's result in the case of a semiprime ring. Also, he has shown in [3] that any Jordan triple derivation of a 2-torsion free semiprime ring is a derivation. An additive map f of a ring R is called a generalized derivation if there is a derivation d of R such that f(xy)=f(x)y+xd(y) for all x,y∈R, and is called a generalized Jordan derivation if there is a Jordan derivation d such that f(x²)=f(x)x+xd(x) for all x∈R. Furthermore, f is said to be a generalized Jordan triple derivation if there is a Jordan triple derivation d of R such that f(xyx)=f(x)yx+xd(y)x+xyd(x) for all x,y∈R. In [4], Jing and Lu have proved that, in a prime ring R of characteristic not two, every generalized Jordan derivation of R is a generalized derivation, and that every generalized Jordan triple derivation on R is a generalized derivation.
Let θ and ϕ be endomorphisms of a ring R. f is called a (θ,ϕ)-derivation if f(xy)=f(x)θ(y)+ϕ(x)f(y) for all x,y∈R, a Jordan (θ,ϕ)-derivation if f(x²)=f(x)θ(x)+ϕ(x)f(x) for all x∈R, and a Jordan triple (θ,ϕ)-derivation if f(xyx)=f(x)θ(y)θ(x)+ϕ(x)f(y)θ(x)+ϕ(x)ϕ(y)f(x) for all x,y∈R. In [5], Liu and Shiue have proved that every Jordan triple (θ,ϕ)-derivation on a 2-torsion free semiprime ring R is a (θ,ϕ)-derivation, where θ and ϕ are automorphisms. An additive mapping f:R→R is said to be a left (right) centralizer if f(xy)=f(x)y (resp. f(xy)=xf(y)) for all x,y∈R, and f is called a centralizer if f is both a left and a right centralizer. In [6], Vukman and Kosi-Ulbl have shown that if R is a 2-torsion free semiprime ring and f is an additive mapping of R such that 2f(xyx)=f(x)yx+xyf(x) for all x,y∈R, then f is a centralizer.
An additive mapping f:R→R is said to be a left (right) θ-centralizer associated with a function θ of R if f(xy)=f(x)θ(y) (resp. f(xy)=θ(x)f(y)) for all x,y∈R, and f is called a θ-centralizer if f is both a left and a right θ-centralizer. Daif, El-Sayiad, and Muthana in [7] have proved that if R is a 2-torsion free semiprime ring and f is an additive mapping of R such that 2f(xyx)=f(x)θ(yx)+θ(xy)f(x) for all x,y∈R with θ(Z(R))=Z(R), where θ is a nonzero surjective endomorphism on R, then f is a θ-centralizer.
Now let R be a ring and M be an R-bimodule. A biadditive map α:R×R→M is called a Hochschild 2–cocycle, if xα(y,z)−α(xy, z)+α(x,yz)−α(x, y)z=0 for all x,y,z∈R, and α is called symmetric if α(x,y)=α(y,x) for all x,y∈R. Nakajima [8] has introduced a new type of generalized derivations and generalized Jordan derivations associated with Hochschild 2–cocycles in the following way. An additive map f:R→M is called a generalized derivation associated with a Hochschild 2–cocycle α if f(xy)=f(x)y+xf(y)+α(x,y) for all x,y∈R, and f is called a generalized Jordan derivation associated with α if f(x2)=f(x)x+xf(x)+α(x,x) for all x∈R. If α=0, then f means the usual derivation and Jordan derivation. He has given the following examples:
(1) If f is a generalized derivation associated with a derivation d, then the map α1:R×R∋(x,y)↦x(d−f)(y)∈M is biadditive and satisfies the 2–cocycle condition. Hence, f is a generalized derivation associated with α1.
(2) If f:R→M is a left centralizer, then by f(xy)=f(x)y+xf(y)+x(−f)(y), we have a 2–cocycle α2:R×R→M defined by, α2(x,y)=x(−f)(y), and hence, f is a generalized derivation associated with α2.
(3) Let f be a (θ,ϕ)−derivation. Then, the map α3:R×R∋(x,y)↦f(x)(θ(y)−y)+(ϕ(x)−x)f(y)∈M, is biadditive and satisfies the 2–cocycle condition. Since f(xy)=f(x)y+xf(y)+α3(x,y), then f is a generalized derivation associated with α3.
(4) In general, he has mentioned the following. Let f:R→M be an additive map and let α:R×R→M be a biadditive map. If f(xy)=f(x)y+xf(y)+α(x,y) holds, then by the associativity f((xy)z)=f(x(yz)), α satisfies the 2–cocycle condition. Thus f is a generalized derivation associated with α.
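As a quick check of example (2), which we spell out here for completeness: if f is a left centralizer, so that f(yz)=f(y)z, then α2(x,y)=x(−f)(y) satisfies

$$\begin{aligned} x\alpha_{2}(y,z)-\alpha_{2}(xy,z)+\alpha_{2}(x,yz)-\alpha_{2}(x,y)z &=-xyf(z)+xyf(z)-xf(yz)+xf(y)z\\ &=-xf(y)z+xf(y)z=0, \end{aligned}$$

which is exactly the 2–cocycle condition.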
In his work, Nakajima [8] has shown the following result. Let R be a 2-torsion free ring. Then, every generalized Jordan derivation associated with a Hochschild 2–cocycle α is a generalized derivation associated with α in each of the following cases:
R is a noncommutative prime ring.
There exist x,y∈R such that [x,y] is a nonzero divisor.
R is commutative and α is symmetric.
Nawzad, et al. [9] have shown the following. Let R be a 2-torsion free ring. Then, every generalized Jordan derivation associated with a Hochschild 2–cocycle α is a generalized derivation associated with α in each of the following cases:
R is a noncommutative semiprime ring and α is symmetric.
R is commutative.
In [10], Rehman and Hongan have proved the following result. Let R be a 2-torsion free ring and L a square-closed Lie ideal of R. Then, every generalized Jordan derivation associated with a Hochschild 2–cocycle α is a generalized derivation associated with α in each of the following cases.
R is a prime ring and L is noncommutative.
R is a prime ring, L is commutative and α is symmetric.
There exist x,y∈R such that [x,y] is a nonzero divisor in L.
In the present article, we introduce the notion of generalized Jordan triple derivations associated with Hochschild 2–cocycles in the following way. Let R be a ring and let M be an R-bimodule. An additive map f:R→M is called a generalized Jordan triple derivation associated with a Hochschild 2–cocycle α if f(xyx)=f(x)yx+xf(y)x+α(x,y)x+xyf(x)+α(xy,x) for all x,y∈R.
Examples (i) If f is a Jordan triple derivation, then the zero map α1 is biadditive and satisfies the 2–cocycle condition. Therefore f is a generalized Jordan triple derivation associated with α1.
(ii) If f is a generalized Jordan triple derivation associated with a Jordan triple derivation d, then α2(x,y)=x(d−f)(y) is biadditive and satisfies the 2–cocycle condition and we can see that f(xyx)=f(x)yx+xf(y)x+α2(x,y)x+xyf(x)+α2(xy,x). Hence f is a generalized Jordan triple derivation associated with α2.
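For the record, example (ii) collapses to the classical identity: with α2(x,y)=x(d−f)(y),

$$\begin{aligned} f(x)yx+xf(y)x+\alpha_{2}(x,y)x+xyf(x)+\alpha_{2}(xy,x) &=f(x)yx+xf(y)x+x(d-f)(y)x\\ &\quad+xyf(x)+xy(d-f)(x)\\ &=f(x)yx+xd(y)x+xyd(x), \end{aligned}$$

which equals f(xyx) precisely because f is a generalized Jordan triple derivation associated with the Jordan triple derivation d.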
Our aim in this work is to show that every generalized Jordan triple derivation associated with a Hochschild 2–cocycle α from a prime ring R with characteristic different from 2 to an R-bimodule M is a generalized derivation associated with α.
The proof of our result is based on the following series of auxiliary lemmas.
Lemma 1. Let f be a generalized Jordan triple derivation from a ring R to an R-bimodule M associated with a Hochschild 2–cocycle map α from R×R into M. Then, for all x,y,z∈R, f(xyz+zyx)=f(x)yz+xf(y)z+α(x,y)z+xyf(z)+α(xy,z)+f(z)yx+zf(y)x+α(z,y)x+zyf(x)+α(zy,x).
Let v=f((x+z)y(x+z)). Then, for all x,y,z∈R,
$$\begin{aligned} 0&=v-v\\ &=f(xyx)+f(xyz+zyx)+f(zyz)-\{f(x+z)y(x+z)\\ &\quad+(x+z)f(y)(x+z)+\alpha(x+z,y)(x+z)+(x+z)yf(x+z)\\ &\quad+\alpha((x+z)y,(x+z))\}. \end{aligned}$$

Expanding by the additivity of f and the biadditivity of α, this becomes

$$\begin{aligned} 0&=f(xyx)+f(xyz+zyx)+f(zyz)-\{f(x)yx+f(x)yz+f(z)yx\\ &\quad+f(z)yz+xf(y)x+xf(y)z+zf(y)x+zf(y)z+\alpha(x,y)x+\alpha(x,y)z\\ &\quad+\alpha(z,y)x+\alpha(z,y)z+xyf(x)+xyf(z)+zyf(x)+zyf(z)+\alpha(xy,x)\\ &\quad+\alpha(xy,z)+\alpha(zy,x)+\alpha(zy,z)\}\quad\text{for all } x,y,z\in R, \end{aligned}$$

and hence

$$\begin{aligned} f(xyz+zyx)&=f(x)yz+xf(y)z+\alpha(x,y)z+xyf(z)+\alpha(xy,z)\\ &\quad+f(z)yx+zf(y)x+\alpha(z,y)x+zyf(x)+\alpha(zy,x)\quad\text{for all } x,y,z\in R, \qquad (1) \end{aligned}$$
as required. □
For a generalized Jordan triple derivation f from a ring R to an R-bimodule M associated with a Hochschild 2–cocycle α, we denote by δ, F, and β the maps from R×R×R into M defined by δ(x,y,z)=f(xyz)−f(x)yz−xf(y)z−α(x,y)z−xyf(z)−α(xy,z), F(x,y,z)=f(xyz)−f(x)yz−xf(y)z−xyf(z), and β(x,y,z)=xyz−zyx, respectively. Thus, δ(x,y,z)=F(x,y,z)−α(x,y)z−α(xy,z).
Lemma 2. For all x,y,z in a ring R, the following hold:
(i) δ(x,y,z)=−δ(z,y,x), and
(ii) δ(x,y,z) and β(x,y,z) are tri-additive.
(i) Follows easily from Lemma 1.
(ii) Replace x by a+b in the definition of δ, then (ii) is easily seen. □
Lemma 3. For any ring R and any a,b,c,x∈R,
δ(a,b,c)xβ(a,b,c)+β(a,b,c)xδ(a,b,c)=0.
Let v=f(abcxcba+cbaxabc); then 0=v−v=f((abc)x(cba)+(cba)x(abc))−f(a(bcxcb)a+c(baxab)c). By the definition of the generalized Jordan triple derivation f associated with the Hochschild 2–cocycle α and by Lemma 1, we get
$$\begin{aligned} 0&=f(abc)xcba+abcf(x)cba+\alpha(abc,x)cba+abcxf(cba)\\ &+\alpha(abcx,cba)+f(cba)xabc+cbaf(x)abc+\alpha(cba,x)abc\\ &+cbaxf(abc)+\alpha(cbax,abc)-\{f(a)bcxcba+af(b)cxcba\\ &+abf(c)xcba+abcf(x)cba+ab\alpha(c,x)cba+abcxf(c)ba\\ &+ab\alpha(cx,c)ba+a\alpha(b,cxc)ba+abcxcf(b)a+a\alpha(bcxc,b)a\\ &+\alpha(a,bcxcb)a+abcxcbf(a)+\alpha(abcxcb,a)+f(c)baxabc\\ &+cf(b)axabc+cbf(a)xabc+cbaf(x)abc+cb\alpha(a,x)abc\\ &+cbaxf(a)bc+cb\alpha(ax,a)bc+c\alpha(b,axa)bc+cbaxaf(b)c\\ &+c\alpha(baxa,b)c+\alpha(c,baxab)c+cbaxabf(c)+\alpha(cbaxab,c)\}. \end{aligned}$$
Therefore, for all a,b,c,x∈R
$$\begin{aligned} 0&=F(a,b,c)xcba+abcxF(c,b,a)\\ &+\{\alpha(abc,x)-ab\alpha(c,x)\}cba+\{\alpha(abcx,cba)-\alpha(abcxcb,a)\}\\ &-\{ab\alpha(cx,c)ba+a\alpha(b,cxc)ba+a\alpha(bcxc,b)a+\alpha(a,bcxcb)a\}\\ &+F(c,b,a)xabc+cbaxF(a,b,c)\\ &+\{\alpha(cba,x)-cb\alpha(a,x)\}abc+\{\alpha(cbax,abc)-\alpha(cbaxab,c)\}\\ &-\{cb\alpha(ax,a)bc+c\alpha(b,axa)bc+c\alpha(baxa,b)c+\alpha(c,baxab)c\}. \qquad (2) \end{aligned}$$
Since α is a 2–cocycle map, we obtain the following relations for all a,b,c,x∈R:
(i) {α((ab)c,x)−(ab)α(c,x)}cba={α(ab,cx)−α(ab,c)x}cba.
(ii) α(abcx,(cb)a)−α((abcx)(cb),a)=α(abcx,cb)a−(abcx)α(cb,a).
(iii) {α((cb)a,x)−(cb)α(a,x)}abc={α(cb,ax)−α(cb,a)x}abc.
(iv) α(cbax,(ab)c)−α((cbax)(ab),c)=α(cbax,ab)c−(cbax)α(ab,c).
Substituting from (i–iv) in (2), we get for all a,b,c,x∈R
$$\begin{aligned} 0&=F(a,b,c)xcba+abcxF(c,b,a)\\ &+\{\alpha(ab,cx)-\alpha(ab,c)x\}cba+\{\alpha(abcx,cb)a-abcx\alpha(cb,a)\}\\ &-\{ab\alpha(cx,c)ba+a\alpha(b,cxc)ba+a\alpha(bcxc,b)a+\alpha(a,bcxcb)a\}\\ &+F(c,b,a)xabc+cbaxF(a,b,c)\\ &+\{\alpha(cb,ax)-\alpha(cb,a)x\}abc+\{\alpha(cbax,ab)c-cbax\alpha(ab,c)\}\\ &-\{cb\alpha(ax,a)bc+c\alpha(b,axa)bc+c\alpha(baxa,b)c+\alpha(c,baxab)c\}. \qquad (3) \end{aligned}$$
Since α is a 2–cocycle map, we conclude for all a,b,c,x∈R that
(i) α(ab,cx)=aα(b,cx)+α(a,b(cx))−α(a,b)(cx).
(ii) α(abcx,cb)a={−(abcx)α(c,b)+α((abcx)c,b)+α(abcx,c)b}a.
(iii) α(cb,ax)=cα(b,ax)+α(c,b(ax))−α(c,b)(ax).
(iv) α(cbax,ab)c={−(cbax)α(a,b)+α((cbax)a,b)+α(cbax,a)b}c.
Substituting from (i–iv) in (3), we obtain
$$\begin{aligned} 0&=\{F(a,b,c)-\alpha(ab,c)-\alpha(a,b)c\}xcba+abcx\{F(c,b,a)-\alpha(cb,a)\\ &-\alpha(c,b)a\}+\{a\alpha(b,cx)cba-ab\alpha(cx,c)ba-a\alpha(b,cxc)ba\}\\ &+\{\alpha(abcxc,b)a-a\alpha(bcxc,b)a-\alpha(a,bcxcb)a\}\\ &+\alpha(a,bcx)cba+\alpha(abcx,c)ba+\{F(c,b,a)-\alpha(cb,a)\\ &-\alpha(c,b)a\}xabc+cbax\{F(a,b,c)-\alpha(ab,c)-\alpha(a,b)c\}\\ &+\{c\alpha(b,ax)abc-cb\alpha(ax,a)bc-c\alpha(b,axa)bc\}\\ &+\{\alpha(cbaxa,b)c-c\alpha(baxa,b)c-\alpha(c,baxab)c\}\\ &+\alpha(c,bax)abc+\alpha(cbax,a)bc \quad\text{for all } a,b,c,x\in R. \qquad (4) \end{aligned}$$
Again since α is a 2–cocycle map, we have
(i) a{α(b,cx)c−bα(cx,c)−α(b,(cx)c)}ba=−aα(b(cx),c)ba.
(ii) {α(a(bcxc),b)−aα(bcxc,b)−α(a,(bcxc)b)}a=−α(a,bcxc)ba.
(iii) c{α(b,ax)a−bα(ax,a)−α(b,(ax)a)}bc=−cα(b(ax),a)bc.
(iv) {α(c(baxa),b)−cα(baxa,b)−α(c,(baxa)b)}c=−α(c,baxa)bc.
Replacing (i–iv) into (4), we get, for all a,b,c,x∈R
$$\begin{aligned} 0&=\delta(a,b,c)xcba+abcx\delta(c,b,a)-a\alpha(bcx,c)ba-\alpha(a,bcxc)ba\\ &+\alpha(a,bcx)cba+\alpha(abcx,c)ba+\delta(c,b,a)xabc+cbax\delta(a,b,c)\\ &-c\alpha(bax,a)bc-\alpha(c,baxa)bc+\alpha(c,bax)abc+\alpha(cbax,a)bc. \qquad (5) \end{aligned}$$
Continuing in this manner, we obtain
{−aα(bcx,c)−α(a,(bcx)c)+α(a,bcx)c+α(a(bcx),c)}ba=0.
{−cα(bax,a)−α(c,(bax)a)+α(c,bax)a+α(c(bax),a)}bc=0.
By (5), we conclude that 0=δ(a,b,c)xcba+abcxδ(c,b,a)+δ(c,b,a)xabc+cbaxδ(a,b,c) for all a,b,c,x∈R. By Lemma 2, we obtain 0=δ(a,b,c)xcba−abcxδ(a,b,c)−δ(a,b,c)xabc+cbaxδ(a,b,c) for all a,b,c,x∈R.
Therefore, δ(a,b,c)xβ(a,b,c)+β(a,b,c)xδ(a,b,c)=0 for all a,b,c,x∈R. This finishes the proof of the lemma. □
Lemma 4. If R is a prime ring of characteristic not 2, then δ(a,b,c)xβ(a,b,c)=0 for all a,b,c,x∈R.
By Lemma 3 and Lemma 1.1 of Brešar [3], we get the proof. □
Lemma 5. If R is a prime ring of characteristic not 2, then
δ(a1,b1,c1)xβ(a2,b2,c2)=0 for all a1,b1,c1,a2,b2,c2,x∈R.
From Lemma 2(ii), Lemma 4, and Lemma 1.2 of Brešar [3], we get the proof. □
Lemma 6. Let R be a prime ring. Then, R is commutative if and only if β(a,b,c)=0 for all a,b,c∈R.
If R is commutative, then, by definition of β,β(a,b,c)=0 for all a,b,c∈R. Conversely, assume that β(a,b,c)=0 for all a,b,c∈R. Let Q be the Martindale right ring of quotients of R defined by Martindale [11]. Then Q is a prime ring with identity that contains the ring R. By Chuang [12], Q satisfies the same generalized polynomial identities as R. In particular abc−cba=0 for all a,b,c∈Q. Replacing c by the identity of Q yields the commutativity of Q, and hence R. □
Lemma 7. Let R be a prime ring of characteristic not 2. Then δ(a,b,c)=0 for all a,b,c∈R, in each of the following cases:
(i) R is noncommutative.
(ii) There exist x,y,z∈R such that β(x,y,z) is a nonzero divisor in M.
(iii) R is commutative and α is symmetric.
(i) By Lemmas 5 and 6, we get our requirement.
(ii) By Lemma 5, we have δ(a,b,c)rβ(x,y,z)=0 for all a,b,c,r,x,y,z∈R. From our assumption δ(a,b,c)r=0 for all a,b,c,r∈R. Thus the primeness of R gives δ(a,b,c)=0 for all a,b,c∈R.
(iii) From Lemma 1 we have f(abc+cba)=f(a)bc+af(b)c+α(a,b)c+abf(c)+α(ab,c)+f(c)ba+cf(b)a+α(c,b)a+cbf(a)+α(cb,a) for all a,b,c∈R. Since R is commutative and α is symmetric, we get 0=2{f(abc)−f(a)bc−af(b)c−abf(c)}−α(a,b)c−α(ab,c)−aα(b,c)−α(a,bc) for all a,b,c∈R. Since α is a 2–cocycle, we have −aα(b,c)−α(a,bc)=−α(a,b)c−α(ab,c) for all a,b,c∈R. Therefore 0=2{f(abc)−f(a)bc−af(b)c−abf(c)−α(a,b)c−α(ab,c)} for all a,b,c∈R. Since R has characteristic not 2, δ(a,b,c)=0 for all a,b,c∈R, as required. □
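Before turning to the main result, a small numerical illustration may be helpful (this sketch is ours, not part of the paper): take R=M3(ℤ), a noncommutative prime ring of characteristic 0, let f(x)=Ax for a fixed matrix A (a left centralizer), and let α(x,y)=−xAy as in example (2) of the introduction. One can then check on random matrices that α satisfies the 2–cocycle condition and that δ vanishes identically, as Lemma 7 predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.integers(-3, 4, (n, n))  # fixed matrix; f(x) = A x, alpha(x, y) = -x A y

f = lambda x: A @ x              # a left centralizer on M_n(Z)
alpha = lambda x, y: -x @ A @ y  # the associated Hochschild 2-cocycle

def delta(x, y, z):
    # delta(x,y,z) = f(xyz) - f(x)yz - x f(y) z - alpha(x,y) z - xy f(z) - alpha(xy, z)
    return (f(x @ y @ z) - f(x) @ y @ z - x @ f(y) @ z
            - alpha(x, y) @ z - x @ y @ f(z) - alpha(x @ y, z))

for _ in range(100):
    x, y, z = (rng.integers(-3, 4, (n, n)) for _ in range(3))
    # 2-cocycle condition: x a(y,z) - a(xy,z) + a(x,yz) - a(x,y) z = 0
    cocycle = x @ alpha(y, z) - alpha(x @ y, z) + alpha(x, y @ z) - alpha(x, y) @ z
    assert np.all(cocycle == 0)
    assert np.all(delta(x, y, z) == 0)  # delta vanishes, as Lemma 7 predicts

print("2-cocycle condition and delta == 0 verified on random integer matrices")
```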
Main result
Theorem. Let R be a prime ring of characteristic not 2. Then every generalized Jordan triple derivation associated with a Hochschild 2–cocycle α is a generalized derivation associated with α in each of the following cases.
(i) R is noncommutative.
(ii) There exist x,y,z∈R such that β(x,y,z) is a nonzero divisor in M.
(iii) R is commutative and α is symmetric.
Suppose that f is a generalized Jordan triple derivation associated with a Hochschild 2–cocycle α. We denote by G(a,b) and a^b the elements of M defined by G(a,b)=f(ab)−f(a)b−af(b) and a^b=f(ab)−f(a)b−af(b)−α(a,b), respectively. Thus, a^b=G(a,b)−α(a,b). It is evident that a^{b+c}=a^b+a^c, and (a+b)^c=a^c+b^c. By Lemma 7, we have δ(a,b,c)=0 for all a,b,c∈R. Thus, for all a,b,c∈R
$$f(abc)=f(a)bc+af(b)c+\alpha(a,b)c+abf(c)+\alpha(ab,c). \qquad (6)$$
Now let v=f(abxab), then 0=v−v=f((ab)x(ab))−f(a(bxa)b). By (6), we have for all a,b,x∈R
$$\begin{aligned} 0&=f(ab)xab+abf(x)ab+\alpha(ab,x)ab+abxf(ab)+\alpha(abx,ab)\\ &-f(a)bxab-af(b)xab-abf(x)ab-a\alpha(b,x)ab-abxf(a)b\\ &-a\alpha(bx,a)b-\alpha(a,bxa)b-abxaf(b)-\alpha(abxa,b). \end{aligned}$$
So, for all a,b,x∈R
$$\begin{aligned} 0&=G(a,b)xab+abxG(a,b)+\{\alpha(ab,x)-a\alpha(b,x)\}ab\\ &+\{\alpha(abx,ab)-\alpha(abxa,b)\}-a\alpha(bx,a)b-\alpha(a,bxa)b. \qquad (7) \end{aligned}$$
Since α is a 2–cocycle, we have for all a,b,x∈R that
(i) {α(ab,x)−aα(b,x)}ab={α(a,bx)−α(a,b)x}ab, and
(ii) α(abx,ab)−α((abx)a,b)=α(abx,a)b−(abx)α(a,b).
Substituting from (i) and (ii) in (7), we get G(a,b)xab−α(a,b)xab+abxG(a,b)−abxα(a,b)+α(a,bx)ab+α(abx,a)b−aα(bx,a)b−α(a,bxa)b=0 for all a,b,x∈R. But α is a 2–cocycle, hence {α(a,bx)a+α(abx,a)−aα(bx,a)−α(a,bxa)}b=0. Therefore a^bx(ab)+(ab)xa^b=0 for all a,b,x∈R. By Lemma 1.1 of Brešar [3], we get
$$a^{b}x(ab)=(ab)xa^{b}=0\quad\text{for all } a,b,x\in R. \qquad (8)$$
Replacing a by a+c in (8) and using (8), we obtain a^bxcb=−c^bxab for all a,b,c,x∈R, and then (a^bxcb)y(a^bxcb)=−a^bx(cbyc^b)xab=0 for all a,b,c,x,y∈R. Thus the primeness of R gives
$$a^{b}xcb=0\quad\text{for all } a,b,c,x\in R. \qquad (9)$$
Similarly replacing b by b+d in (9), we get
$$a^{b}xcd=0\quad\text{for all } a,b,c,d,x\in R. \qquad (10)$$
Putting c=a^b and x=dx in (10), we have a^bdxa^bd=0 for all a,b,d,x∈R. Again, the primeness of R yields that a^bd=0 for all a,b,d∈R, and hence a^b=0 for all a,b∈R. Consequently, f is a generalized derivation associated with the Hochschild 2–cocycle α. □
Herstein, I. N.: Jordan derivations of prime rings. Proc. Amer. Math. Soc. 8, 1104–1110 (1957).
Brešar, M.: Jordan derivations on semiprime rings. Proc. Amer. Math. Soc. 104, 1003–1006 (1988).
Brešar, M.: Jordan mappings of semiprime rings. J. Algebra. 127, 218–228 (1989).
Jing, W., Lu, S.: Generalized Jordan derivations on prime rings and standard operator algebras. Taiwan. J. Math. 7, 605–613 (2003).
Liu, C., Shiue, W.: Generalized Jordan triple (θ,ϕ)–derivations on semiprime rings. Taiwan. J. Math. 11(5), 1397–1406 (2007).
Vukman, J., Kosi-Ulbl, I.: On centralizers of semiprime rings. Aequationes Math. 66(3), 277–283 (2003).
Daif, M. N., El-Sayiad, M. S., Muthana, N. M.: An identity on θ-centralizers of semiprime rings. Int. Math. Forum. 3, 937–944 (2008).
Nakajima, A.: Note on generalized Jordan derivation associate with Hochschild 2−cocycles of rings. Turkish J. Math. 30, 403–411 (2006).
Nawzad, A., Abdulla, H., Majeed, A. H.: Generalized derivations on semiprime rings. Sci. Magna. 6(3), 34–39 (2010).
Rehman, N., Hongan, M.: Generalized Jordan derivations on Lie ideals associate with Hochschild 2−cocycles of rings. Rend. Circolo Matematico Palermo. 60(3), 437–444 (2011).
Martindale III, W. S: Prime rings satisfying a generalized polynomial identity. J. Algebra. 12, 576–584 (1969).
Chuang, C.: GPIs having coefficients in Utumi quotient rings. Proc. Amer. Math. Soc. 103, 723–728 (1988).
The authors are very grateful to Prof. M. N. Daif for his helpful comments and suggestions. This paper is a part of the second author's Ph.D. dissertation under the supervision of Prof M. N. Daif.
Department of Mathematics, Faculty of Science, Al-Azhar University, Nasr City, 11884, Cairo, Egypt
Both authors read and approved the final manuscript.
Correspondence to O. H. Ezzat.
Ezzat, O., Nabiel, H. Generalized Jordan triple derivations associated with Hochschild 2–cocycles of rings. J Egypt Math Soc 27, 4 (2019). https://doi.org/10.1186/s42787-019-0003-3
Keywords: Prime ring; Generalized Jordan triple derivation; Hochschild 2–cocycle.
Chapter 13 Temperature, Kinetic Theory, and the Gas Laws
13.2 Thermal Expansion of Solids and Liquids
Define and describe thermal expansion.
Calculate the linear expansion of an object given its initial length, change in temperature, and coefficient of linear expansion.
Calculate the volume expansion of an object given its initial volume, change in temperature, and coefficient of volume expansion.
Calculate thermal stress on an object given its original volume, temperature change, volume change, and bulk modulus.
Figure 1. Thermal expansion joints like these in the Auckland Harbour Bridge in New Zealand allow bridges to change length without buckling. (credit: Ingolfson, Wikimedia Commons)
The expansion of alcohol in a thermometer is one of many commonly encountered examples of thermal expansion, the change in size or volume of a given mass with temperature. Hot air rises because its volume increases, which causes the hot air's density to be smaller than the density of surrounding air, causing a buoyant (upward) force on the hot air. The same happens in all liquids and gases, driving natural heat transfer upwards in homes, oceans, and weather systems. Solids also undergo thermal expansion. Railroad tracks and bridges, for example, have expansion joints to allow them to freely expand and contract with temperature changes.
What are the basic properties of thermal expansion? First, thermal expansion is clearly related to temperature change. The greater the temperature change, the more a bimetallic strip will bend. Second, it depends on the material. In a thermometer, for example, the expansion of alcohol is much greater than the expansion of the glass containing it.
What is the underlying cause of thermal expansion? As is discussed in Chapter 13.4 Kinetic Theory: Atomic and Molecular Explanation of Pressure and Temperature, an increase in temperature implies an increase in the kinetic energy of the individual atoms. In a solid, unlike in a gas, the atoms or molecules are closely packed together, but their kinetic energy (in the form of small, rapid vibrations) pushes neighboring atoms or molecules apart from each other. This neighbor-to-neighbor pushing results in a slightly greater distance, on average, between neighbors, and adds up to a larger size for the whole body. For most substances under ordinary conditions, there is no preferred direction, and an increase in temperature will increase the solid's size by a certain fraction in each dimension.
LINEAR THERMAL EXPANSION—THERMAL EXPANSION IN ONE DIMENSION
The change in length[latex]\boldsymbol{\Delta{L}}[/latex]is proportional to length[latex]\boldsymbol{L}.[/latex]The dependence of thermal expansion on temperature, substance, and length is summarized in the equation
[latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}},[/latex]
where[latex]\boldsymbol{\Delta{L}}[/latex]is the change in length[latex]\boldsymbol{L},\:\boldsymbol{\Delta{T}}[/latex]is the change in temperature, and[latex]\boldsymbol{\alpha}[/latex]is the coefficient of linear expansion, which varies slightly with temperature.
Table 2 lists representative values of the coefficient of linear expansion, which may have units of[latex]\boldsymbol{1/^{\circ}\textbf{C}}[/latex]or 1/K. Because the size of a kelvin and a degree Celsius are the same, both[latex]\boldsymbol{\alpha}[/latex]and[latex]\boldsymbol{\Delta{T}}[/latex]can be expressed in units of kelvins or degrees Celsius. The equation[latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}}[/latex]is accurate for small changes in temperature and can be used for large changes in temperature if an average value of[latex]\boldsymbol{\alpha}[/latex]is used.
Material | Coefficient of linear expansion α(1/ºC) | Coefficient of volume expansion β(1/ºC)
Aluminum [latex]\boldsymbol{25\times10^{-6}}[/latex] [latex]\boldsymbol{75\times10^{-6}}[/latex]
Brass [latex]\boldsymbol{19\times10^{-6}}[/latex] [latex]\boldsymbol{56\times10^{-6}}[/latex]
Copper [latex]\boldsymbol{17\times10^{-6}}[/latex] [latex]\boldsymbol{51\times10^{-6}}[/latex]
Gold [latex]\boldsymbol{14\times10^{-6}}[/latex] [latex]\boldsymbol{42\times10^{-6}}[/latex]
Iron or Steel [latex]\boldsymbol{12\times10^{-6}}[/latex] [latex]\boldsymbol{35\times10^{-6}}[/latex]
Invar (Nickel-iron alloy) [latex]\boldsymbol{0.9\times10^{-6}}[/latex] [latex]\boldsymbol{2.7\times10^{-6}}[/latex]
Lead [latex]\boldsymbol{29\times10^{-6}}[/latex] [latex]\boldsymbol{87\times10^{-6}}[/latex]
Silver [latex]\boldsymbol{18\times10^{-6}}[/latex] [latex]\boldsymbol{54\times10^{-6}}[/latex]
Glass (ordinary) [latex]\boldsymbol{9\times10^{-6}}[/latex] [latex]\boldsymbol{27\times10^{-6}}[/latex]
Glass (Pyrex®) [latex]\boldsymbol{3\times10^{-6}}[/latex] [latex]\boldsymbol{9\times10^{-6}}[/latex]
Quartz [latex]\boldsymbol{0.4\times10^{-6}}[/latex] [latex]\boldsymbol{1\times10^{-6}}[/latex]
Concrete, Brick [latex]\boldsymbol{\sim12\times10^{-6}}[/latex] [latex]\boldsymbol{\sim36\times10^{-6}}[/latex]
Marble (average) [latex]\boldsymbol{7\times10^{-6}}[/latex] [latex]\boldsymbol{21\times10^{-6}}[/latex]
Ether [latex]\boldsymbol{1650\times10^{-6}}[/latex]
Ethyl alcohol [latex]\boldsymbol{1100\times10^{-6}}[/latex]
Petrol [latex]\boldsymbol{950\times10^{-6}}[/latex]
Glycerin [latex]\boldsymbol{500\times10^{-6}}[/latex]
Mercury [latex]\boldsymbol{180\times10^{-6}}[/latex]
Water [latex]\boldsymbol{210\times10^{-6}}[/latex]
Air and most other gases at atmospheric pressure [latex]\boldsymbol{3400\times10^{-6}}[/latex]
Table 2. Thermal Expansion Coefficients at 20ºC
Example 1: Calculating Linear Thermal Expansion: The Golden Gate Bridge
The main span of San Francisco's Golden Gate Bridge is 1275 m long at its coldest. The bridge is exposed to temperatures ranging from[latex]\boldsymbol{-15^{\circ}\textbf{C}}[/latex]to[latex]\boldsymbol{40^{\circ}\textbf{C}}.[/latex]What is its change in length between these temperatures? Assume that the bridge is made entirely of steel.
Strategy

Use the equation for linear thermal expansion[latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}}[/latex]to calculate the change in length,[latex]\boldsymbol{\Delta{L}}.[/latex]Use the coefficient of linear expansion,[latex]\boldsymbol{\alpha},[/latex]for steel from Table 2, and note that the change in temperature,[latex]\boldsymbol{\Delta{T}},[/latex]is[latex]\boldsymbol{55^{\circ}\textbf{C}}.[/latex]
Solution

Plug all of the known values into the equation to solve for[latex]\boldsymbol{\Delta{L}}.[/latex]
[latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}\:=}[/latex][latex]\boldsymbol{\left(\frac{12\times10^{-6}}{^{\circ}\textbf{C}}\right)}[/latex][latex]\boldsymbol{(1275\textbf{ m})(55^{\circ}\textbf{C})=0.84\textbf{ m.}}[/latex]
Discussion

Although not large compared with the length of the bridge, this change in length is observable. It is generally spread over many expansion joints so that the expansion at each joint is small.
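For readers who want to script such calculations, here is a minimal Python sketch of[latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}}[/latex](the function name is ours; the coefficient is the steel value from Table 2) that reproduces this example:

```python
def linear_expansion(alpha, length, delta_T):
    """Change in length: dL = alpha * L * dT."""
    return alpha * length * delta_T

# Example 1: the steel main span, heated from -15 degC to 40 degC (dT = 55 degC).
alpha_steel = 12e-6  # 1/degC, from Table 2
dL = linear_expansion(alpha_steel, length=1275.0, delta_T=55.0)
print(f"change in length: {dL:.2f} m")  # prints 0.84 m
```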
Thermal Expansion in Two and Three Dimensions
Objects expand in all dimensions, as illustrated in Figure 2. That is, their areas and volumes, as well as their lengths, increase with temperature. Holes also get larger with temperature. If you cut a hole in a metal plate, the remaining material will expand exactly as it would if the plug was still in place. The plug would get bigger, and so the hole must get bigger too. (Think of the ring of neighboring atoms or molecules on the wall of the hole as pushing each other farther apart as temperature increases. Obviously, the ring of neighbors must get slightly larger, so the hole gets slightly larger).
THERMAL EXPANSION IN TWO DIMENSIONS
For small temperature changes, the change in area[latex]\boldsymbol{\Delta{A}}[/latex]is given by
[latex]\boldsymbol{\Delta{A}=2\alpha{A}\Delta{T}},[/latex]
where[latex]\boldsymbol{\Delta{A}}[/latex]is the change in area[latex]\boldsymbol{A},\:\boldsymbol{\Delta{T}}[/latex]is the change in temperature, and[latex]\boldsymbol{\alpha}[/latex]is the coefficient of linear expansion, which varies slightly with temperature.
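To see where the factor of 2 comes from, consider a square of side[latex]\boldsymbol{L}:[/latex]the expanded area is[latex]\boldsymbol{(L+\Delta{L})^2=L^2+2L\Delta{L}+(\Delta{L})^2\approx{A}+2\alpha{A}\Delta{T}},[/latex]since the[latex]\boldsymbol{(\Delta{L})^2}[/latex]term is negligibly small for small temperature changes. The factor of 3 in the volume formula below arises the same way from cubing[latex]\boldsymbol{(L+\Delta{L})}.[/latex]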
Figure 2. In general, objects expand in all directions as temperature increases. In these drawings, the original boundaries of the objects are shown with solid lines, and the expanded boundaries with dashed lines. (a) Area increases because both length and width increase. The area of a circular plug also increases. (b) If the plug is removed, the hole it leaves becomes larger with increasing temperature, just as if the expanding plug were still in place. (c) Volume also increases, because all three dimensions increase.
THERMAL EXPANSION IN THREE DIMENSIONS
The change in volume[latex]\boldsymbol{\Delta{V}}[/latex]is very nearly[latex]\boldsymbol{\Delta{V}=3\alpha{V}\Delta{T}}.[/latex]This equation is usually written as
[latex]\boldsymbol{\Delta{V}=\beta{V}\Delta{T}},[/latex]
where[latex]\boldsymbol{\beta}[/latex]is the coefficient of volume expansion and[latex]\boldsymbol{\beta\approx{3}\alpha}.[/latex]Note that the values of[latex]\boldsymbol{\beta}[/latex]in Table 2 are almost exactly equal to[latex]\boldsymbol{3\alpha}.[/latex]
In general, objects will expand with increasing temperature. Water is the most important exception to this rule. Water expands with increasing temperature (its density decreases) when it is at temperatures greater than[latex]\boldsymbol{4^{\circ}\textbf{C }(40^{\circ}\textbf{F})}.[/latex]However, it expands with decreasing temperature when it is between[latex]\boldsymbol{+4^{\circ}\textbf{C}}[/latex]and[latex]\boldsymbol{0^{\circ}\textbf{C}(40^{\circ}\textbf{F}}[/latex]to[latex]\boldsymbol{32^{\circ}\textbf{F})}.[/latex]Water is densest at[latex]\boldsymbol{+4^{\circ}\textbf{C}}.[/latex](See Figure 3.) Perhaps the most striking effect of this phenomenon is the freezing of water in a pond. When water near the surface cools down to[latex]\boldsymbol{4^{\circ}\textbf{C}}[/latex]it is denser than the remaining water and thus will sink to the bottom. This "turnover" results in a layer of warmer water near the surface, which is then cooled. Eventually the pond has a uniform temperature of[latex]\boldsymbol{4^{\circ}\textbf{C}}.[/latex]If the temperature in the surface layer drops below[latex]\boldsymbol{4^{\circ}\textbf{C}},[/latex]the water is less dense than the water below, and thus stays near the top. As a result, the pond surface can completely freeze over. The ice on top of liquid water provides an insulating layer from winter's harsh exterior air temperatures. Fish and other aquatic life can survive in[latex]\boldsymbol{4^{\circ}\textbf{C}}[/latex]water beneath ice, due to this unusual characteristic of water. It also produces circulation of water in the pond that is necessary for a healthy ecosystem of the body of water.
Figure 3. The density of water as a function of temperature. Note that the thermal expansion is actually very small. The maximum density at +4ºC is only 0.0075% greater than the density at 2ºC, and 0.012% greater than that at 0ºC.
MAKING CONNECTIONS: REAL-WORLD CONNECTIONS—FILLING THE TANK
Differences in the thermal expansion of materials can lead to interesting effects at the gas station. One example is the dripping of gasoline from a freshly filled tank on a hot day. Gasoline starts out at the temperature of the ground under the gas station, which is cooler than the air temperature above. The gasoline cools the steel tank when it is filled. Both gasoline and steel tank expand as they warm to air temperature, but gasoline expands much more than steel, and so it may overflow.
This difference in expansion can also cause problems when interpreting the gasoline gauge. The actual amount (mass) of gasoline left in the tank when the gauge hits "empty" is a lot less in the summer than in the winter. The gasoline has the same volume as it does in the winter when the "add fuel" light goes on, but because the gasoline has expanded, there is less mass. If you are used to getting another 40 miles on "empty" in the winter, beware—you will probably run out much more quickly in the summer.
Figure 4. Because the gas expands more than the gas tank with increasing temperature, you can't drive as many miles on "empty" in the summer as you can in the winter. (credit: Hector Alejandro, Flickr)
Example 2: Calculating Thermal Expansion: Gas vs. Gas Tank
Suppose your 60.0-L (15.9-gal) steel gasoline tank is full of gas, so both the tank and the gasoline have a temperature of[latex]\boldsymbol{15.0^{\circ}\textbf{C}}.[/latex]How much gasoline has spilled by the time they warm to[latex]\boldsymbol{35.0^{\circ}\textbf{C}}?[/latex]
Strategy

The tank and gasoline increase in volume, but the gasoline increases more, so the amount spilled is the difference in their volume changes. (The gasoline tank can be treated as solid steel.) We can use the equation for volume expansion to calculate the change in volume of the gasoline and of the tank.
1. Use the equation for volume expansion to calculate the increase in volume of the steel tank:
[latex]\boldsymbol{\Delta{V}_{\textbf{s}}=\beta_{\textbf{s}}V_{\textbf{s}}\Delta{T}}.[/latex]
2. The increase in volume of the gasoline is given by this equation:
[latex]\boldsymbol{\Delta{V}_{\textbf{gas}}=\beta_{\textbf{gas}}V_{\textbf{gas}}\Delta{T}}.[/latex]
3. Find the difference in volume to determine the amount spilled as
[latex]\boldsymbol{V_{\textbf{spill}}=\Delta{V}_{\textbf{gas}}-\Delta{V}_{\textbf{s}}}.[/latex]
Alternatively, we can combine these three equations into a single equation. (Note that the original volumes are equal.)
[latex]\begin{array}{lcl} \boldsymbol{V_{\textbf{spill}}} & \boldsymbol{=} & \boldsymbol{(\beta_{\textbf{gas}}-\beta_{\textbf{s}})V\Delta{T}} \\ {} & \boldsymbol{=} & \boldsymbol{[(950-35)\times10^{-6}\textbf{/}^{\circ}\textbf{C}](60.0\textbf{ L})(20.0^{\circ}\textbf{C})} \\ {} & \boldsymbol{=} & \boldsymbol{1.10\textbf{ L.}} \end{array}[/latex]
This amount is significant, particularly for a 60.0-L tank. The effect is so striking because the gasoline and steel expand quickly. The rate of change in thermal properties is discussed in Chapter 14 Heat and Heat Transfer Methods.
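As a quick numerical check of this result, a few lines of Python reproduce the spill volume (a minimal sketch; the expansion coefficients are the tabulated values used in the solution above):

    # Thermal (volume) expansion: dV = beta * V0 * dT
    beta_gas = 950e-6     # 1/degC, volume expansion coefficient of gasoline
    beta_steel = 35e-6    # 1/degC, volume expansion coefficient of steel
    V0 = 60.0             # L, initial volume of both tank and fuel
    dT = 35.0 - 15.0      # degC, temperature rise

    spill = (beta_gas - beta_steel) * V0 * dT
    print(f"Spilled volume: {spill:.2f} L")   # -> 1.10 L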
If you try to cap the tank tightly to prevent overflow, you will find that it leaks anyway, either around the cap or by bursting the tank. Tightly constricting the expanding gas is equivalent to compressing it, and both liquids and solids resist being compressed with extremely large forces. To avoid rupturing rigid containers, these containers have air gaps, which allow them to expand and contract without stressing them.
Thermal Stress
Thermal stress is created by thermal expansion or contraction (see Chapter 5.3 Elasticity: Stress and Strain for a discussion of stress and strain). Thermal stress can be destructive, such as when expanding gasoline ruptures a tank. It can also be useful, for example, when two parts are joined together by heating one in manufacturing, then slipping it over the other and allowing the combination to cool. Thermal stress can explain many phenomena, such as the weathering of rocks and pavement by the expansion of ice when it freezes.
Example 3: Calculating Thermal Stress: Gas Pressure
What pressure would be created in the gasoline tank considered in Example 2, if the gasoline increases in temperature from[latex]\boldsymbol{15.0^{\circ}\textbf{C}}[/latex]to[latex]\boldsymbol{35.0^{\circ}\textbf{C}}[/latex]without being allowed to expand? Assume that the bulk modulus[latex]\boldsymbol{B}[/latex]for gasoline is[latex]\boldsymbol{1.00\times10^9\textbf{ N/m}^2}.[/latex](For more on bulk modulus, see Chapter 5.3 Elasticity: Stress and Strain.)
To solve this problem, we must use the following equation, which relates a change in volume[latex]\boldsymbol{\Delta{V}}[/latex]to pressure:
[latex]\boldsymbol{\Delta{V}\:=}[/latex][latex]\boldsymbol{\frac{1}{B}\frac{F}{A}}[/latex][latex]\boldsymbol{V_0,}[/latex]
where[latex]\boldsymbol{F/A}[/latex]is pressure,[latex]\boldsymbol{V_0}[/latex]is the original volume, and[latex]\boldsymbol{B}[/latex]is the bulk modulus of the material involved. We will use the amount spilled in Example 2 as the change in volume,[latex]\boldsymbol{\Delta{V}}.[/latex]
1. Rearrange the equation for calculating pressure:
[latex]\boldsymbol{P\:=}[/latex][latex]\boldsymbol{\frac{F}{A}}[/latex][latex]\boldsymbol{=}[/latex][latex]\boldsymbol{\frac{\Delta{V}}{V_0}}[/latex][latex]\boldsymbol{B.}[/latex]
2. Insert the known values. The bulk modulus for gasoline is[latex]\boldsymbol{B=1.00\times10^9\textbf{ N/m}^2}.[/latex]In the previous example, the change in volume[latex]\boldsymbol{\Delta{V}=1.10\textbf{ L}}[/latex]is the amount that would spill. Here,[latex]\boldsymbol{V_0=60.0\textbf{ L}}[/latex]is the original volume of the gasoline. Substituting these values into the equation, we obtain
[latex]\boldsymbol{P\:=}[/latex][latex]\boldsymbol{\frac{1.10\textbf{ L}}{60.0\textbf{ L}}}[/latex][latex]\boldsymbol{(1.00\times10^9\textbf{ Pa})=1.83\times10^7\textbf{ Pa.}}[/latex]
This pressure is about[latex]\boldsymbol{2500\textbf{ lb/in}^2},[/latex]much more than a gasoline tank can handle.
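The pressure estimate can be checked the same way (a sketch reusing the 1.10-L spill from Example 2 and the given bulk modulus):

    B = 1.00e9     # N/m^2, bulk modulus of gasoline (given above)
    dV = 1.10      # L, the prevented volume change from Example 2
    V0 = 60.0      # L, original volume
    P = (dV / V0) * B
    print(f"P = {P:.3g} Pa")               # -> 1.83e+07 Pa
    print(f"P = {P / 6895:.0f} lb/in^2")   # ~2700, same order as the "about 2500 lb/in^2" quoted above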
Forces and pressures created by thermal stress are typically as great as that in the example above. Railroad tracks and roadways can buckle on hot days if they lack sufficient expansion joints. (See Figure 5.) Power lines sag more in the summer than in the winter, and will snap in cold weather if there is insufficient slack. Cracks open and close in plaster walls as a house warms and cools. Glass cooking pans will crack if cooled rapidly or unevenly, because of differential contraction and the stresses it creates. (Pyrex® is less susceptible because of its small coefficient of thermal expansion.) Nuclear reactor pressure vessels are threatened by overly rapid cooling, and although none have failed, several have been cooled faster than considered desirable. Biological cells are ruptured when foods are frozen, detracting from their taste. Repeated thawing and freezing accentuate the damage. Even the oceans can be affected. A significant portion of the rise in sea level that is resulting from global warming is due to the thermal expansion of sea water.
Figure 5. Thermal stress contributes to the formation of potholes. (credit: Editor5807, Wikimedia Commons)
Metal is regularly used in the human body for hip and knee implants. Most implants need to be replaced over time because, among other things, metal does not bond with bone. Researchers are trying to find better metal coatings that would allow metal-to-bone bonding. One challenge is to find a coating that has an expansion coefficient similar to that of metal. If the expansion coefficients are too different, the thermal stresses during the manufacturing process lead to cracks at the coating-metal interface.
Another example of thermal stress is found in the mouth. Dental fillings can expand differently from tooth enamel, which can cause pain when eating ice cream or drinking something hot, and cracks may develop in the filling. Metal fillings (gold, silver, etc.) are being replaced by composite fillings (porcelain), which have smaller coefficients of expansion that are closer to those of teeth.
1: Two blocks, A and B, are made of the same material. Block A has dimensions[latex]\boldsymbol{l\times{w}\times{h}=L\times{2L}\times{L}}[/latex]and Block B has dimensions[latex]\boldsymbol{2L\times{2L}\times{2L}}.[/latex]If the temperature changes, what is (a) the change in the volume of the two blocks, (b) the change in the cross-sectional area[latex]\boldsymbol{l\times{w}},[/latex]and (c) the change in the height[latex]\boldsymbol{h}[/latex]of the two blocks?
Thermal expansion is the increase, or decrease, of the size (length, area, or volume) of a body due to a change in temperature.
Thermal expansion is large for gases, and relatively small, but not negligible, for liquids and solids.
Linear thermal expansion is [latex]\boldsymbol{\Delta{L}=\alpha{L}\Delta{T}},[/latex] where [latex]\boldsymbol{\Delta{L}}[/latex] is the change in length [latex]\boldsymbol{L},[/latex] [latex]\boldsymbol{\Delta{T}}[/latex] is the change in temperature, and [latex]\boldsymbol{\alpha}[/latex] is the coefficient of linear expansion, which varies slightly with temperature.
The change in area due to thermal expansion is [latex]\boldsymbol{\Delta{A}=2\alpha{A}\Delta{T}},[/latex] where [latex]\boldsymbol{\Delta{A}}[/latex] is the change in area.
The change in volume due to thermal expansion is [latex]\boldsymbol{\Delta{V}=\beta{V}\Delta{T}},[/latex] where [latex]\boldsymbol{\beta}[/latex] is the coefficient of volume expansion and [latex]\boldsymbol{\beta\approx3\alpha}.[/latex] Thermal stress is created when thermal expansion is constrained.
1: Thermal stresses caused by uneven cooling can easily break glass cookware. Explain why Pyrex®, a glass with a small coefficient of linear expansion, is less susceptible.
2: Water expands significantly when it freezes: a volume increase of about 9% occurs. As a result of this expansion and because of the formation and growth of crystals as water freezes, anywhere from 10% to 30% of biological cells are burst when animal or plant material is frozen. Discuss the implications of this cell damage for the prospect of preserving human bodies by freezing so that they can be thawed at some future date when it is hoped that all diseases are curable.
3: One method of getting a tight fit, say of a metal peg in a hole in a metal block, is to manufacture the peg slightly larger than the hole. The peg is then inserted when at a different temperature than the block. Should the block be hotter or colder than the peg during insertion? Explain your answer.
4: Does it really help to run hot water over a tight metal lid on a glass jar before trying to open it? Explain your answer.
5: Liquids and solids expand with increasing temperature, because the kinetic energy of a body's atoms and molecules increases. Explain why some materials shrink with increasing temperature.
1: The height of the Washington Monument is measured to be 170 m on a day when the temperature is[latex]\boldsymbol{35.0^{\circ}\textbf{C}}.[/latex]What will its height be on a day when the temperature falls to[latex]\boldsymbol{-10.0^{\circ}\textbf{C}}?[/latex]Although the monument is made of limestone, assume that its thermal coefficient of expansion is the same as marble's.
2: How much taller does the Eiffel Tower become at the end of a day when the temperature has increased by[latex]\boldsymbol{15^{\circ}\textbf{C}}?[/latex]Its original height is 321 m and you can assume it is made of steel.
3: What is the change in length of a 3.00-cm-long column of mercury if its temperature changes from[latex]\boldsymbol{37.0^{\circ}\textbf{C}}[/latex]to[latex]\boldsymbol{40.0^{\circ}\textbf{C}},[/latex]assuming the mercury is unconstrained?
4: How large an expansion gap should be left between steel railroad rails if they may reach a maximum temperature[latex]\boldsymbol{35.0^{\circ}\textbf{C}}[/latex]greater than when they were laid? Their original length is 10.0 m.
5: You are looking to purchase a small piece of land in Hong Kong. The price is "only" $60,000 per square meter! The land title says the dimensions are[latex]\boldsymbol{20\textbf{ m}\times30\textbf{ m}.}[/latex]By how much would the total price change if you measured the parcel with a steel tape measure on a day when the temperature was[latex]\boldsymbol{20^{\circ}\textbf{C}}[/latex]above normal?
6: Global warming will produce rising sea levels partly due to melting ice caps but also due to the expansion of water as average ocean temperatures rise. To get some idea of the size of this effect, calculate the change in length of a column of water 1.00 km high for a temperature increase of[latex]\boldsymbol{1.00^{\circ}\textbf{C}}.[/latex]Note that this calculation is only approximate because ocean warming is not uniform with depth.
7: Show that 60.0 L of gasoline originally at[latex]\boldsymbol{15.0^{\circ}\textbf{C}}[/latex]will expand to 61.1 L when it warms to[latex]\boldsymbol{35.0^{\circ}\textbf{C}},[/latex]as claimed in Example 2.
8: (a) Suppose a meter stick made of steel and one made of invar (an alloy of iron and nickel) are the same length at[latex]\boldsymbol{0^{\circ}\textbf{C}}.[/latex]What is their difference in length at[latex]\boldsymbol{22.0^{\circ}\textbf{C}}?[/latex](b) Repeat the calculation for two 30.0-m-long surveyor's tapes.
9: (a) If a 500-mL glass beaker is filled to the brim with ethyl alcohol at a temperature of[latex]\boldsymbol{5.00^{\circ}\textbf{C}},[/latex]how much will overflow when its temperature reaches[latex]\boldsymbol{22.0^{\circ}\textbf{C}}?[/latex](b) How much less water would overflow under the same conditions?
10: Most automobiles have a coolant reservoir to catch radiator fluid that may overflow when the engine is hot. A radiator is made of copper and is filled to its 16.0-L capacity when at[latex]\boldsymbol{10.0^{\circ}\textbf{C}}.[/latex]What volume of radiator fluid will overflow when the radiator and fluid reach their[latex]\boldsymbol{95.0^{\circ}\textbf{C}}[/latex]operating temperature, given that the fluid's volume coefficient of expansion is[latex]\boldsymbol{\beta=400\times10^{-6}\textbf{/}^{\circ}\textbf{C}}?[/latex]Note that this coefficient is approximate, because most car radiators have operating temperatures of greater than[latex]\boldsymbol{95.0^{\circ}\textbf{C}}.[/latex]
11: A physicist makes a cup of instant coffee and notices that, as the coffee cools, its level drops 3.00 mm in the glass cup. Show that this decrease cannot be due to thermal contraction by calculating the decrease in level if the[latex]\boldsymbol{350\textbf{ cm}^3}[/latex]of coffee is in a 7.00-cm-diameter cup and decreases in temperature from[latex]\boldsymbol{95.0^{\circ}\textbf{C}}[/latex]to[latex]\boldsymbol{45.0^{\circ}\textbf{C}}.[/latex](Most of the drop in level is actually due to escaping bubbles of air.)
12: (a) The density of water at[latex]\boldsymbol{0^{\circ}\textbf{C}}[/latex]is very nearly[latex]\boldsymbol{1000\textbf{ kg/m}^3}[/latex](it is actually[latex]\boldsymbol{999.84\textbf{ kg/m}^3}[/latex]), whereas the density of ice at[latex]\boldsymbol{0^{\circ}\textbf{C}}[/latex]is[latex]\boldsymbol{917\textbf{ kg/m}^3}.[/latex]Calculate the pressure necessary to keep ice from expanding when it freezes, neglecting the effect such a large pressure would have on the freezing temperature. (This problem gives you only an indication of how large the forces associated with freezing water might be.) (b) What are the implications of this result for biological cells that are frozen?
13: Show that[latex]\boldsymbol{\beta\approx3\alpha},[/latex]by calculating the change in volume[latex]\boldsymbol{\Delta{V}}[/latex]of a cube with sides of length[latex]\boldsymbol{L}.[/latex]
thermal expansion
the change in size or volume of an object with change in temperature
coefficient of linear expansion
[latex]\boldsymbol{\alpha},[/latex]the change in length, per unit length, per[latex]\boldsymbol{1^{\circ}\textbf{C}}[/latex]change in temperature; a constant used in the calculation of linear expansion; the coefficient of linear expansion depends on the material and to some degree on the temperature of the material
coefficient of volume expansion
[latex]\boldsymbol{\beta},[/latex]the change in volume, per unit volume, per[latex]\boldsymbol{1^{\circ}\textbf{C}}[/latex]change in temperature
thermal stress
stress caused by thermal expansion or contraction
1: (a) The change in volume is proportional to the original volume. Block A has a volume of[latex]\boldsymbol{L\times{2L}\times{L}=2L^3}.[/latex]Block B has a volume of[latex]\boldsymbol{2L\times{2L}\times{2L}=8L^3},[/latex]which is 4 times that of Block A. Thus the change in volume of Block B should be 4 times the change in volume of Block A.
(b) The change in area is proportional to the area. The cross-sectional area of Block A is[latex]\boldsymbol{L\times{2L}=2L^2},[/latex]while that of Block B is[latex]\boldsymbol{2L\times{2L}=4L^2}.[/latex]Because the cross-sectional area of Block B is twice that of Block A, the change in the cross-sectional area of Block B is twice that of Block A.
(c) The change in height is proportional to the original height. Because the original height of Block B is twice that of A, the change in the height of Block B is twice that of Block A.
[latex]\boldsymbol{5.4\times10^{-6}\textbf{ m}}[/latex]
Because the area gets smaller, the price of the land DECREASES by[latex]\boldsymbol{\sim\$17,000}.[/latex]
[latex]\begin{array}{lcl} \boldsymbol{V} & \boldsymbol{=} & \boldsymbol{V_0+\Delta{V}=V_0(1+\beta\Delta{T})} \\ {} & \boldsymbol{=} & \boldsymbol{(60.00\textbf{ L})[1+(950\times10^{-6}\textbf{/}^{\circ}\textbf{C})(35.0^{\circ}\textbf{C}-15.0^{\circ}\textbf{C})]} \\ {} & \boldsymbol{=} & \boldsymbol{61.1\textbf{ L}} \end{array}[/latex]
(a) 9.35 mL
(b) 7.56 mL
We know how the length changes with temperature:[latex]\boldsymbol{\Delta{L}=\alpha{L}_0\Delta{T}}.[/latex] We also know that the volume of a cube is related to its side length by[latex]\boldsymbol{V=L^3},[/latex]so the final volume is[latex]\boldsymbol{V=V_0+\Delta{V}=(L_0+\Delta{L})^3}.[/latex]Substituting for[latex]\boldsymbol{\Delta{L}}[/latex]gives
[latex]\boldsymbol{V=(L_0+\alpha{L}_0\Delta{T})^3=L_0^3(1+\alpha\Delta{T})^3}.[/latex]
Now, because[latex]\boldsymbol{\alpha\Delta{T}}[/latex]is small, we can use the binomial expansion:
[latex]\boldsymbol{V\:\approx\:L_0^3(1+3\alpha\Delta{T})=L_0^3+3\alpha{L}_0^3\Delta{T}}.[/latex]
So writing the length terms in terms of volumes gives[latex]\boldsymbol{V=V_0+\Delta{V}\approx{V}_0+3\alpha{V}_0\Delta{T}},[/latex]and so
[latex]\boldsymbol{\Delta{V}=\beta{V}_0\Delta{T}\approx3\alpha{V}_0\Delta{T}\textbf{, or }\beta\approx3\alpha}.[/latex]
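That binomial-expansion step can also be verified symbolically; here is a small sketch using the sympy library (assumed available; the symbol names are ours):

    import sympy as sp

    L0, alpha, dT = sp.symbols('L_0 alpha DeltaT', positive=True)
    V = (L0 + alpha * L0 * dT)**3                  # exact volume of the expanded cube
    dV_lin = sp.series(V, alpha, 0, 2).removeO() - L0**3
    print(sp.simplify(dV_lin))                     # -> 3*DeltaT*L_0**3*alpha, i.e. 3*alpha*V0*DeltaT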
Solutions of a system of equations in the algebraic closure of GF(2)
How do I look for solutions of a system of equations over a particular field? For example, the following system in the variables $a,b,c,d,e,f$:
$$1 + a + c + e = 0,$$
$$b + d + f = 0,$$
$$1 + ae + ce + ac = 0,$$
$$be + af + cf + ed + bc + ad = 0,$$
$$bf + fd + bd = 0,$$
has the solution $a=1,\,b=1,\,c=1,\,d=\omega,\,e=1,\,f=\omega^2$, where $\omega$ is a primitive $3^{rd}$ root of unity. This solution lies in an extension of $\mathbb{GF}_2$, namely $\mathbb{GF}_4 = \mathbb{GF}_2[u]/(u^2+u+1)$, where $\mathbb{GF}_2[u]$ is the polynomial ring in the variable $u$; one representation of $\mathbb{GF}_4$ is $\{0,1,\omega,\omega^2\}$.
I have a set of equations, and I want to know whether solutions of these equations exist in an extension of the Galois field $\mathbb{GF}_2$, and if so, what they are. Is there a way to check this in Sage?
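A direct way to check the claimed solution in Sage is to build the polynomials over GF(4) and substitute (a minimal sketch; the generator name w and this particular construction of GF(4) are choices, not requirements):

    # GF(4) = GF(2)[w]/(w^2 + w + 1); w plays the role of the cube root of unity
    F = GF(4, 'w')
    w = F.gen()
    R = PolynomialRing(F, 'a,b,c,d,e,f')
    a, b, c, d, e, f = R.gens()
    polys = [1 + a + c + e,
             b + d + f,
             1 + a*e + c*e + a*c,
             b*e + a*f + c*f + e*d + b*c + a*d,
             b*f + f*d + b*d]
    sol = {a: 1, b: 1, c: 1, d: w, e: 1, f: w**2}
    print([p.subs(sol) for p in polys])   # -> [0, 0, 0, 0, 0]

To search for solutions rather than verify one, a standard trick is to adjoin the field equations $x^4 - x$ for each variable (forcing every coordinate into $\mathbb{GF}_4$) and then examine the resulting ideal, e.g., via a Gröbner basis; whether that is practical depends on the size of the system.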
Why is neodymium the most paramagnetic lanthanide?
My textbook says that paramagnetism rises to a maximum in neodymium.
I don't understand how gadolinium has 8 unpaired electrons, whereas neodymium has only 4. Shouldn't paramagnetism be higher in gadolinium?
Has this got anything to do with the magnetic moment associated with orbital angular momentum?
This is from Shriver and Atkins' Inorganic Chemistry (p. 586, 2009 ed.):
The magnetic moment of many d-metal ions can be calculated by using the spin-only approximation because the strong ligand field quenches the orbital contribution. But, for the lanthanoids, where the spin–orbit coupling is strong, the orbital angular momentum contributes to the magnetic moment, and the ions behave like almost free atoms. Therefore, the magnetic moment must be expressed in terms of the total angular momentum quantum number J: $$\mu = g_J{\{J(J+1)\}}^{1/2}\mu_B $$ where the Landé g-factor is $$g_J=1+\frac{S(S+1)-L(L+1)+J(J+1)}{2J(J+1)} $$ and $\mu_B$ is the Bohr Magneton.
As a high school student, the only thing that matters for you is the key point at the top of the page:
The magnetic moments of lanthanoid compounds arise from both spin and orbital contributions.
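As a numerical aside, the two formulas above are easy to evaluate; the following Python sketch (our own, using the standard Hund's-rules quantum numbers for the free ions, not values taken from this thread) gives the free-ion moments of Nd³⁺ and Gd³⁺ in units of $\mu_B$:

    from math import sqrt

    def lande_g(S, L, J):
        # Lande g-factor for a spin-orbit-coupled free ion
        return 1 + (S*(S + 1) - L*(L + 1) + J*(J + 1)) / (2*J*(J + 1))

    def moment(S, L, J):
        # mu = g_J * sqrt(J(J+1)), in Bohr magnetons
        return lande_g(S, L, J) * sqrt(J*(J + 1))

    print(f"Nd3+ (S=3/2, L=6, J=9/2): {moment(1.5, 6, 4.5):.2f} mu_B")  # ~3.62
    print(f"Gd3+ (S=7/2, L=0, J=7/2): {moment(3.5, 0, 3.5):.2f} mu_B")  # ~7.94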
Old question, but what is your textbook? Gadolinium is ferromagnetic just below room temperature and the most paramagnetic just above. Looking at this table we extract $$\begin{array}{rl}\text{Metal}&\chi_M/10^{-6}\text{cm}^3\text{mol}^{-1}\\\hline \text{La}&95.4\\ \text{Ce}&2500\\ \text{Pr}&5530\\ \text{Nd}&5930\\ \text{Pm}&\cdots\\ \text{Sm}&1278\\ \text{Eu}&30900\\ \text{Gd}&185000\\ \text{Tb}&170000\\ \text{Dy}&98000\\ \text{Ho}&72900\\ \text{Er}&48000\\ \text{Tm}&24700\\ \text{Yb}&67\\ \text{Lu}&182.9\\ \end{array}$$ The gadolinium result is quoted at $350 \text{ K}$ so as not to be too close to its Curie point. So Gd is in fact the most paramagnetic at temperatures where it is not actually ferromagnetic. You should not think of lanthanide metals as atoms, because they are most typically $+3$ ions with at least the $6s$ electrons stripped from the atoms. Europium and ytterbium may be present as $+2$ ions, and Ce and Tb perhaps as $+4$ ions, to achieve half- or fully filled $4f$ orbitals.
So your count should be $3$ unpaired electrons for Nd ($\text{Nd}^{3+}$, $4f^3$) and $7$ for Gd ($\text{Gd}^{3+}$, $4f^7$).
01.11.2017 | Review | Issue 6/2017 | Open Access
Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review
Wen-Lan Liu, Yun-Dou Xu, Jian-Tao Yao, Yong-Sheng Zhao
Supported by National Natural Science Foundation of China (Grant Nos. 51675458, 51275439), and Youth Top Talent Project of Hebei Province Higher Education of China (Grant No. BJ2017060).
Compared with parallel mechanisms (PMs) with six degrees of freedom (DoFs), lower mobility PMs have increasingly drawn attention from researchers and engineers in the robotics community in recent years, since 100 percent flexibility (i.e., six DoFs) is not required in many instances [ 1 ]. In terms of the relationship between the number of constraints and that of DoFs possessed by the moving platform of a PM, the lower mobility PMs can be divided into two categories: PMs in which the moving platform suffers exactly 6– n ( n represents the number of DoFs) constraints, for example, the 3-RPS PM [ 2 ], 3-UPU PM [ 3 – 5 ], 3-RCC PM [ 6 ], and so on [ 7 ], and those in which the moving platform suffers more than 6– n constraints supplied by all supporting limbs, for example, the 3-RRC PM [ 8 ], 2UPR + SPR PM [ 9 – 11 ], and 3-PRC PM [ 12 , 13 ]. The latter category of lower mobility PMs are called overconstrained PMs [ 14 , 15 ], and they contain common or redundant constraints that can be removed without changing the kinematics of the mechanisms [ 16 , 17 ]. The overconstrained PMs have the merits of higher stiffness and larger loading capacity with respect to general lower mobility PMs, which are also called passive overconstrained PMs, as the joint reactions are related to system stiffness [ 18 ].
It is well known that redundantly actuated PMs [ 19 – 21 ] have been investigated and utilized extensively, since redundant actuations can avoid kinematic singularities [ 22 – 27 ], enlarge load capability [ 28 – 30 ], improve dynamic characteristics [ 31 , 32 ], and eliminate backlash [ 33 , 34 ] of the mechanisms. The distribution of driving forces/torques of a redundantly actuated PM belongs to the statically indeterminate problem. From this view, the multi-robot cooperation system [ 35 , 36 ], walking machines with multiple legs [ 37 , 38 ], and mechanical hands grasping an object [ 39 , 40 ] can also be regarded as redundantly actuated PMs to some extent. As an actuator redundancy can transform a mechanism into an overconstrained mechanism, the redundantly actuated PMs are also overconstrained PMs. These kinds of PMs are called active overconstrained PMs, since the driving forces/torques can be distributed arbitrarily according to different optimization goals [ 18 ].
Redundant constraints exist in both active overconstrained and passive overconstrained PMs. Although they have no effect on the kinematics [ 41 ], they increase the complexity and difficulty of the force analysis of these two kinds of overconstrained PMs. The objective of force analysis of passive overconstrained PMs is to determine the driving forces/torques and constraint forces/moments that balance the external loads. In the case that only driving forces/torques are required, the methods for force analysis of non-overconstrained PMs are applicable to this kind of mechanisms, such as, the Newton–Euler formulation [ 42 , 43 ], the virtual work principle [ 44 , 45 ], the Lagrangian formulation [ 9 , 46 ], and so on [ 47 , 48 ]. In many cases, for example, when we want to take the influence of friction in joints into consideration, the constraint forces/moments need to be calculated. Bi et al. [ 10 ], built a complete and solvable dynamic model of a passive overconstrained PM by extending the Newton–Euler formulation. This method is computationally intensive because of the high-rank coefficient matrix of the simultaneous equations. Wojtyra et al. [ 17 , 49 – 52 ], proposed several mathematical and simulation methods to find the reactions for which joints can be uniquely determined. The corresponding physical interpretation is not considered in those methods. Vertechy et al. [ 53 ], Wang et al. [ 54 ], Yao et al. [ 55 ], and Hu et al. [ 56 ], studied the force analysis of passive overconstrained PMs under the condition that the deformations along different axes generated at the end of each supporting limb by the corresponding driving forces/torques and constraint forces/moments are independent of each other. The coupled deformations of each limb generated by the driving forces/torques and constraint forces/moments within the same limb are ignored. Based on the screw theory, Huang et al. [ 57 ], presented an approach for the kinetostatics of passive overconstrained PMs with collinear constraint forces or coaxial constraint moments. However, this approach is not suitable for the passive overconstrained PMs with general constraints. Xu et al. [ 58 ], defined the stiffness matrix of the limb's constraint or overconstraint wrenches, based on which a general method was proposed for the force analysis of passive overconstrained PMs. This method requires an accurate judgment of the overconstraint wrenches and the non-overconstraint wrenches. On the basis of the work shown in Ref. [ 58 ] the weighted generalized inverse method was proposed for solving the statically indeterminate problem of passive overconstrained PMs by the authors [ 59 ], in which the gravity of limbs was not considered. Besides, there are several other approaches presented in Refs. [ 60 – 64 ].
In theory, there are an infinite number of possible solutions to the statically indeterminate problem of active overconstrained PMs. At present, a variety of optimization goals have been proposed to distribute the driving forces/torques of active overconstrained PMs, such as minimizing the driving forces/torques [ 29 , 65 ], energy consumption [ 36 , 66 , 67 ], potential energy of the system [ 37 , 68 , 69 ], internal forces [ 18 , 40 , 70 ], and improving the traction/load sharing [ 71 ]. In essence, the driving forces/torques distribution of active overconstrained PMs under different optimization goals is just a constrained optimization problem; however, the existing methods have not formed a unified framework. The pseudo-inverse method is widely applied when the minimum driving forces/torques are selected as the objective [ 30 , 33 , 72 ]. A weighted coefficient method was proposed by Huang et al. [ 73 ], to solve the load distribution of a redundantly actuated walking machine, in which the values of the weighted coefficients can be given arbitrarily according to different optimization goals. Afterwards, the weighted coefficient method [ 73 ] was further developed into a weighted generalized inverse method in Ref. [ 59 ]. In addition to the abovementioned methods, there are other approaches for force analysis of active overconstrained PMs [ 74 – 76 ].
The force analysis of overconstrained PMs remains a challenging research topic. The approaches proposed to date differ considerably in their assumptions and scope, which makes it difficult to quickly identify a suitable method for a given overconstrained system. In this paper, the methods for force analysis of both active overconstrained and passive overconstrained PMs are reviewed and discussed in detail, to provide an important reference for researchers and engineers who would like to solve the statically indeterminate problem of overconstrained systems.
2 Methods for Force Analysis of Passive Overconstrained PMs
The schematic of a general passive overconstrained PM with n DOFs is shown in Fig. 1. Assume that the t supporting limbs supply m constraint forces/moments to the moving platform in total. For a passive overconstrained PM, there exists m > 6– n. Let A υ , B υ , C υ , …, denote the joints of the υth ( υ = 1, 2, …, t) supporting limb from the moving platform to the base in sequence. Assume that the friction in the kinematic joints is ignored, and the stiffness of the moving platform is much greater than that of the supporting limbs.
Schematic of a general passive overconstrained PM
Owing to the existence of redundant constraints, the force and moment equilibrium equations of a passive overconstrained PM are insufficient to determine all the driving forces/torques and constraint forces/moments. Hence, a certain number of supplementary equations are required. The typical methods for force analysis of passive overconstrained PMs can be divided into six categories.
2.1 Traditional Method
Main ideas: The force and moment equilibrium equations of all movable bodies are established based on the Newton–Euler formulation in sequence. Then, a certain number of compatibility equations of deformation are supplemented to obtain a set of complete and solvable equations. Thus, the driving forces/torques and constraint forces/moments can be solved by combining the force and moment equilibrium equations and the compatibility equations of deformation [ 10 ], which is explained briefly in the following paragraphs.
Based on the Newton–Euler formulation the force and moment equilibrium equations of the moving platform of a passive overconstrained PM can be established as
$$\left\{ \begin{array}{l} {\varvec{F}} + \sum\limits_{\upsilon = 1}^{t} {{}_{o\upsilon}^{O}{\varvec{R}}\,{\varvec{f}}_{\upsilon}} + m_{O}\,{}_{g}^{O}{\varvec{R}}\,{\varvec{g}} = {\varvec{h}}_{O}, \hfill \\ {\varvec{M}} + \sum\limits_{\upsilon = 1}^{t} {\left( {}_{o\upsilon}^{O}{\varvec{R}}\,{\varvec{t}}_{\upsilon} + {\varvec{r}}_{O\upsilon} \times {}_{o\upsilon}^{O}{\varvec{R}}\,{\varvec{f}}_{\upsilon} \right)} = {\varvec{n}}_{O}, \hfill \\ \end{array} \right.$$
where F and M denote the three-dimensional external force and moment vectors exerted on the moving platform expressed in the coordinate system { O} attached at the moving platform, respectively, f υ ( υ = 1, 2, …, t) and t υ represent the three-dimensional reaction force and moment vectors of joint A υ connecting the moving platform and the υth limb, respectively, which are expressed in the local coordinate system { o υ } of the υ-th limb, \({}_{o\upsilon }^{O} {\varvec{R}}\) is the rotational transformation matrix of { o υ } with respect to { O}, g is the gravity vector expressed in the global coordinate system, \({}_{g}^{O} {\varvec{R}}\) is the rotational transformation matrix of the global system with respect to { O}, m O is the mass of the moving platform, r Oυ is the position vector from origin O to the center of joint A υ expressed in { O}, and h O and n O denote the inertia force and moment vectors of the moving platform expressed in { O}, respectively.
The force and moment equilibrium equations of the link A υ B υ close to the moving platform in the υth limb can be built as
$$\left\{ \begin{array}{l} {\varvec{f}}_{B\upsilon } - {\varvec{f}}_{\upsilon } + m_{\upsilon 1} {}_{g}^{o\upsilon } {\varvec{Rg}} = {\varvec{h}}_{\upsilon 1} , \hfill \\ {\varvec{t}}_{B\upsilon } - {\varvec{t}}_{\upsilon } + {\varvec{r}}_{oB} \times {\varvec{f}}_{B\upsilon } - {\varvec{r}}_{oA} \times {\varvec{f}}_{\upsilon } = {\varvec{n}}_{\upsilon 1} , \hfill \\ \end{array} \right.$$
where f Bυ and t Bυ represent the three-dimensional reaction force and moment vectors of joint B υ , respectively, \({}_{g}^{o\upsilon } {\varvec{R}}\) is the rotational transformation matrix of the global system with respect to { o υ }, m υ1 is the mass of the link A υ B υ , r oA and r oB are the position vectors from the origin o υ to the centers of the joints A υ and B υ , respectively, and h υ1 and n υ1 denote the inertia force and moment vectors of the link A υ B υ , respectively. f Bυ , t Bυ , r oA , r oB , h υ1 and n υ1 are expressed in the local coordinate system { o υ }.
Similarly, the force and moment equilibrium equations of other links of the t limbs can be formulated. It should be noted that, for different types of joints, the number of unknown reactions is different, for example, one of the three reaction moments of a rotational joint (R) is zero, while for a translational joint (P), one of the three reaction forces is zero.
Assuming that the moving platform is rigid, the deformations of supporting limbs have to be compatible with each other to satisfy the geometric constraints. Hence, the compatibility equations of the deformations generated in the axes of redundant constraint forces and moments can be expressed as [ 10 ]
$$\left\{ \begin{array}{l} \delta_{u,\upsilon } = \delta_{u,\upsilon + 1} , \hfill \\ \psi_{v,\upsilon } = \psi_{v,\upsilon + 1} , \hfill \\ \end{array} \right.$$
where δ u,υ and δ u,υ+1 denote the linear deformations generated at the ends of the υth and the ( υ + 1)th limbs in the axis of the uth redundant constraint force, respectively, and ψ v,υ and ψ v,υ+1 represent the angular deformations generated at the ends of the υth and ( υ + 1)th limbs in the axis of the vth redundant constraint moment, respectively.
Then all driving forces/torques and constraint forces/moments can be solved by combining Eqs. ( 1), ( 2), and ( 3).
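As a concrete illustration of how the equilibrium and compatibility equations combine, here is a minimal numpy sketch (our own, not from Ref. [ 10 ]); a two-spring toy stands in for a real overconstrained PM, whose rows would come from Eqs. ( 1)–( 3):

    import numpy as np

    def solve_reactions(A_eq, b_eq, A_comp, b_comp):
        # Stack Newton-Euler equilibrium rows (Eqs. (1)-(2)) and deformation
        # compatibility rows (Eq. (3)) into one linear system and solve it.
        A = np.vstack((A_eq, A_comp))
        b = np.concatenate((b_eq, b_comp))
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    # Toy statically indeterminate system: two parallel springs k1, k2 share
    # a load F; equilibrium: f1 + f2 = F, compatibility: f1/k1 = f2/k2.
    k1, k2, F = 2.0, 1.0, 9.0
    A_eq, b_eq = np.array([[1.0, 1.0]]), np.array([F])
    A_comp, b_comp = np.array([[1.0 / k1, -1.0 / k2]]), np.array([0.0])
    print(solve_reactions(A_eq, b_eq, A_comp, b_comp))   # -> [6. 3.]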
Discussion: This is a traditional method applicable to the statically indeterminate problem of general passive overconstrained PMs. However, it is computationally intensive because of the high-rank coefficient matrix of the simultaneous equations. Furthermore, it is difficult to obtain the explicit expressions of the solutions by this method.
2.2 Method Based on the Judgment of Constraint Jacobian Matrix
Main ideas: The judgments of the independent and dependent rows of the constraint Jacobian matrix are used to find which joint reactions of a mechanism with redundant constraints can be uniquely determined [ 17 , 49 – 52 ].
Generally, a kinematic joint imposes a certain number of constraints on the relative motion between the two bodies it connects. If a mechanism is described by N coordinates, the constraint conditions imposed by the bth kinematic joint can be expressed as
$${\varvec{\Phi}}^{b} \left( {\varvec{q}} \right) = {\varvec{\Phi}}^{b} \left( {q_{1} ,q_{2} , \cdots ,q_{N} } \right) = \varvec{0},$$
where q 1, q 2, …, q N denote the N coordinates.
Then the equations describing the μ constraints imposed by all the joints of the mechanism can be arranged as
$${\varvec{\Phi}}\left( {\varvec{q}} \right) = \left( {\begin{array}{*{20}c} {\varPhi_{1} \left( {\varvec{q}} \right)} \\ {\varPhi_{2} \left( {\varvec{q}} \right)} \\ \vdots \\ {\varPhi_{\mu } \left( {\varvec{q}} \right)} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\varPhi_{1} \left( {q_{1} ,q_{2} , \cdots ,q_{N} } \right)} \\ {\varPhi_{2} \left( {q_{1} ,q_{2} , \cdots ,q_{N} } \right)} \\ \vdots \\ {\varPhi_{\mu } \left( {q_{1} ,q_{2} , \cdots ,q_{N} } \right)} \\ \end{array} } \right) = \varvec{0}_{\mu \times 1}.$$
The constraint Jacobian matrix of the constraint equations can be obtained on the basis of Eq. ( 5):
$${\varvec{\Phi}}_{{\varvec{q}}} \left( {\varvec{q}} \right) = \left( {\begin{array}{*{20}c} {\frac{{\partial \varPhi_{1} }}{{\partial q_{1} }}} & {\frac{{\partial \varPhi_{1} }}{{\partial q_{2} }}} & \cdots & {\frac{{\partial \varPhi_{1} }}{{\partial q_{N} }}} \\ {\frac{{\partial \varPhi_{2} }}{{\partial q_{1} }}} & {\frac{{\partial \varPhi_{2} }}{{\partial q_{2} }}} & \cdots & {\frac{{\partial \varPhi_{2} }}{{\partial q_{N} }}} \\ \vdots & \vdots & {} & \vdots \\ {\frac{{\partial \varPhi_{\mu } }}{{\partial q_{1} }}} & {\frac{{\partial \varPhi_{\mu } }}{{\partial q_{2} }}} & \cdots & {\frac{{\partial \varPhi_{\mu } }}{{\partial q_{N} }}} \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} {\left( {{\varvec{\Phi}}_{1} } \right)_{{\varvec{q}}} } \\ {\left( {{\varvec{\Phi}}_{2} } \right)_{{\varvec{q}}} } \\ \vdots \\ {\left( {{\varvec{\Phi}}_{\mu } } \right)_{{\varvec{q}}} } \\ \end{array} } \right).$$
For a mechanism with redundant constraints, the rank of matrix \({\varvec{\Phi}}_{{\varvec{q}}} \left( {\varvec{q}} \right)\) must be less than μ. That is to say, one or more rows of \({\varvec{\Phi}}_{{\varvec{q}}} \left( {\varvec{q}} \right)\) can be expressed as a linear combination of other rows. The independent rows of \({\varvec{\Phi}}_{q} \left( {\varvec{q}} \right)\) can be identified by a variety of mathematical methods [ 17 , 49 – 52 ], such as the concept of direct sum, singular value decomposition, and QR decomposition. For an overconstrained rigid-body mechanism, the reaction forces/moments corresponding to the independent constraint equations are unique, even though not all joint reactions can be uniquely determined. In order to obtain the unique solutions to all joint reactions, it is necessary to consider the flexibility of passive overconstrained mechanisms. Wojtyra et al. [ 52 ], discussed which parts should be modeled as flexible bodies to guarantee unique joint reactions in overconstrained mechanisms.
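A small numerical sketch of that row-dependence test follows (assuming numpy; Phi_q stands for the constraint Jacobian of Eq. ( 6) evaluated at one configuration, and the function name is ours):

    import numpy as np

    def dependent_rows(Phi_q, tol=1e-10):
        # Indices of Jacobian rows that are linear combinations of earlier
        # ones; reactions tied to the remaining rows are uniquely solvable.
        dep, basis = [], np.zeros((0, Phi_q.shape[1]))
        for i, row in enumerate(Phi_q):
            trial = np.vstack((basis, row))
            if np.linalg.matrix_rank(trial, tol=tol) == basis.shape[0]:
                dep.append(i)      # row adds no new constraint -> redundant
            else:
                basis = trial
        return dep

    # Example: third row = sum of the first two -> reported as dependent
    Phi_q = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
    print(dependent_rows(Phi_q))   # -> [2]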
Discussion: Based on the constraint Jacobian matrix of a passive overconstrained mechanism, several methods were proposed to isolate the joint reactions that can be uniquely determined. Those methods were proposed from a purely mathematical perspective, i.e., the corresponding physical interpretation was not considered. Besides, the analytical expressions of joint reactions cannot be obtained by this kind of method.
2.3 Method under the Condition of Decoupled Deformations
Main ideas: Assuming that the υth ( υ = 1, 2, …, t) supporting limb of a passive overconstrained PM contains N υ driving forces/torques and constraint forces/moments in total, as shown in Fig. 1, the elastic deformations generated at the end of the υth limb by the N υ driving forces/torques and constraint forces/moments are considered to be decoupled from each other [ 53 – 56 ]. In this case, the stiffness of each supporting limb can be expressed as a scalar quantity or a diagonal matrix. The steps of this method can be summarized as follows:
The force and moment equilibrium equations of the moving platform of a passive overconstrained PM can be formulated as
$$\left( {\not\!{\varvec{S}}_{F} } \right)_{6 \times 1} = \varvec{G}_{{6 \times \left( {n + m} \right)}} \varvec{f}_{{\left( {n + m} \right) \times 1}},$$
where \({\not\!{\varvec{S}}}_{F}\) denotes the six-dimensional external load imposed on the moving platform, G is the coefficient matrix mapping the driving forces/torques and constraint forces/moments to the external loads, and f is the vector composed of the magnitudes of the n driving forces/torques and m constraint forces/moments.
Let k j be the stiffness relating the jth driving force/torque or constraint force/moment to the elastic deformation it generates at the end of the corresponding limb. There exists
$$f_{j} = k_{j} \delta_{j} ,j = 1,2, \cdots ,n + m,$$
where f j denotes the magnitude of the jth driving force/torque or constraint force/moment, and δ j represents the elastic deformation generated at the end of the corresponding limb by f j .
Rearranging Eq. ( 8) in the form of matrix yields
$${\varvec{f}}_{{\left( {n + m} \right) \times 1}} = {\varvec{K}}_{{\left( {n + m} \right) \times \left( {n + m} \right)}} {\varvec{\updelta}}_{{\left( {n + m} \right) \times 1}},$$
$$\begin{aligned} {\varvec{f}}_{{\left( {n + m} \right) \times 1}} = \left( {\begin{array}{*{20}c} {f_{1} } & {f_{2} } & \cdots & {f_{n + m} } \\ \end{array} } \right)^{\text{T}} , \hfill \\ {\varvec{K}}_{{\left( {n + m} \right) \times \left( {n + m} \right)}} = {\text{diag}}\left( {\begin{array}{*{20}c} {k_{1} } & {k_{2} } & \cdots & {k_{n + m} } \\ \end{array} } \right), \hfill \\ {\varvec{\updelta}}_{{\left( {n + m} \right) \times 1}} = \left( {\begin{array}{*{20}c} {\delta_{1} } & {\delta_{2} } & \cdots & {\delta_{n + m} } \\ \end{array} } \right)^{\text{T}} . \hfill \\ \end{aligned}$$
The relationship between the elastic deformations generated at the end of supporting limbs and the six-dimensional micro-displacement X of the moving platform as the result of external loads can be derived as
$$\delta_{j} = {\varvec{G}}_{:,j}^{\text{T}} {\varvec{X}}.$$
Rearranging Eq. ( 10) in the form of matrix leads to
$${\varvec{\updelta}} = {\varvec{G}}^{\text{T}} {\varvec{X}}.$$
From Eqs. ( 7) to ( 11) we can get
$${\varvec{f}} = {\varvec{KG}}^{\text{T}} \left( {{\varvec{GKG}}^{\text{T}} } \right)^{ - 1} {\not\!{\varvec{S}}}_{{\varvec{F}}},$$
from which the n driving forces/torques and m constraint forces/moments can be obtained.
It should be noted that the driving force/torque or constraint force/moment along an arbitrary direction can be decomposed along or perpendicular to the axis of the corresponding limb.
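Numerically, Eq. ( 12) is a one-liner once G and the diagonal stiffness matrix K are assembled; here is a numpy sketch with illustrative numbers (the same two-spring toy as in Section 2.1, to show the methods agree):

    import numpy as np

    def overconstrained_forces(G, K, S_F):
        # f = K G^T (G K G^T)^(-1) S_F  -- Eq. (12)
        return K @ G.T @ np.linalg.solve(G @ K @ G.T, S_F)

    # Two collinear limbs sharing one external force: load splits by stiffness
    G = np.array([[1.0, 1.0]])     # both limb forces act along the load
    K = np.diag([2.0, 1.0])        # limb stiffnesses k1, k2
    print(overconstrained_forces(G, K, np.array([9.0])))   # -> [6. 3.]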
Discussion: This method gives the analytical expression of the solutions of the driving forces/torques and the constraint forces/moments of passive overconstrained PMs. However, the coupled deformations generated at the ends of the supporting limbs by the driving forces/torques and constraint forces/moments are ignored.
2.4 Method Based on Resultant Constraint Wrenches
Main ideas: The resultant forces/moments of the collinear constraint forces or coaxial constraint moments are dealt with first. Then, the constraint forces/moments can be obtained by distributing the resultant forces/moments according to the stiffness proportion of the supporting limbs with collinear constraint forces or coaxial constraint moments [ 57 ].
Assume that a passive overconstrained PM has p collinear constraint forces and q coaxial constraint moments, and the remaining ( m– p– q) constraints are independent. Based on the screw theory, the force and moment equilibrium equations between the actuation wrenches, the resultant constraint wrench of the p collinear constraint forces and that of the q coaxial constraint moments, and the remaining constraint wrenches can be built as
$$\sum\limits_{{i = 1}}^{n} {w_{i} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{a,}}i}} } + \sum\limits_{{k = 1}}^{{m - p - q}} {f_{k} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}k}} } + f_{p} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}F}} + f_{q} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}M}} = \left( {{\varvec{G}}_{b} } \right)_{{6 \times 6}} \left( {{\varvec{f}}_{b} } \right)_{{6 \times 1}} = \left( {{\not\!{\varvec{S}}}_{F} } \right)_{{6 \times 1}},$$
where
$$\begin{aligned} {\varvec{f}}_{b} & = \left( {\begin{array}{*{20}c} {w_{1} } & \cdots & {w_{n} } & {f_{1} } & \cdots & {f_{m - p - q} } & {f_{p} } & {f_{q} } \\ \end{array} } \right)^{\text{T}} , \\ {\varvec{G}}_{b} & = \left( {\begin{array}{*{20}c} {{\not\!{\hat{\varvec{S}}}}_{\text{a,1}} } & \cdots & {{\not\!{{\hat{\varvec{S}}}}}_{{{\text{a,}}n}} } & {{\not\!{{\hat{\varvec{S}}}}}_{{{\text{r,}}1}} } & \cdots & {{\not\!{{\hat{\varvec{S}}}}}_{{{\text{r,}}\left( {m - p - q} \right)}} } & {{\not\!{{\hat{\varvec{S}}}}}_{{{\text{r,}}F}} } & {{\not\!{{\hat{\varvec{S}}}}}_{{{\text{r,}}M}} } \\ \end{array} } \right), \end{aligned}$$
and \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{a,}}i}}\) ( i = 1, 2, …, n), \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}k}}\) ( k = 1, 2, …, m–p–q), \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}F}}\) and \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}M}}\) denote the unit screws of the ith actuation wrench, the kth independent constraint wrench, the resultant constraint wrench and the resultant constraint couple, respectively; w i , f k , f p and f q represent the magnitudes of the ith actuation wrench, the kth independent constraint wrench, the resultant constraint wrench and the resultant constraint couple, respectively. All screws are expressed in the global system.
If G b is non-singular, the magnitudes of the actuation wrenches, the independent constraint wrenches, the resultant constraint wrench, and the resultant constraint couple can be solved from Eq. ( 13) as
$${\varvec{f}}_{b} = {\varvec{G}}_{b}^{ - 1} {\not\!{\varvec{S}}}_{F}.$$
According to the hypothesis given in Ref. [ 57 ], we assume that the stiffness proportion of the ( γ + 1)th and the γth supporting limbs with collinear constraint forces is η γ , and that of the ( λ + 1)th and the λth supporting limbs with coaxial constraint moments is η λ . In view that the constraint forces and moments are in direct proportion to the stiffness of the corresponding limbs, the complementary equations can be given as
$$\left\{ \begin{aligned} f_{p,\gamma + 1} = \eta_{\gamma } f_{p,\gamma } \left( {\gamma = 1,2, \cdots ,p - 1} \right), \hfill \\ f_{q,\lambda + 1} = \eta_{\lambda } f_{q,\lambda } \left( {\lambda = 1,2, \cdots ,q - 1} \right), \hfill \\ \end{aligned} \right.$$
where f p,γ and f q,λ are the magnitudes of the γth collinear constraint force and the λth collinear constraint moment, respectively.
The magnitudes of the resultant constraint forces and moments have been solved from Eq. ( 14) as
$$\left\{ \begin{aligned} f_{p} = \left( {{\varvec{G}}_{b} } \right)_{5,:}^{ - 1} {\not\!{\varvec{S}}}_{F} = \sum\limits_{\gamma = 1}^{p} {f_{p,\gamma } } , \hfill \\ f_{q} = \left( {{\varvec{G}}_{b} } \right)_{6,:}^{ - 1} {\not\!{\varvec{S}}}_{F} = \sum\limits_{\lambda = 1}^{q} {f_{q,\lambda } } . \hfill \\ \end{aligned} \right.$$
Combining Eqs. ( 15) and ( 16), the magnitudes of each collinear constraint force and coaxial constraint moment can be solved. Thus, the reactions of the joints connecting the moving platform and supporting limbs can be easily determined based on the relationship between them and the actuation and constraint wrenches. The reactions of other joints can be solved by establishing the force and moment equilibrium equations of the corresponding link one by one.
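The distribution step of Eqs. ( 15) and ( 16) simply splits each resultant magnitude in proportion to limb stiffness; a minimal Python sketch (our own; the stiffness values are placeholders):

    def distribute(resultant, stiffnesses):
        # Split a resultant constraint force/moment among limbs with
        # collinear/coaxial constraint wrenches, in proportion to stiffness.
        total = sum(stiffnesses)
        return [resultant * k / total for k in stiffnesses]

    print(distribute(9.0, [2.0, 1.0]))   # -> [6.0, 3.0]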
Discussion: In general, a kinematic joint possesses more than one constraint reaction, for example, there exist 5, 4, 4 and 3 constraint reactions for an R joint, universal joint (U), cylindrical joint (C), and spherical joint (S), respectively. If we adopt traditional methods to build the force and moment equilibrium equations of all movable bodies and complementary equations, the rank of the coefficient matrix of those equations will be very large. This method, which is based on resultant constraint wrenches, can avoid the high-rank matrix, reduce a certain number of unknowns, and ensure that the number of simultaneous equilibrium equations is not more than six each time. However, it is only suitable for solving the driving forces/torques and constraint forces/moments of passive overconstrained PMs with collinear constraint forces or coaxial constraint moments.
2.5 Method Based on the Stiffness Matrix of Limb's Overconstraint or Constraint Wrenches
Main ideas: Based on the characteristics of the elastic deformations generated at the ends of supporting limbs, the passive overconstrained PMs are classified into two classes: the limb stiffness decoupled and coupled overconstrained PMs. Stiffness matrices of the limb's overconstraint and constraint wrenches that correspond to the two types of mechanisms are defined, which help to establish the compatibility equations about the deformations generated at the ends of supporting limbs and the micro-displacements of the moving platform [ 58 ]. Then, the driving forces/torques and constraint forces/moments of the two kinds of overconstrained PMs are solved by combining the force and moment equilibrium equations and the compatibility equations of deformation.
A brief review of the methods for force analysis of the limb stiffness decoupled and coupled overconstrained PMs follows.
For a limb stiffness decoupled overconstrained PM, the force and moment equilibrium equations of the moving platform can be expressed as
$${\not\!{\varvec{S}}}_{{\varvec{F}}} = w_{{{\text{a}},1}} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{a}},1}} + \cdots w_{{{\text{a}},n}} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{a}},n}} + f_{\text{r,1}} {\not\!{\hat{{\varvec{S}}}}}_{\text{r,1}} + \cdots f_{{{\text{r,}}l}} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}l}} + f_{{{\text{r}},1}}^{\text{e}} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r}},1}}^{\text{e}} + \cdots + f_{{{\text{r}},d}}^{\text{e}} {\not\!{\hat{{\varvec{S}}}}}_{{{\text{r}},d}}^{\text{e}} = {\varvec{G}}_{c} {\varvec{f}}_{c},$$
$$\begin{aligned} {\varvec{G}}_{c} & = \left( {\begin{array}{*{20}c} {{\not\!{\hat{{\varvec{S}}}}}_{{{\text{a}},1}} } & \cdots & {{\not\!{\hat{{\varvec{S}}}}}_{{{\text{a}},n}} } & {{\not\!{\hat{{\varvec{S}}}}}_{\text{r,1}} } & \cdots & {{\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}l}} } & {{\not\!{\hat{{\varvec{S}}}}}_{{{\text{r}},1}}^{\text{e}} } & \cdots & {{\not\!{\hat{{\varvec{S}}}}}_{{{\text{r}},d}}^{\text{e}} } \\ \end{array} } \right), \\ {\varvec{f}}_{c} & = \left( {\begin{array}{*{20}c} {{\varvec{w}}_{\text{a}}^{\text{T}} } & {{\varvec{f}}_{\text{r,non}}^{\text{T}} } & {{\varvec{f}}_{\text{e}}^{\text{T}} } \\ \end{array} } \right)^{\text{T}} , \\ {\varvec{w}}_{\text{a}} & = \left( {\begin{array}{*{20}c} {w_{{{\text{a}},1}} } & \cdots & {w_{{{\text{a}},n}} } \\ \end{array} } \right)^{\text{T}} , \\ {\varvec{f}}_{\text{r,non}} & = \left( {\begin{array}{*{20}c} {f_{\text{r,1}} } & \cdots & {f_{{{\text{r,}}l}} } \\ \end{array} } \right)^{\text{T}} , \\ {\varvec{f}}_{\text{e}} & = \left( {\begin{array}{*{20}c} {f_{{{\text{r}},1}}^{\text{e}} } & \cdots & {f_{{{\text{r}},d}}^{\text{e}} } \\ \end{array} } \right)^{\text{T}} , \\ \end{aligned}$$
\({\not\!{\hat{{\varvec{S}}}}}_{{{\text{a}},i}}\)( i = 1, 2, …, n), \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{r,}}\varepsilon }}\)( ε = 1, 2, …, l), and \({\not\!{\hat{{\varvec{S}}}}}_{{{\text{r}},\sigma }}^{{\text{e}}}\) ( σ = 1, 2, …, d) represent the unit screws of the ith actuation wrench, εth non-overconstraint wrench, and σth equivalent constraint wrench of the ( m– l) overconstraint wrenches, respectively. w a,i , f r,ε , and \(f_{{{\text{r,}}\sigma }}^{\text{e}}\) are the magnitudes of the ith actuation wrench, εth non-overconstraint wrench, and σth equivalent constraint wrench, respectively. Details about the non-overconstraint wrenches, overconstraint wrenches, and equivalent ones of overconstraint wrenches are given in Ref. [ 58 ].
Then the magnitudes of the actuation wrenches, the non-overconstraint wrenches, and the equivalent constraint wrenches can be solved from Eq. ( 17) as
$${\varvec{f}}_{c} = \left( {\begin{array}{*{20}c} {{\varvec{w}}_{\text{a}}^{\text{T}} } & {{\varvec{f}}_{\text{r,non}}^{\text{T}} } & {{\varvec{f}}_{\text{e}}^{\text{T}} } \\ \end{array} } \right)^{\text{T}} = {\varvec{G}}_{c}^{ - 1} {\not\!{\varvec{S}}}_{{\varvec{F}}}.$$
Assume that the ( m– l) overconstraint wrenches are distributed in ς supporting limbs. The relationship between the magnitudes of the equivalent constraint wrenches and those of the overconstraint wrenches can be expressed as
$${\varvec{f}}_{\text{e}} = \left( {\begin{array}{*{20}c} {f_{{{\text{r}},1}}^{\text{e}} } & \cdots & {f_{{{\text{r}},d}}^{\text{e}} } \\ \end{array} } \right)^{\text{T}} = {\varvec{J}}_{1} {\varvec{f}}_{\text{over}}^{1} + {\varvec{J}}_{2} {\varvec{f}}_{\text{over}}^{2} + \cdots + {\varvec{J}}_{\varsigma } {\varvec{f}}_{\text{over}}^{\varsigma }.$$
According to the definition of the stiffness matrix of the supporting limb's overconstraint wrenches [ 58 ], we have
$${\varvec{f}}_{\text{over}}^{s} = {\varvec{K}}_{s} {\varvec{\updelta}}_{s} ,s = 1,2, \cdots ,\varsigma.$$
The elastic deformations generated at the end of the sth supporting limb in the axes of overconstraint wrenches can be formulated as
$${\varvec{\updelta}}_{s} = {\varvec{J}}_{s}^{\text{T}} {\varvec{X}}_{\text{e}},$$
where X e is the vector composed of the elastic deformations in the axes of equivalent constraint wrenches.
Then, the magnitudes of the overconstraint wrenches can be solved by combining Eqs. ( 19), ( 20) and ( 21) as
$${\varvec{f}}_{\text{over}}^{s} = {\varvec{K}}_{s} {\varvec{J}}_{s}^{\text{T}} \left( {{\varvec{J}}_{1} {\varvec{K}}_{1} {\varvec{J}}_{1}^{\text{T}} + {\varvec{J}}_{2} {\varvec{K}}_{2} {\varvec{J}}_{2}^{\text{T}} + \cdots + {\varvec{J}}_{\varsigma } {\varvec{K}}_{\varsigma } {\varvec{J}}_{\varsigma }^{\text{T}} } \right)^{ - 1} {\varvec{f}}_{\text{e}}.$$
So far, Eqs. ( 18) and ( 22) give the analytical expressions of the magnitudes of all actuation wrenches, non-overconstraint wrenches, and overconstraint wrenches.
For a limb stiffness coupled overconstrained PM, assuming that the υth supporting limb supplies N υ constraint wrenches (including actuation wrenches) to the moving platform, the force and moment equilibrium equations of the moving platform can be expressed as
$${\not\!{\varvec{S}}}_{{\varvec{F}}} = f_{ 1}^{1} {\not\!{\hat{{\varvec{S}}}}}_{1}^{1} + f_{ 2}^{1} {\not\!{\hat{{\varvec{S}}}}}_{2}^{1} + \cdots f_{N 1}^{1} {\not\!{\hat{{\varvec{S}}}}}_{N 1}^{1} + f_{ 1}^{2} {\not\!{\hat{{\varvec{S}}}}}_{ 1}^{2} + f_{ 2}^{2} {\not\!{\hat{{\varvec{S}}}}}_{ 2}^{2} + \cdots f_{N2}^{2} {\not\!{\hat{{\varvec{S}}}}}_{N2}^{2} + \cdots f_{1}^{t} {\not\!{\hat{{\varvec{S}}}}}_{1}^{t} + f_{2}^{t} {\not\!{\hat{{\varvec{S}}}}}_{2}^{t} + \cdots f_{Nt}^{t} {\not\!{\hat{{\varvec{S}}}}}_{Nt}^{t} = {\varvec{G}}_{d} {\varvec{f}}_{d},$$
$$\begin{aligned} \;{\varvec{G}}_{d} & = \left( {\begin{array}{*{20}c} {{\varvec{G}}_{1} } & {{\varvec{G}}_{2} } & \cdots & {{\varvec{G}}_{t} } \\ \end{array} } \right), \\ {\varvec{G}}_{\upsilon } & = \left( {\begin{array}{*{20}c} {{\not\!{\hat{{\varvec{S}}}}}_{ 1}^{\upsilon } } & {{\not\!{\hat{{\varvec{S}}}}}_{2}^{\upsilon } } & \cdots & {{\not\!{\hat{{\varvec{S}}}}}_{N\upsilon }^{\upsilon } } \\ \end{array} } \right),\upsilon = 1,{ 2}, \, \ldots ,t, \\ {\varvec{f}}_{d} & = \left( {\begin{array}{*{20}c} {{\varvec{f}}_{1}^{\text{T}} } & {{\varvec{f}}_{2}^{\text{T}} } & \cdots & {{\varvec{f}}_{t}^{\text{T}} } \\ \end{array} } \right)^{\text{T}} , \\ {\varvec{f}}_{\upsilon } & = \left( {\begin{array}{*{20}c} {f_{1}^{\upsilon } } & {f_{2}^{\upsilon } } & \cdots & {f_{N\upsilon }^{\upsilon } } \\ \end{array} } \right)^{\text{T}} . \\ \end{aligned}$$
According to the definition of the stiffness matrix of the supporting limb's constraint wrenches [ 58 ], there exists
$${\varvec{f}}_{\upsilon } = {\varvec{K}}_{\upsilon } {\varvec{\updelta}}_{\upsilon }.$$
The compatibility equation relating the elastic deformations generated at the end of each limb in the axes of its constraint wrenches to the six-dimensional micro-displacement of the moving platform is
$${\varvec{\updelta}}_{\upsilon } = {\varvec{G}}_{\upsilon }^{\text{T}} {\varvec{X}}.$$
Thus, the magnitudes of all the constraint wrenches (including the actuation wrenches) can be solved by combining Eqs. ( 23), ( 24) and ( 25) as
$${\varvec{f}}_{\upsilon } = {\varvec{K}}_{\upsilon } {\varvec{G}}_{\upsilon }^{\text{T}} \left( {{\varvec{G}}_{1} {\varvec{K}}_{1} {\varvec{G}}_{1}^{\text{T}} + {\varvec{G}}_{2} {\varvec{K}}_{2} {\varvec{G}}_{2}^{\text{T}} + \cdots + {\varvec{G}}_{t} {\varvec{K}}_{t} {\varvec{G}}_{t}^{\text{T}} } \right)^{ - 1} {\not\!{\varvec{S}}}_{F},$$
which is just the general expression of the magnitudes of all actuation and constraint wrenches.
Then, the actual reactions of all kinematic joints can be easily obtained according to the relationship between them and the magnitudes of the actuation and constraint wrenches shown in Ref. [ 58 ].
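Eq. ( 26) evaluates the same way, with per-limb stiffness matrices assembled blockwise; a numpy sketch (our own; G_limbs and K_limbs would come from the screw systems and limb models of a concrete PM, and a degenerate one-dimensional case is used here only for illustration):

    import numpy as np

    def coupled_limb_forces(G_limbs, K_limbs, S_F):
        # Eq. (26): f_v = K_v G_v^T (sum_v G_v K_v G_v^T)^(-1) S_F, per limb
        C = sum(G @ K @ G.T for G, K in zip(G_limbs, K_limbs))
        X = np.linalg.solve(C, S_F)      # micro-displacement of the platform
        return [K @ G.T @ X for G, K in zip(G_limbs, K_limbs)]

    # Degenerate illustration: two limbs, one constraint wrench each
    G_limbs = [np.array([[1.0]]), np.array([[1.0]])]
    K_limbs = [np.array([[2.0]]), np.array([[1.0]])]
    print(coupled_limb_forces(G_limbs, K_limbs, np.array([9.0])))  # [6.], [3.]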
Discussion: It can be seen from Eqs. ( 18), ( 22), and ( 26) that, for the statically indeterminate problem of the limb stiffness decoupled overconstrained PMs, only the elastic deformations generated at the end of supporting limbs in the axes of overconstraint wrenches need to be considered, while for that of the limb stiffness coupled overconstrained PMs, the elastic deformations generated at the end of supporting limbs in the axes of all constraint wrenches, including actuation wrenches, should be taken into account. This method has clear steps and a low computational burden, and it gives explicit analytical expressions for the solutions to the statically indeterminate problem of general passive overconstrained PMs.
2.6 Weighted Generalized Inverse Method
Main ideas: A simple method is proposed in Ref. [ 59 ] by resorting to the definition of the weighted generalized inverse of a non-square matrix [ 77 ], which is suitable for solving the statically indeterminate problem of both the limb stiffness decoupled and coupled passive overconstrained PMs.
Based on the weighted generalized inverse of the matrix mapping the driving forces/torques and constraint forces/moments to the external loads, the solutions of the statically indeterminate problem of a general passive overconstrained PM can be derived as [ 59 ]
$${\varvec{f}} = {\varvec{G}}_{{\varvec{B}}}^{ + } {\not\!{\varvec{S}}}_{F} = {\varvec{B}}^{ - 1} {\varvec{G}}^{\text{T}} \left( {{\varvec{GB}}^{ - 1} {\varvec{G}}^{\text{T}} } \right)^{ - 1} {\not\!{\varvec{S}}}_{F},$$
where the weighted matrix B is the inverse matrix of a block diagonal matrix composed of the stiffness matrices of each limb's constraint wrenches.
In the case that each supporting limb only supplies one driving force/torque or constraint force/moment, the stiffness of each limb is just a scalar quantity, and the weighted matrix B becomes a diagonal matrix, which is consistent with the work done in Refs. [ 53 , 54 ].
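Under the same hypothetical data as the previous sketch, Eq. (27) can be checked numerically: with the weight B taken as the inverse of the block-diagonal stiffness matrix, the weighted generalized inverse reproduces the limb-by-limb solution of Eq. (26).

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
sizes = (3, 2, 4)
G = [rng.standard_normal((6, n)) for n in sizes]        # hypothetical wrench bases
K = [np.diag(rng.uniform(1e5, 1e6, n)) for n in sizes]  # hypothetical stiffnesses
S_F = rng.standard_normal(6)

G_d = np.hstack(G)                     # G = (G_1 G_2 G_3), here 6 x 9
K_d = block_diag(*K)                   # blockdiag(K_1, ..., K_t)

# The weight of Eq. (27) is B = K_d^{-1}, so B^{-1} = K_d enters the formula:
# f = B^{-1} G^T (G B^{-1} G^T)^{-1} S_F
f = K_d @ G_d.T @ np.linalg.solve(G_d @ K_d @ G_d.T, S_F)

# Identical to stacking the limb-wise solution of Eq. (26)
C = G_d @ K_d @ G_d.T
f_26 = np.concatenate([Kv @ Gv.T @ np.linalg.solve(C, S_F)
                       for Gv, Kv in zip(G, K)])
assert np.allclose(f, f_26)
```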
Discussion: The method based on the weighted generalized inverse supplies a simpler and more effective way to solve the statically indeterminate problem of passive overconstrained PMs. Moreover, it can be seen from Eq. ( 27) that the elements of the weighted matrix B are the stiffness matrices of the limbs' constraint wrenches, which shows that the solutions of the driving forces/torques and constraint forces/moments of passive overconstrained PMs are unique.
In addition to the above-mentioned methods, there are other approaches to handling the redundant constraints of a passive overconstrained PM, for example, the pseudo-inverse method [60] and the augmented Lagrangian formulation [61, 62]. Furthermore, Zahariev et al. [63] proposed a method for the dynamic analysis of multibody systems in overconstrained and singular configurations, in which some closed chains are transformed into open branches and the missing links are substituted by stiff forces.
3 Methods for Force Analysis of Active Overconstrained PMs
The schematic of a general active overconstrained PM with n DOFs and ζ actuated joints is shown in Fig. 2, where ζ > n. Assume that the active overconstrained PM shown in Fig. 2 contains t supporting limbs and that the υth supporting limb contains \(l_{\upsilon }\) links. Without loss of generality, each limb may possess more than one actuator.
Fig. 2 Schematic of a general active overconstrained PM
As there are theoretically infinite sets of solutions to the statically indeterminate problem of active overconstrained PMs, the key to the force analysis of this kind of overconstrained PMs is to find the optimal distribution of all driving forces/torques. The typical methods for solving this problem fall into four categories.
3.1 Pseudo-inverse Method
The force and moment equilibrium equations of an active overconstrained PM can be written in the form [ 40 , 42 , 72 ]
$${\varvec{G}}_{\text{act}} {\varvec{f}}_{\text{act}} = {\varvec{F}}_{\text{extr}},$$
where \({\varvec{f}}_{\text{act}}\) consists of the ζ driving forces/torques, \({\varvec{G}}_{\text{act}}\) is the coefficient matrix, and \({\varvec{F}}_{\text{extr}}\) is the generalized external force vector composed of the inertia forces/moments, weights, and external loads acting on the components of the mechanism.
As the matrix \({\varvec{G}}_{\text{act}}\) is singular, the pseudo-inverse of \({\varvec{G}}_{\text{act}}\) is used in some situations [27, 30, 72] to find the minimum-norm solution for \({\varvec{f}}_{\text{act}}\):
$${\varvec{f}}_{\text{act}} = {\varvec{G}}_{\text{act}}^{ + } {\varvec{F}}_{\text{extr}}.$$
In this way, the minimum driving forces/torques of an active overconstrained PM can be obtained.
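A minimal sketch of Eq. (29), assuming a randomly generated coefficient matrix for a hypothetical PM with 6 DOFs and 8 actuators; numpy.linalg.pinv computes the Moore-Penrose pseudo-inverse, which yields the minimum-norm solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n, zeta = 6, 8                          # 6 DOFs, 8 actuated joints (zeta > n)
G_act = rng.standard_normal((n, zeta))  # hypothetical coefficient matrix
F_extr = rng.standard_normal(n)         # hypothetical generalized external force

# Eq. (29): minimum-norm driving forces/torques via the Moore-Penrose inverse
f_act = np.linalg.pinv(G_act) @ F_extr

assert np.allclose(G_act @ f_act, F_extr)  # equilibrium, Eq. (28), still holds
```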
3.2 Weighted Coefficient Method
The distribution of the driving forces/torques of an active overconstrained PM under different optimization goals [65–69] can be viewed as a constrained optimization problem, i.e., the minimizer of an objective function under the constraints of force and moment balance needs to be found. The weighted coefficient method proposed by Huang et al. [73] and Zhao et al. [78] can achieve a variety of optimization goals. It is taken as representative of this class of methods and is briefly reviewed in the following paragraphs.
Select n joints among the ζ actuated joints as the generalized coordinates. The driving forces/torques of the remaining (ζ − n) actuated joints, the external loads, the inertia forces/moments, and the gravity applied to the moving platform and to each supporting link can all be expressed with respect to the generalized coordinates. Thus, the dynamic equilibrium equations of an active overconstrained PM can be rearranged as [73, 78]
$${\varvec{\uptau}}_{\text{non}} = - \left( {\sum\limits_{\upsilon = 1}^{t} {\sum\limits_{h = 1}^{{l_{\upsilon } }} {\left( {{\varvec{G}}_{h}^{\text{non}} } \right)^{\upsilon } {\not\!{\varvec{S}}}_{F,h}^{\upsilon } } + {\varvec{G}}_{F}^{\text{non}} {\not\!{\varvec{S}}}_{F} + {\varvec{G}}_{\text{over}}^{\text{non}} {\varvec{\uptau}}_{\text{over}} } } \right),$$
where \({\varvec{\uptau}}_{\text{non}}\) is composed of the driving forces/torques of the generalized joints, i.e., the n non-redundant driving forces/torques, and \({\varvec{\uptau}}_{\text{over}}\) consists of the driving forces/torques of the non-generalized joints, i.e., the remaining (ζ − n) redundant driving forces/torques. \({\not\!{\varvec{S}}}_{F,h}^{\upsilon }\) and \({\not\!{\varvec{S}}}_{F}\) denote the resultant force vectors of the external loads, gravity, and inertia forces/moments acting on the hth link of the υth limb and on the moving platform, respectively; they are expressed in the corresponding local coordinates. \(\left( {{\varvec{G}}_{h}^{\text{non}} } \right)^{\upsilon }\), \({\varvec{G}}_{F}^{\text{non}}\), and \({\varvec{G}}_{\text{over}}^{\text{non}}\) represent the transformation matrices of \({\not\!{\varvec{S}}}_{F,h}^{\upsilon }\), \({\not\!{\varvec{S}}}_{F}\), and \({\varvec{\uptau}}_{\text{over}}\), respectively, from the corresponding local coordinates to the generalized coordinates.
In order to obtain the optimal distribution of all driving forces/torques, the objective function of optimization can be constructed as [ 73 , 78 ]
$$f_{\text{obj}} = \sum\limits_{i = 1}^{n} {W_{i}^{2} \tau_{i}^{2} } + \sum\limits_{\xi = n + 1}^{\zeta } {W_{\xi }^{2} \tau_{\xi }^{2} } = {\varvec{\uptau}}_{\text{non}}^{\text{T}} {\varvec{W}}_{\text{non}} {\varvec{\uptau}}_{\text{non}} + {\varvec{\uptau}}_{\text{over}}^{\text{T}} {\varvec{W}}_{\text{over}} {\varvec{\uptau}}_{\text{over}},$$
$$\begin{aligned}\;{\varvec{W}}_{\text{non}} = {\text{diag}}\left( {W_{1}^{2} ,W_{2}^{2} , \cdots ,W_{n}^{2} } \right), \hfill \\ {\varvec{W}}_{\text{over}} = {\text{diag}}\left( {W_{n + 1}^{2} ,W_{n + 2}^{2} , \cdots ,W_{\zeta }^{2} } \right), \hfill \\ \end{aligned}$$
in which \(W_{i}\) and \(W_{\xi }\) (ξ = n + 1, n + 2, …, ζ) are weighted coefficients.
By minimizing the objective function shown in Eq. (31) under the constraint condition of Eq. (30), we get
$${\varvec{\uptau}}_{\text{over}} = - \left( {{\varvec{W}}_{\text{over}} + \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} {\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{ - 1} \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} {\not\!{\varvec{S}}}_{F,M},$$
where \({\not\!{\varvec{S}}}_{F,M} = \left( {\sum\limits_{\upsilon = 1}^{t} {\sum\limits_{h = 1}^{{l_{\upsilon } }} {\left( {{\varvec{G}}_{h}^{\text{non}} } \right)^{\upsilon } {\not\!{\varvec{S}}}_{F,h}^{\upsilon } } + {\varvec{G}}_{F}^{\text{non}} {\not\!{\varvec{S}}}_{F} } } \right)\).
Substituting Eq. ( 32) into Eq. ( 30) yields
$${\varvec{\uptau}}_{\text{non}} = - \left( {{\varvec{I}} - {\varvec{G}}_{\text{over}}^{\text{non}} \left( {{\varvec{W}}_{\text{over}} + \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} {\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{ - 1} \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} } \right){\not\!{\varvec{S}}}_{F,M}.$$
Rearranging Eqs. ( 32) and ( 33) yields
$$\left( {\begin{array}{*{20}c} {{\varvec{\uptau}}_{\text{non}} } \\ {{\varvec{\uptau}}_{\text{over}} } \\ \end{array} } \right) = {-}\left( {\begin{array}{*{20}c} {{\varvec{I}}{-}{\varvec{G}}_{\text{over}}^{\text{non}} \left( {{\varvec{W}}_{\text{over}} + \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} {\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{ - 1} \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} } \\ {\left( {{\varvec{W}}_{\text{over}} + \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} {\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{ - 1} \left( {{\varvec{G}}_{\text{over}}^{\text{non}} } \right)^{\text{T}} {\varvec{W}}_{\text{non}} } \\ \end{array} } \right){\not\!{\varvec{S}}}_{F,M},$$
which is the analytical expression of the optimal distribution of the driving forces/torques of a general active overconstrained PM. Different optimization goals can be achieved by changing the values of the weighted matrices \({\varvec{W}}_{\text{non}}\) and \({\varvec{W}}_{\text{over}}\), for example (a numerical sketch follows this list):
(1) Let \(W_{i}\) and \(W_{\xi }\) be the velocities of the ith and ξth actuated joints, respectively. Then the driving forces/torques are distributed with the minimum input energy of the actuators [36, 66, 67].
(2) Let \(W_{i} = W_{\xi } = 1\). Then the minimum driving forces/torques of the mechanism are obtained [40, 65].
(3) Let \(W_{i} = K_{i}^{ - 1}\) and \(W_{\xi } = K_{\xi }^{ - 1}\), where \(K_{i}\) and \(K_{\xi }\) represent the stiffnesses of the ith and ξth actuated joints, respectively. Then the driving forces/torques are distributed with the minimum elastic potential energy of the mechanism [37, 68, 69].
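A minimal sketch of Eqs. (32) and (33), with hypothetical matrices; choosing identity weights corresponds to goal (2), the minimum driving forces/torques.

```python
import numpy as np

def distribute(G_over_non, W_non, W_over, S_FM):
    """Weighted optimal distribution per Eqs. (32) and (33)."""
    A = W_over + G_over_non.T @ W_non @ G_over_non
    tau_over = -np.linalg.solve(A, G_over_non.T @ W_non @ S_FM)  # Eq. (32)
    tau_non = -(S_FM + G_over_non @ tau_over)  # Eq. (30) back-substituted
    return tau_non, tau_over

rng = np.random.default_rng(2)
n, red = 3, 2                            # 3 generalized, 2 redundant joints
G_on = rng.standard_normal((n, red))     # hypothetical transformation matrix
S_FM = rng.standard_normal(n)            # hypothetical resultant load term

# Goal (2): W_i = W_xi = 1 yields the minimum driving forces/torques
tau_non, tau_over = distribute(G_on, np.eye(n), np.eye(red), S_FM)
```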
3.3 Method Based on the Optimal Internal Forces
For an active overconstrained PM, the solution to its statically indeterminate problem shown in Eq. ( 28) can be broken into a particular solution and a homogeneous solution [ 19 , 39 ], as follows:
$${\varvec{f}}_{\text{act}} = {\varvec{f}}_{\text{spcl}} + {\varvec{f}}_{\text{homo}},$$
where the particular solution \({\varvec{f}}_{\text{spcl}}\) satisfies
$${\varvec{G}}_{\text{act}} {\varvec{f}}_{\text{spcl}} = {\varvec{F}}_{\text{extr}},$$
and the homogeneous solution \({\varvec{f}}_{\text{homo}}\) satisfies
$${\varvec{G}}_{\text{act}} {\varvec{f}}_{\text{homo}} = \varvec{0}.$$
It can be seen from Eq. (35) that the solutions of the driving forces/torques may contain components lying in the null space of the coefficient matrix \({\varvec{G}}_{\text{act}}\); these components are known as the internal forces. Hence, two kinds of methods have been proposed for the statically indeterminate problem of active overconstrained PMs, depending on how the internal forces are handled. The first distributes the driving forces/torques without internal forces [18, 35, 70]. A number of studies have shown that the internal forces can be utilized to change the stiffness [79], improve the motion accuracy [32], increase the load-carrying capacity [19], and eliminate the backlash [34] of active overconstrained PMs, so the second distributes the driving forces/torques so as to exploit these advantages of the internal forces [19, 32, 34].
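The decomposition into particular and homogeneous parts can be sketched numerically as follows; the matrices are hypothetical, and scipy.linalg.null_space supplies a basis of the internal-force space.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
G_act = rng.standard_normal((6, 8))      # hypothetical coefficient matrix
F_extr = rng.standard_normal(6)

f_spcl = np.linalg.pinv(G_act) @ F_extr  # a particular solution (no internal forces)
N = null_space(G_act)                    # basis of the internal-force (null) space

# Any f_homo = N c is an internal force: it adds no net wrench on the platform
c = rng.standard_normal(N.shape[1])      # free parameters, e.g., tuned for stiffness
f_act = f_spcl + N @ c
assert np.allclose(G_act @ f_act, F_extr)
```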
Let \({\varvec{A}} \in {\varvec{C}}^{{x \times \left( {y + z} \right)}}\) with A = [ U V], where U and V are composed of the first y columns and the last z columns of A, respectively. The weighted generalized inverse of the matrix A can then be expressed as [80]
$${\varvec{A}}_{{\varvec{P}},{\varvec{Q}}}^{ + } = \left( {\begin{array}{*{20}c} {{\varvec{U}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } \left( {{\varvec{I}} - {\varvec{VH}}} \right) - \left( {{\varvec{I}} - {\varvec{U}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } {\varvec{U}}} \right){\varvec{Q}}_{y}^{ - 1} {\varvec{LH}}} \\ {\varvec{H}} \\ \end{array} } \right),$$
\(\begin{aligned} & {\text{where}}\;{\varvec{H}} = {\varvec{C}}_{{\varvec{P}},{\varvec{K}}_{1}}^{ + } + \left( {{\varvec{I}} - {\varvec{C}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } {\varvec{C}}} \right){\varvec{K}}_{1}^{ - 1} \left( {{\varvec{D}}^{*} {\varvec{Q}}_{y} - {\varvec{L}}^{*} } \right){\varvec{U}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + }, \\ & {\varvec{K}}_{1} = {\varvec{Q}}_{z} + {\varvec{D}}^{*} {\varvec{Q}}_{y} {\varvec{D}} - \left( {{\varvec{D}}^{*} {\varvec{L}} + {\varvec{L}}^{*} {\varvec{D}}} \right) - {\varvec{L}}^{*} \left( {{\varvec{I}} - {\varvec{U}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } {\varvec{U}}} \right){\varvec{Q}}_{y}^{ - 1} {\varvec{L}}, \\ & {\varvec{C}} = \left( {{\varvec{I}} - {\varvec{UU}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } } \right){\varvec{V}}, \\ & {\varvec{D}} = {\varvec{U}}_{{\varvec{P}},{\varvec{Q}}_{y}}^{ + } {\varvec{V}}, \\ \end{aligned}\)
\({\varvec{A}}_{{\varvec{P}},{\varvec{Q}}}^{ + }\) represents the weighted generalized inverse matrix of A with P and Q as the weighted factors, I is an identity matrix, P is an x × x positive definite matrix, and Q is a ( y + z) × ( y + z) positive definite matrix that can be partitioned as
\({\varvec{Q}} = \left( {\begin{array}{*{20}c} {{\varvec{Q}}_{y} } & {{\varvec{L}}} \\ {{\varvec{L}}^{*} } & {{\varvec{Q}}_{z} } \\ \end{array} } \right)\), in which the sign "*" denotes the conjugate transpose.
Using Eq. (36), it is found that the solution of the driving forces/torques shown in Eq. (34) can be obtained directly through the weighted generalized inverse of the matrix mapping the driving forces/torques to the generalized external force [59]. Therefore, the weighted generalized inverse can be applied to distribute the driving forces/torques of active overconstrained PMs:
$${\varvec{f}}_{\text{act}} = \left( {{\varvec{G}}_{\text{act}} } \right)_{{\varvec{S}}}^{ + } {\not\!{\varvec{S}}}_{F,M} = {\varvec{S}}^{ - 1} {\varvec{G}}_{\text{act}}^{\text{T}} \left( {{\varvec{G}}_{\text{act}} {\varvec{S}}^{ - 1} {\varvec{G}}_{\text{act}}^{\text{T}} } \right)^{ - 1} {\not\!{\varvec{S}}}_{F,M},$$
where S is the weighted diagonal matrix whose elements are determined by the specific optimization goal.
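A sketch of this weighted distribution for a hypothetical active overconstrained PM; two different weight matrices S give two different distributions, both of which balance the same load.

```python
import numpy as np

rng = np.random.default_rng(4)
G_act = rng.standard_normal((6, 8))   # hypothetical coefficient matrix
S_FM = rng.standard_normal(6)         # hypothetical generalized load

def weighted(S):
    # f_act = S^{-1} G^T (G S^{-1} G^T)^{-1} S_FM  (weighted generalized inverse)
    S_inv = np.linalg.inv(S)
    return S_inv @ G_act.T @ np.linalg.solve(G_act @ S_inv @ G_act.T, S_FM)

f_min = weighted(np.eye(8))                        # S = I: minimum-norm forces
f_alt = weighted(np.diag(rng.uniform(1, 10, 8)))   # another optimization goal

# Both distributions satisfy equilibrium, but they differ elsewhere
assert np.allclose(G_act @ f_min, S_FM) and np.allclose(G_act @ f_alt, S_FM)
assert not np.allclose(f_min, f_alt)
```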
Moreover, there are other approaches to the force distribution problem of active overconstrained PMs. A partitioned actuator set control method was proposed by Gardner et al. [71] to improve the traction or load sharing among the actuators. A scaling factor method and an analytical method were presented in Ref. [74] to determine the wrench capabilities of active overconstrained PMs. Nahon et al. [75] summarized three methods for solving the optimal force distribution problem of this kind of PMs: the weighted pseudo-inverse, explicit Lagrange multipliers, and direct substitution.
4 Comparison of the Methods for Passive and Active Overconstrained PMs
The force analyses of both active and passive overconstrained PMs belong to the statically indeterminate problem. A large number of methods have been proposed for the force analysis of these two kinds of overconstrained PMs, among which the weighted generalized inverse method is the simplest and most common. That is to say, the solutions of the statically indeterminate problems of both active and passive overconstrained PMs can be obtained from
$${\varvec{f}} = \left( {{\varvec{G}}_{f}^{F} } \right)_{{\varvec{W}}}^{ + } {\not\!{\varvec{S}}}_{F} = {\varvec{W}}^{ - 1} \left( {{\varvec{G}}_{f}^{F} } \right)^{\text{T}} \left( {{\varvec{G}}_{f}^{F} {\varvec{W}}^{ - 1} \left( {{\varvec{G}}_{f}^{F} } \right)^{\text{T}} } \right)^{ - 1} {\not\!{\varvec{S}}}_{F}.$$
For the passive overconstrained PMs, f consists of the magnitudes of all constraint wrenches (including actuation wrenches), \({\varvec{G}}_{f}^{F}\) is the coefficient matrix mapping the driving forces/torques and constraint forces/moments to the external loads, and the weighted matrix W is composed of the stiffness matrices of each limb's constraint wrenches, which cannot be actively selected. As a result, the solutions of the driving forces/torques and constraint forces/moments of passive overconstrained PMs are unique. For the active overconstrained PMs, f is composed of the magnitudes of all driving forces/torques, \({\varvec{G}}_{f}^{F}\) is the coefficient matrix mapping the driving forces/torques to the generalized external loads, and the elements of the weighted matrix W can be actively given according to different optimization goals. Therefore, there are an infinite number of solutions of the driving forces/torques of active overconstrained PMs.
5 Conclusions and Outlook
The existence of redundant constraints or actuations makes the force analysis of both passive overconstrained PMs (i.e., PMs with redundant or common constraints) and active ones (i.e., redundantly actuated PMs) a statically indeterminate problem, which is difficult and complicated to solve. The various approaches proposed for the force analysis of these two kinds of overconstrained PMs are divided in this paper into six categories and four categories, respectively, among which:
(1) The pseudo-inverse method has been used for the force analysis of both kinds of overconstrained PMs. However, for passive overconstrained PMs the solutions obtained in this way lack physical meaning, while for active overconstrained PMs it yields the minimum driving forces/torques.
(2) The common method used to solve the statically indeterminate problem of passive overconstrained PMs involves combining the force and moment equilibrium equations and the compatibility equations of deformation of the mechanisms, and that used to solve the driving forces/torques of active overconstrained PMs involves establishing a specific optimization goal and then solving the minimum values of the objective function under the constraint of force and moment equilibrium equations.
(3) The weighted generalized inverse method can be applied to solve the statically indeterminate problem of both passive and active overconstrained PMs. For the passive overconstrained PMs, the weighted matrix consists of the stiffness matrices of each limb's constraint wrenches, and for the active overconstrained PMs, it is determined by the optimization goals.
In recent decades, sustained efforts have been made to find a simple and general method to solve the driving forces/torques and constraint forces/moments of passive overconstrained PMs, and to distribute the driving forces/torques of active overconstrained PMs. It can be seen from this paper that the weighted generalized inverse method is the simplest and most universal one at present.
However, the existing theoretical methods for the force analysis of overconstrained PMs are basically proposed without considering the actual characteristics of the mechanisms, such as joint clearance, friction, and the real stiffness models of the supporting limbs. Therefore, the force analysis of typical overconstrained systems (for example, the parallel machine tool XT 700) under such actual characteristics will become a research hotspot. In addition, the establishment of systematic experimental platforms to verify the theoretical methods will be another important research direction.
References
[1] S A Joshi, L W Tsai. Jacobian analysis of limited-DOF parallel manipulators. Journal of Mechanical Design, Transactions of the ASME, 2002, 124(2): 254–258.
[2] A Sokolov, P Xirouchakis. Dynamics analysis of a 3-DOF parallel manipulator with R-P-S joint structure. Mechanism and Machine Theory, 2007, 42(5): 541–557.
[3] L W Tsai, S Joshi. Kinematics and optimization of a spatial 3-UPU parallel manipulator. Journal of Mechanical Design, Transactions of the ASME, 2000, 122(4): 439–446.
[4] R D Gregorio. Kinematics of the 3-UPU wrist. Mechanism and Machine Theory, 2003, 38(3): 253–263.
[5] Y Lu, Y Shi, B Hu. Kinematic analysis of two novel 3UPU I and 3UPU II PKMs. Robotics and Autonomous Systems, 2008, 56(4): 296–305.
[6] M Callegari, M C Palpacelli, M Principi. Dynamics modelling and control of the 3-RCC translational platform. Mechatronics, 2006, 16(10): 589–605.
[7] J F Li, J S Wang. Inverse kinematic and dynamic analysis of a 3-DOF parallel mechanism. Chinese Journal of Mechanical Engineering, 2003, 16(1): 54–58.
[8] Y M Qian. Position equation establishment and kinematics analysis of 3-RRC parallel mechanism. Applied Mechanics and Materials, 2014, 664: 349–354.
[9] T Bonnemains, H Chanal, B C Bouzgarrou, et al. Dynamic model of an overconstrained PKM with compliances: the Tripteor X7. Robotics and Computer-Integrated Manufacturing, 2013, 29(1): 180–191.
[10] Z M Bi, B Kang. An inverse dynamic model of over-constrained parallel kinematic machine based on Newton–Euler formulation. Journal of Dynamic Systems, Measurement, and Control, Transactions of the ASME, 2014, 136(4): 041001-(1–9).
[11] Z M Bi. Kinetostatic modeling of Exechon parallel kinematic machine for stiffness analysis. International Journal of Advanced Manufacturing Technology, 2014, 71(1): 325–335.
[12] Y M Li, Q S Xu. Dynamic modeling and robust control of a 3-PRC translational parallel kinematic machine. Robotics and Computer-Integrated Manufacturing, 2009, 25(3): 630–640.
[13] Y M Li, S Staicu. Inverse dynamics of a 3-PRC parallel kinematic machine. Nonlinear Dynamics, 2012, 67(2): 1031–1041.
[14] C Mavroidis, B Roth. Analysis of overconstrained mechanisms. Journal of Mechanical Design, Transactions of the ASME, 1995, 117(1): 69–74.
[15] Y F Fang, L W Tsai. Enumeration of a class of overconstrained mechanisms using the theory of reciprocal screws. Mechanism and Machine Theory, 2004, 39(11): 1175–1187.
[16] E J Haug. Computer-aided kinematics and dynamics of mechanical systems. Vol. 1: basic methods. Allyn and Bacon, 1989.
[17] M Wojtyra. Joint reaction forces in multibody systems with redundant constraints. Multibody System Dynamics, 2005, 14(1): 23–46.
[18] Y D Xu, J T Yao, Y S Zhao. Internal forces analysis of the active overconstrained parallel manipulators. International Journal of Robotics and Automation, 2015, 30(5): 511–518.
[19] J Wu, X L Chen, L P Wang, et al. Dynamic load-carrying capacity of a novel redundantly actuated parallel conveyor. Nonlinear Dynamics, 2014, 78(1): 241–250.
[20] C Z Wang, Y F Fang, S Guo. Multi-objective optimization of a parallel ankle rehabilitation robot using modified differential evolution algorithm. Chinese Journal of Mechanical Engineering, 2015, 28(4): 702–715.
[21] Y D Xu, J T Yao, Y S Zhao. Inverse dynamics and internal forces of the redundantly actuated parallel manipulators. Mechanism and Machine Theory, 2012, 51: 172–184.
[22] F Firmani, R P Podhorodeski. Force-unconstrained poses for a redundantly-actuated planar parallel manipulator. Mechanism and Machine Theory, 2004, 39(5): 459–476.
[23] B Dasgupta, T S Mruthyunjaya. Force redundancy in parallel manipulators: theoretical and practical issues. Mechanism and Machine Theory, 1998, 33(6): 727–742.
[24] J A Saglia, J S Dai, D G Caldwell. Geometry and kinematic analysis of a redundantly actuated parallel mechanism that eliminates singularities and improves dexterity. Journal of Mechanical Design, Transactions of the ASME, 2008, 130(12): 1786–1787.
[25] S H Li, Y M Liu, H L Cui, et al. Synthesis of branched chains with actuation redundancy for eliminating interior singularities of 3T1R parallel mechanisms. Chinese Journal of Mechanical Engineering, 2016, 29(2): 250–259.
[26] J Kim, F C Park, J R Sun, et al. Design and analysis of a redundantly actuated parallel mechanism for rapid machining. IEEE Transactions on Robotics and Automation, 2001, 17(4): 423–434.
[27] H Cheng, Y K Yiu, Z X Li. Dynamics and control of redundantly actuated parallel manipulators. IEEE/ASME Transactions on Mechatronics, 2003, 8(4): 483–491.
[28] Y J Zhao, F Gao, W M Li, et al. Development of a 6-DOF parallel seismic simulator with novel redundant actuation. Mechatronics, 2009, 19(3): 422–427.
[29] J M Tao, J Y S Luh. Coordination of two redundant robots. IEEE International Conference on Robotics and Automation, Scottsdale, AZ, USA, May 14–19, 1989: 425–430.
[30] S B Nokleby, R Fisher, R P Podhorodeski, et al. Force capabilities of redundantly-actuated parallel manipulators. Mechanism and Machine Theory, 2005, 40(5): 578–599.
[31] Y J Zhao, F Gao. Dynamic performance comparison of the 8PSS redundant parallel manipulator and its non-redundant counterpart—the 6PSS parallel manipulator. Mechanism and Machine Theory, 2009, 44(5): 991–1008.
[32] H Liang, W Y Chen, H F Li. Research on the method of improving accuracy of parallel machine tools. Key Engineering Materials, 2009, 407–408: 85–88.
[33] J Wu, J S Wang, L P Wang, et al. Dynamics and control of a planar 3-DOF parallel manipulator with actuation redundancy. Mechanism and Machine Theory, 2009, 44(4): 835–849.
[34] A Müller. Internal preload control of redundantly actuated parallel manipulators – its application to backlash avoiding control. IEEE Transactions on Robotics, 2005, 21(4): 668–677.
[35] I D Walker, R A Freeman, S I Marcus. Internal object loading for multiple cooperating robot manipulators. IEEE International Conference on Robotics and Automation, Scottsdale, AZ, USA, May 14–19, 1989: 606–611.
[36] Y F Zheng, J Y S Luh. Optimal load distribution for two industrial robots handling a single object. Proceedings of the 1988 IEEE International Conference on Robotics and Automation, Philadelphia, PA, USA, April 24–29, 1988: 344–349.
[37] X C Gao, S M Song, C Q Zheng. Generalized stiffness matrix method for force distribution of robotic systems with indeterminacy. Journal of Mechanical Design, Transactions of the ASME, 1993, 115(3): 585–591.
[38] Y S Zhao, L Lu, T S Zhao, et al. Dynamic performance analysis of six-legged walking machines. Mechanism and Machine Theory, 2000, 35(1): 155–163.
[39] T Yoshikawa, K Nagai. Manipulating and grasping forces in manipulation by multifingered hands. IEEE Transactions on Robotics and Automation, 1991, 7(1): 67–77.
[40] M A Nahon, J Angeles. Optimization of dynamic forces in mechanical hands. Journal of Mechanisms, Transmissions, and Automation in Design, 1991, 113(2): 167–173.
[41] X W Kong. Standing on the shoulders of giants: a brief note from the perspective of kinematics. Chinese Journal of Mechanical Engineering, 2017, 30(1): 1–2.
[42] J Wu, J S Wang, T M Li, et al. Dynamic analysis of the 2-DOF planar parallel manipulator of a heavy duty hybrid machine tool. International Journal of Advanced Manufacturing Technology, 2007, 34(3): 413–420.
[43] Y W Li, J S Wang, L P Wang, et al. Inverse dynamics and simulation of a 3-DOF spatial parallel manipulator. 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, China, September 14–19, 2003: 4092–4097.
[44] Y Lu, Y Shi, Z Huang, et al. Kinematics/statics of a 4-DOF over-constrained parallel manipulator with 3 legs. Mechanism and Machine Theory, 2009, 44(8): 1497–1506.
[45] J Gallardo, J M Rico, A Frisoli, et al. Dynamics of parallel manipulators by means of screw theory. Mechanism and Machine Theory, 2003, 38(11): 1113–1131.
[46] D G Raffaele, P C Vicenzo. Dynamics of a class of parallel wrists. Journal of Mechanical Design, Transactions of the ASME, 2004, 126(3): 436–441.
[47] S Staicu, D Zhang. A novel dynamic modelling approach for parallel mechanisms analysis. Robotics and Computer-Integrated Manufacturing, 2008, 24(1): 167–172.
[48] S Staicu, D Zhang, R Rugescu. Dynamic modelling of a 3-DOF parallel manipulator using recursive matrix relations. Robotica, 2006, 24(1): 125–130.
[49] M Wojtyra. Joint reactions in rigid body mechanisms with dependent constraints. Mechanism and Machine Theory, 2009, 44(12): 2265–2278.
[50] J Fraczek, M Wojtyra. On the unique solvability of a direct dynamics problem for mechanisms with redundant constraints and Coulomb friction in joints. Mechanism and Machine Theory, 2011, 46(3): 312–334.
[51] M Wojtyra, J Fraczek. Solvability of reactions in rigid multibody systems with redundant nonholonomic constraints. Multibody System Dynamics, 2013, 30(2): 153–171.
[52] M Wojtyra, J Fraczek. Joint reactions in rigid or flexible body mechanisms with redundant constraints. Bulletin of the Polish Academy of Sciences: Technical Sciences, 2012, 60(3): 617–626.
[53] R Vertechy, V Parenti-Castelli. Static and stiffness analyses of a class of over-constrained parallel manipulators with legs of type US and UPS. 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, April 10–14, 2007: 561–567.
[54] Z J Wang, J T Yao, Y D Xu, et al. Hyperstatic analysis of a fully pre-stressed six-axis force/torque sensor. Mechanism and Machine Theory, 2012, 57: 84–94.
[55] J T Yao, Y L Hou, J Chen, et al. Theoretical analysis and experiment research of a statically indeterminate pre-stressed six-axis force sensor. Sensors and Actuators A: Physical, 2009, 150(1): 1–11.
[56] B Hu, Z Huang. Kinetostatic model of overconstrained lower mobility parallel manipulators. Nonlinear Dynamics, 2016, 86(1): 309–322.
[57] Z Huang, Y Zhao, J F Liu. Kinetostatic analysis of 4-R(CRR) parallel manipulator with overconstraints via reciprocal-screw theory. Advances in Mechanical Engineering, 2010, 2010(2): 1652–1660.
[58] Y D Xu, W L Liu, J T Yao, et al. A method for force analysis of the overconstrained lower mobility parallel mechanism. Mechanism and Machine Theory, 2015, 88: 31–48.
[59] W L Liu, Y D Xu, J T Yao, et al. The weighted Moore-Penrose generalized inverse and the force analysis of overconstrained parallel mechanisms. Multibody System Dynamics, 2017, 39(4): 363–383.
[60] M Wojtyra, J Fraczek. Comparison of selected methods of handling redundant constraints in multibody systems simulations. Journal of Computational and Nonlinear Dynamics, 2013, 8(2): 021007-(1–9).
[61] E Bayo, R Ledesma. Augmented Lagrangian and mass-orthogonal projection methods for constrained multibody dynamics. Nonlinear Dynamics, 1996, 9: 113–130.
[62] W Blajer. Augmented Lagrangian formulation: geometrical interpretation and application to systems with singularities and redundancy. Multibody System Dynamics, 2002, 8(2): 141–159.
[63] E Zahariev, J Cuadrado. Dynamics of mechanisms in overconstrained and singular configurations. Journal of Theoretical and Applied Mechanics, 2011, 41(1): 3–18.
[64] S M Song, X C Gao. Mobility equation and the solvability of joint forces/torques in dynamic analysis. American Society of Mechanical Engineers, Design Engineering Division (Publication) DE, 1990, 24: 191–197.
[65] J P Merlet. Redundant parallel manipulators. Laboratory Robotics and Automation, 1996, 8(1): 17–24.
[66] D E Orin, S Y Oh. Control of force distribution in robotic mechanisms containing closed kinematic chains. Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, 1981, 103(2): 134–141.
[67] M Nahon, J Angeles. Minimization of power losses in cooperating manipulators. Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, 1992, 114(2): 213–219.
[68] D R Kerr, M Griffis, D J Sanger, et al. Redundant grasps, redundant manipulators, and their dual relationships. Journal of Robotic Systems, 1992, 9(7): 973–1000.
[69] J Wu, T M Li, B Q Xu. Force optimization of planar 2-DOF parallel manipulators with actuation redundancy considering deformation. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2013, 227(6): 1371–1377.
[70] I D Walker, R A Freeman, S I Marcus. Analysis of motion and internal loading of objects grasped by multiple cooperating manipulators. International Journal of Robotics Research, 1991, 10(4): 396–409.
[71] J F Gardner, K Srinivasan, K J Waldron. A solution for the force distribution problem in redundantly actuated closed kinematic chains. Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME, 1990, 112(3): 523–526.
[72] V R Kumar, K J Waldron. Force distribution in closed kinematic chains. IEEE Journal of Robotics and Automation, 1988, 4(6): 657–664.
[73] Z Huang, Y S Zhao. Accordance and optimization-distribution equations of the over-determinate inputs of walking machines. Mechanism and Machine Theory, 1994, 29(2): 327–332.
[74] V Garg, S B Nokleby, J A Carretero. Wrench capability analysis of redundantly actuated spatial parallel manipulators. Mechanism and Machine Theory, 2009, 44(5): 1070–1081.
[75] M A Nahon, J Angeles. Force optimization in redundantly-actuated closed kinematic chains. IEEE International Conference on Robotics and Automation, Scottsdale, AZ, USA, May 14–19, 1989: 951–956.
[76] M Nahon, J Angeles. Real-time force optimization in parallel kinematic chains under inequality constraints. Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, April 9–11, 1991: 2198–2203.
[77] Z P Xiong, Y Y Qin. Note on the weighted generalized inverse of the product of matrices. Journal of Applied Mathematics and Computing, 2011, 35(1): 469–474.
[78] Y S Zhao, J Y Ren, Z Huang. Dynamic loads coordination for multiple cooperating robot manipulators. Mechanism and Machine Theory, 2000, 35(7): 985–995.
[79] M A Adli, H Hanafusa. Contribution of internal forces to the dynamics of closed chain mechanisms. Robotica, 1995, 13(5): 507–514.
[80] G R Wang, B Zheng. The weighted generalized inverses of a partitioned matrix. Applied Mathematics and Computation (New York), 2004, 155(1): 221–233.
Why is it important for functions to be anonymous in lambda calculus?
I was watching the lecture by Jim Weirich, titled 'Adventures in Functional Programming'. In this lecture, he introduces the concept of the Y combinator, which essentially finds the fixed point of higher-order functions.
One of the motivations, as he mentions it, is to be able to express recursive functions using lambda calculus so that Church's thesis (anything that is effectively computable can be computed using lambda calculus) holds.
The problem is that a function cannot simply call itself, because lambda calculus does not allow named functions, i.e.,
$$n(x, y) = x + y$$
cannot bear the name '$n$'; it must be defined anonymously:
$$(x, y) \rightarrow x + y $$
Why is it important for lambda calculus to have functions that are not named? What principle is violated if there are named functions? Or is it that I just misunderstood Jim's video?
lambda-calculus functional-programming
Vitaly Olegovitch
Rohan Prabhu
This doesn't sound important at all. You can assign $(x,y) \mapsto x+y$ to a variable $n$ and then you have given a name to the function.
– Yuval Filmus
@YuvalFilmus yes, you can bind a name to a function. I think the real question here, the puzzlement, is why (in lambda calculus) can't a function call itself by such a name? Why do we need a technique like the Y operator in order to do recursive functions? I hope my answer below helps.
– Jerry101
@Jerry101 The historical reason for the absence of self-application is that $\lambda$-calculus was intended to be a foundation of mathematics, and the ability to self-apply makes such a foundation immediately inconsistent. So this apparent inability (which we know now can be circumvented) is a design feature of $\lambda$-calculus.
– Martin Berger
@MartinBerger please say more. Inconsistent for the reason in my answer? Or for another reason?
@Jerry101 Inconsistent in the sense that you can prove 0 = 1 in such a foundation of mathematics. After Kleene and Rosser showed the inconsistency of the pure, untyped $\lambda$-calculus, the simply-typed $\lambda$-calculus was developed as an alternative that does not allow us to define fixed-point combinators such as $Y$. But if you add recursion to the simply-typed $\lambda$-calculus it again becomes inconsistent, because every type is inhabited by a non-terminating program.
The main theorem regarding this issue is due to a British mathematician from the end of the 16th century, called William Shakespeare. His best-known paper on the subject, entitled "Romeo and Juliet", was published in 1597, though the research work was conducted a few years earlier, inspired by such precursors as Arthur Brooke and William Painter.
His main result, stated in Act II. Scene II, is the famous theorem:
What's in a name? that which we call a rose
By any other name would smell as sweet;
This theorem can be intuitively understood as "names do not contribute to meaning".
The greater part of the paper is devoted to an example complementing the theorem and showing that, even though names contribute no meaning, they are the source of endless problems.
As pointed out by Shakespeare, names can be changed without changing meaning, an operation that was later called $\alpha$-conversion by Alonzo Church and his followers. As a consequence, it is not necessarily simple to determine what is denoted by a name. This raises a variety of issues, such as developing a concept of environment where the name-meaning associations are specified, and rules to know what the current environment is when you try to determine the meaning associated with a name. This baffled computer scientists for a while, giving rise to technical difficulties such as the infamous Funarg problem. Environments remain an issue in some popular programming languages, but it is generally considered physically unsafe to be more specific, almost as lethal as the example worked out by Shakespeare in his paper.
This issue is also close to the problems raised in formal language theory, when alphabets and formal systems have to be defined up to an isomorphism, so as to underscore that the symbols of the alphabets are abstract entities, independent of how they "materialize" as elements from some set.
This major result by Shakespeare shows also that science was then diverging from magic and religion, where a being or a meaning may have a true name.
The conclusion of all this is that, for theoretical work, it is often more convenient not to be encumbered by names, even though names may feel simpler for practical work and everyday life. But recall that not everyone called Mom is your mother.
The issue was addressed more recently by the 20th century American logician Gertrude Stein. However, her mathematician colleagues are still pondering the precise technical implications of her main theorem:
Rose is a rose is a rose is a rose.
published in 1913 in a short communication entitled "Sacred Emily".
babou
Additional note: In recent decades "rose" has (in computer science) been mostly replaced by "foobar" (and parts thereof) as the canonical example for a name that is as good as any other. This preference has apparently been introduced by American railroad engineers.
– FrankW
That said, canonical names for often used concepts are important for efficient communication.
@Raphael Agreed, but I would put that in the everyday life category. And how do we know the boundaries of what is really canonical? Still, I often feel concern when I see students taking all terminology, notation and definitions (or even the way some theorems are stated) for a God-given immutable truth. Even here, on SE, students ask questions, not realizing that we may not know their notations, or the definitions they use in class. The magic of true names does not die easily.
– babou
I would like to venture an opinion that is different from those of @babou and @YuvalFilmus: It is vital for pure $\lambda$-calculus to have anonymous functions. The problem with having only named functions is that you need to know in advance how many names you will need. But in the pure $\lambda$-calculus you have no a priori bound on the number of functions used (think about recursion), so you either use (1) anonymous functions, or (2) you go the $\pi$-calculus route and provide a fresh name combinator ($\nu x.P$ in $\pi$-calculus) that gives an inexhaustible supply of fresh names at run-time.
The reason pure $\lambda$-calculus does not have an explicit mechanism for recursion is that pure $\lambda$-calculus was originally intended to be a foundation of mathematics by A. Church, and recursion makes such a foundation trivially unsound. So it came as a shock when Stephen Kleene and J. B. Rosser discovered that pure $\lambda$-calculus is unsuitable as a foundation of mathematics (Kleene–Rosser paradox). Haskell Curry analysed the Kleene–Rosser paradox and realised that its essence is what we now know as the Y combinator.
Added after @babou's comment: there is nothing wrong with having named functions. You can do this as follows: $\mathsf{let}\; f = M \;\mathsf{in}\; N$ is a shorthand for $(\lambda f.N)\,M$ in the call-by-value $\lambda$-calculus.
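To see the shorthand in action, here is a one-line Python illustration (ours, not part of the original answer): naming a function is just applying an anonymous function to another anonymous function.

# let f = (lambda n: n + 1) in f(41)   desugars to   (λf. f 41)(λn. n + 1)
result = (lambda f: f(41))(lambda n: n + 1)
assert result == 42   # the name f exists only as the bound variable of the outer lambda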
– Martin Berger
I think the OP wanted the ability to name functions, not to forbid anonymous ones. This said, I would think that any requirement of λ-calculus regarding the need for anonymous functions would show as well in languages like Lisp/Scheme or ML. In the case of Lisp/Scheme, the meta-circularity of evaluators should make it possible to create new names as needed, though I am not sure I would want it that way in a formal system. The use of an unbounded number of functions is not necessarily a problem when recursion allows local reuse of already used names.
@babou Scheme and ML have letrec, so they can easily live with a finite number of named functions. I'd be interested to see a presentation of the pure $\lambda$-calculus with an explicit scheme for reusing names. And yes, the ability to name functions (and other terms) is perfectly compatible with the pure $\lambda$-calculus.
Should the last line read (lambda f. N) M?
– Joe the Person
@JoethePerson Yes, well spotted. Fixed. Thanks.
I believe the idea is that names are not necessary. Anything that appears to require names can be written as anonymous functions.
You can think of the lambda calculus like assembly language. Someone in a lecture on assembly might say "There are no object-oriented inheritance trees in assembly language." You might then think up a clever way to implement inheritance trees, but that's not the point. The point is that inheritance trees are not required at the most basic level of how a physical computer is programmed.
In the lambda calculus the point is that names are not required to describe an algorithm at the most basic level.
– Real John Connor
I'm enjoying the 3 answers here so far -- most especially @babou's Shakespearean analysis -- but they don't shed light on what I think is the essence of the question.
λ-calculus does bind names to functions whenever you apply a function to a function. The issue is not the lack of names.
"The problem is that a function cannot call itself simply" by referring to its name.
(In pure Lisp, the name --> function binding is not in scope within the function's body. For a function to call itself by its name, the function would have to refer to an environment that refers to the function. Pure Lisp has no cyclic data structures. Impure Lisp does it by mutating the environment that the function refers to.)
As @MartinBerger pointed out, the historical reason that λ-calculus doesn't let a function call itself by name was an attempt to rule out Curry's paradox when trying to use λ-calculus as a foundation of mathematics, including deductive logic. This didn't work since techniques like the Y combinator allow recursion even without self-reference.
If we can define the function r = (λx. x x ⇒ y), then r r = (r r ⇒ y).
If r r is true then y is true. If r r is false then r r ⇒ y is true, which is a contradiction. So y is true and as y can be any statement, any statement may be proved true.
r r is a non-terminating computation. Considered as logic, r r is an expression for a value that does not exist.
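To see the loophole concretely, here is a small Python sketch (ours, purely illustrative) of the Z combinator, the strict-evaluation variant of the Y combinator mentioned above: no function ever refers to its own name, yet recursion happens anyway.

# Z = λf.(λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial without any self-reference by name
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120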
– Jerry101
I am pretty new to lambda calculus, so I had a question, which I have had for pretty much forever by now. What does $\lambda x.\, x\, x$ mean? I'm pretty sure it doesn't mean multiply $x$ by $x$. Does it mean apply the expression $x$ to itself? Also, what does the part ⇒ y signify?
– Rohan Prabhu
@RohanPrabhu $\lambda x.\, x\, x$ translates to Lisp as (lambda (x) (x x)) and to JavaScript as function (x) {return x(x);}. x ⇒ y means x implies y, about the same as (NOT x) OR y. See en.wikipedia.org/wiki/Lambda_calculus
Thank you for answering that embarrassing rookie question!
Closed sets of finitary functions between products of finite fields of coprime order
Stefano Fioravanti
Algebra universalis volume 82, Article number: 61 (2021)
We investigate the finitary functions from a finite product of finite fields \(\prod _{j =1}^m\mathbb {F}_{q_j} = {\mathbb K}\) to a finite product of finite fields \(\prod _{i =1}^n\mathbb {F}_{p_i} = {\mathbb {F}}\), where \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. An \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid is a subset of these functions which is closed under composition from the right and from the left with linear mappings. We give a characterization of these subsets of functions through the \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_p^{{\mathbb K}}\), where \({\mathbb K}^{\times }\) is the multiplicative monoid of \({\mathbb K}= \prod _{i=1}^m {\mathbb {F}}_{q_i}\). Furthermore we prove that each of these subsets of functions is generated by a set of unary functions and we provide an upper bound for the number of distinct \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids.
Since P. Hall's abstract definition of a clone, the problem of describing sets of finitary functions from a set A to a set B which satisfy some closure properties has been a fruitful branch of research. E. Post's characterization of all clones on a two-element set [12] can be considered as a foundational result in this field, which was developed further, e.g., in [10, 11, 13, 15]. Starting from [9], clones are used to study the complexity of certain constraint satisfaction problems (CSPs).
The aim of this paper is to describe sets of functions from a finite product of finite fields \(\prod _{j =1}^m\mathbb {F}_{q_j} = {\mathbb K}\) to a finite product of finite fields \(\prod _{i =1}^n\mathbb {F}_{p_i} = {\mathbb {F}}\), where \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. The sets of functions we are interested in are closed under composition from the left and from the right with linear mappings. Thus we consider sets of functions with different domains and codomains; such sets are called clonoids and are investigated, e.g., in [2]. Let \(\mathbf {B}\) be an algebra, and let A be a non-empty set. For a subset C of \(\bigcup _{n \in {\mathbb {N}}} B^{A^n}\) and \(k\in {\mathbb {N}}\), we let \(C^{[k]} :=C \cap B^{A^k}\). According to Definition 4.1 of [2] we call C a clonoid with source set A and target algebra \(\mathbf {B}\) if
for all \(k \in {\mathbb {N}}\): \(C^{[k]}\) is a subuniverse of \(\mathbf {B}^{A^k}\), and
for all \(k,n \in {\mathbb {N}}\), for all \((i_1,\dots ,i_k) \in \{1,\dots ,n\}^k\), and for all \(c \in C^{[k]}\), the function \(c' :A^n \rightarrow B\) with \(c'(a_1,\dots ,a_n) := c(a_{i_1},\dots ,a_{i_k})\) lies in \(C^{[n]}\).
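Condition (2) above is closure under taking minors, i.e., identifying variables, permuting them, and adding dummy arguments. A minimal Python sketch of this operation (our illustration; the names are ours) is:

def minor(c, idx):
    # from a k-ary function c and indices (i_1, ..., i_k), build the n-ary
    # minor c'(a_1, ..., a_n) = c(a_{i_1}, ..., a_{i_k}) of condition (2);
    # n is simply the number of arguments c' is called with
    return lambda *a: c(*(a[i - 1] for i in idx))

sub = lambda x, y: (x - y) % 5     # a toy binary function into F_5
diag = minor(sub, (2, 2))          # diag(a_1, a_2, a_3) = sub(a_2, a_2)
assert diag(1, 3, 4) == sub(3, 3) == 0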
By (1) every clonoid is closed under composition with operations of \(\mathbf {B}\) on the left. In particular we are dealing with those clonoids whose target algebra is the ring \(\prod _{i=1}^m\mathbb {F}_{p_i}\) that are closed under composition with linear mappings from the right side.
Definition 1.1
Let \(m,s \in {\mathbb {N}}\) and let \({\mathbb K}= \prod _{j=1}^m\mathbb {K}_{j}\), \({\mathbb {F}}= \prod _{i=1}^s\mathbb {F}_{i}\) be products of fields. An \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid is a non-empty subset C of \(\bigcup _{k \in \mathbb {N}} \prod _{i=1}^s\mathbb {F}_{i}^{{\prod _{j=1}^m\mathbb {K}_{j}^k}}\) with the following properties:
for all \(n \in {\mathbb {N}}\), \({\varvec{a}}, {\varvec{b}} \in \prod _{i=1}^s\mathbb {F}_i\), and \(f,g \in C^{[n]}\):
$$\begin{aligned} {\varvec{a}}f + {\varvec{b}}g \in C^{[n]}; \end{aligned}$$
for all \(l,n \in {\mathbb {N}}\), \(f \in C^{[n]}\), \(({\varvec{x}}_1,\dots ,{\varvec{x}}_m) \in \prod _{j=1}^m\mathbb {K}_{j}^l\), and \(A_j\in \mathbb {K}^{n \times l}_{j}\):
$$\begin{aligned} g:({\varvec{x}}_1,\dots ,{\varvec{x}}_m) \mapsto f(A_1\cdot {\varvec{x}}_1^t,\cdots ,A_m\cdot {\varvec{x}}_m^t) \text { is in } C^{[l]}, \end{aligned}$$
where the juxtaposition \({\varvec{a}}f\) denotes the Hadamard product of the two vectors (i.e. the component-wise product \((a_1,\dots ,a_n)\cdot (b_1,\dots ,b_n) = (a_1b_1,\dots ,\) \(a_nb_n)\)).
Clonoids naturally appear in the study of promise constraint satisfaction problems (PCSPs). These problems are investigated, e.g., in [4], and in [5] clonoid theory has been used to provide an algebraic approach to PCSPs. In [14] A. Sparks investigates the number of clonoids for a finite set A and finite algebra \(\mathbf {B}\) closed under the operations of \(\mathbf {B}\). In [8] S. Kreinecker characterized linearly closed clonoids on \(\mathbb {Z}_p\), where p is a prime. Furthermore, a description of the set of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids is a useful tool to investigate (polynomial) clones on \({\mathbb {Z}}_n\), where n is a product of distinct primes, or to represent polynomial functions of semidirect products of groups.
In [6] there is a complete description of the structure of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids in the case where \({\mathbb {F}}\) and \({\mathbb K}\) are fields; the results we present are a generalization of this description.
The main result of this paper (Theorem 1.2) states that every \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid is generated by its subset of unary functions.
Theorem 1.2
Let \({\mathbb K}= \prod _{i=1}^m\mathbb {F}_{q_i}\), \({\mathbb {F}}= \prod _{i=1}^s\mathbb {F}_{p_i}\) be products of fields such that \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. Then every \((\mathbb {F},\mathbb {K})\)-linearly closed clonoid is generated by a set of unary functions and thus there are finitely many distinct \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids.
The proof of this result is given in Section 3. From this it follows that under the assumptions of Theorem 1.2 we can bound the cardinality of the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids.
Furthermore, in Section 4 we find a description of the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids as the direct product of the lattices of all \({\mathbb {F}}_{p_i}[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_{p_i}^{{\mathbb K}}\), where \({\mathbb K}^{\times }\) is the multiplicative monoid of \({\mathbb K}= \prod _{i=1}^m {\mathbb {F}}_{q_i}\). Moreover, we provide a concrete bound for the cardinality of the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids.
Theorem 1.3
Let \({\mathbb {F}}= \prod _{i=1}^s{\mathbb {F}}_{p_i}\) and \({\mathbb K}= \prod _{j=1}^m{\mathbb {F}}_{q_j}\) be products of finite fields such that \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. Then the cardinality of the lattice of all \((\mathbb {F},\mathbb {K})\)-linearly closed clonoids \(\mathcal {L}({\mathbb {F}},{\mathbb K})\) is bounded by:
$$\begin{aligned} |\mathcal {L}({\mathbb {F}},{\mathbb K})| \le \prod _{i=1}^s\sum _{1 \le r \le n}{{ n}\atopwithdelims (){r}}_{p_i}, \end{aligned}$$
where \(n = \prod _{j = 1}^mq_j\) and
$$\begin{aligned} {{n}\atopwithdelims (){h}}_q = \prod _{i=1}^h \frac{q^{n-h+i}-1}{q^i-1} \end{aligned}$$
with \(q \in {\mathbb {N}}\backslash \{1\}\).
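To get a feel for this bound, the following short Python sketch (ours, not from the paper) evaluates the Gaussian binomial coefficient and the product above for a toy choice of fields; all function names are illustrative.

def gauss_binom(n, h, q):
    # Gaussian binomial: prod_{i=1}^h (q^(n-h+i) - 1) / (q^i - 1); the
    # division is exact because the coefficient is an integer
    num = den = 1
    for i in range(1, h + 1):
        num *= q ** (n - h + i) - 1
        den *= q ** i - 1
    return num // den

def clonoid_bound(ps, qs):
    # the bound of Theorem 1.3 for F = product of F_p, K = product of F_q
    n = 1
    for q in qs:
        n *= q
    bound = 1
    for p in ps:
        bound *= sum(gauss_binom(n, r, p) for r in range(1, n + 1))
    return bound

print(clonoid_bound([5], [2, 3]))   # e.g. F = F_5, K = F_2 x F_3, so n = 6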
Preliminaries and notation
We use boldface letters for vectors, e.g., \({\varvec{u}} = (u_1,\dots ,u_n)\) for some \(n \in {\mathbb {N}}\). Moreover, we will use \(\langle {\varvec{v}}, {\varvec{u}}\rangle \) for the scalar product of the vectors \({\varvec{v}}\) and \({\varvec{u}}\). Let f be an n-ary function from an additive group \(\mathbf {G}_1\) to a group \(\mathbf {G}_2\). We say that f is 0-preserving if \(f(0_{\mathbf {G}_1},\dots ,0_{\mathbf {G}_1}) = 0_{\mathbf {G}_2}\). A non-trivial example of an \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid is the set of all 0-preserving finitary functions from \({\mathbb K}\) to \({\mathbb {F}}\). The \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids form a lattice with the intersection as meet and the \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid generated by the union as join. The top element of the lattice is the \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid of all functions and the bottom element consists of only the constant zero functions. We write \(\mathrm {Clg}(S)\) for the \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoid generated by a set of functions S.
In order to prove Theorem 1.2 we introduce the definition of 0-absorbing function. This concept is slightly different from the one in [1] since we consider the source set to be split into a product of sets. Nevertheless, some of the techniques in [1] can be used also with our definition of 0-absorbing function.
Let \(A_1, \dots , A_m\) be sets, let \(0_{A_i} \in A_i\), and let \(J \subseteq [m]\). For all \({\varvec{a}} = (a_1,\dots ,a_m) \in \prod _{i=1}^mA_i\) we define \({\varvec{a}}^{(J)}\in \prod _{i=1}^mA_i\) by \(({\varvec{a}}^{(J)})_i = a_i\) for \(i \in J\) and \(({\varvec{a}}^{(J)})_i = 0_{A_i}\) for \(i \in [m] \backslash J\).
Let \(A_1, \dots , A_m\) be sets, let \(0_{A_i} \in A_i\), let \(\mathbf {G} = \langle G, +, -, 0_G \rangle \) be an abelian group, let \(f :\prod _{i=1}^mA_i \rightarrow G\), and let \(I \subseteq [m]\). By \(\mathrm {Dep}(f)\) we denote \(\{i \in [m] \mid f \text { depends on its ith set argument}\}\). We say that f is \(0_{A_j}\)-absorbing in its jth argument if for all \({\varvec{a}} = (a_1, \dots , a_m) \in \prod _{i=1}^mA_i\) with \(a_j = 0_{A_j}\) we have \(f({\varvec{a}}) = 0_G\). We say that f is 0-absorbing in I if \(\mathrm {Dep}(f) \subseteq I\) and f is \(0_{A_i}\)-absorbing in its ith argument for every \(i \in I\).
Using the same proof as that of [1, Lemma 3] we can find an interesting property of 0-absorbing functions.
Lemma 2.1
Let \(A_1, \dots , A_m\) be sets, let \(0_{A_i}\) be an element of \(A_i\) for all \(i \in [m]\). Let \(\mathbf {B} = \langle B, +, -, 0_B\rangle \) be an abelian group, and let \(f :\prod _{i=1}^mA_{i} \rightarrow B\). Then there is exactly one sequence \(\{f_I\}_{I \subseteq [m]}\) of functions from \(\prod _{i=1}^mA_{i}\) to B such that for each \(I \subseteq [m]\), \(f_I\) is 0-absorbing in I and \(f = \sum _{I \subseteq [m]} f_I\). Furthermore, each function \(f_I\) lies in the subgroup \(\mathbf {F}\) of \(\mathbf {B}^{\prod _{i=1}^mA_{i}}\) that is generated by the functions \({\varvec{x}} \mapsto f({\varvec{x}}^{(J)})\), where \(J\subseteq [m]\).
The proof is essentially the same as that of [1, Lemma 3], substituting \(A^m\) with \(\prod _{i =1}^mA_i\). We define \(f_I\) by recursion on |I|. We set \(f_{\emptyset } ({\varvec{a}}) := f(0_{A_1}, \dots , 0_{A_m})\) for all \({\varvec{a}} \in \prod _{i=1}^mA_i\), and for \(I \not =\emptyset \) we define \(f_{I}\) by:
$$\begin{aligned} f_I({\varvec{a}}) := f({\varvec{a}}^{(I)}) - \sum _{J \subset I} f_J ({\varvec{a}}), \end{aligned}$$
for all \({\varvec{a}} \in \prod _{i=1}^mA_i\). \(\square \)
Furthermore, as in [3], we can see that the component \(f_I\) satisfies \(f_I({\varvec{a}}) = \sum _{J\subseteq I} (-1)^{|I|+|J|} f({\varvec{a}}^{(J)})\). From now on we will not specify the element that the functions absorb since it will always be the 0 of a finite field.
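As a quick numerical sanity check of this decomposition (our own toy computation, not from the paper), the following Python sketch verifies on a small example that the components given by the inclusion–exclusion formula sum back to f and are 0-absorbing; the choice of sets and the function f are arbitrary.

from itertools import combinations, product

p = 5                       # toy target group: Z_5
A = [range(3), range(3)]    # two set arguments, each containing 0

def f(a):                   # an arbitrary toy function into Z_5
    return (2 * a[0] * a[1] + a[0] + 3 * a[1] ** 2) % p

def proj(a, J):             # a^(J): keep a_i for i in J, set the rest to 0
    return tuple(ai if i in J else 0 for i, ai in enumerate(a))

def subsets(S):
    S = sorted(S)
    return [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def f_comp(I, a):           # f_I(a) = sum_{J subseteq I} (-1)^(|I|+|J|) f(a^(J))
    return sum((-1) ** (len(I) + len(J)) * f(proj(a, J)) for J in subsets(I)) % p

m = len(A)
for a in product(*A):
    assert sum(f_comp(I, a) for I in subsets(range(m))) % p == f(a)
    for I in subsets(range(m)):
        if any(a[i] == 0 for i in I):    # f_I is 0-absorbing in I
            assert f_comp(I, a) == 0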
Unary generators of \((\mathbb {F},\mathbb {K})\)-linearly closed clonoid
In this section our aim is to find an analogue of [6, Theorem 4.2] for a generic \((\mathbb {F},\mathbb {K})\)-linearly closed clonoid C, which will allow us to generate C with a set of unary functions. In general we will see that it is the unary part of an \((\mathbb {F},\mathbb {K})\)-linearly closed clonoid that determines the clonoid. To this end we shall show the following lemmata. We denote by \({\varvec{e}}_1^{{\mathbb {F}}^n_{q_i}} = (1,0,\dots ,0)\) the first member of the canonical basis of \({\mathbb {F}}^n_{q_i}\) as a vector space over \({\mathbb {F}}_{q_i}\). Let \(f: \prod _{i=1}^m {\mathbb {F}}^k_{q_i} \rightarrow {\mathbb {F}}_{p}\). Let \(s \le m\) and let \({\mathbb K}= \prod _{i=1}^s {\mathbb {F}}_{q_i}\). Then we denote by \(f\mid _{{\mathbb K}}: \prod _{i=1}^s {\mathbb {F}}^k_{q_i} \rightarrow {\mathbb {F}}_{p}\) the function such that \(f\mid _{{\mathbb K}}({\varvec{x}}_1,\dots ,{\varvec{x}}_s) = f({\varvec{x}}_1,\dots ,{\varvec{x}}_s,0,\dots ,0)\).
Lemma 3.1
Let \(f,g:\prod _{i=1}^m\mathbb {F}_{q_i}^n \rightarrow \mathbb {F}_p\) be functions, and let \({\varvec{b}}_1, \dots , {\varvec{b}}_m\) be such that \({\varvec{b}}_i \in \mathbb {F}_{q_i}^n \backslash \{(0,\dots ,0)\}\) for all \(i \in [m]\). Assume that \(f(\lambda _1{\varvec{b}}_1,\dots ,\lambda _m{\varvec{b}}_m) = g(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^n_{q_1}},\dots ,\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^n_{q_m}})\) for all \(\lambda _1 \in \mathbb {F}_{q_1},\dots ,\lambda _m \in \mathbb {F}_{q_m}\), and that \(f({\varvec{x}}) = g({\varvec{y}}) = 0\) for all \({\varvec{x}} \in \prod _{i=1}^m\mathbb {F}_{q_i}^n \backslash \{(\lambda _1{\varvec{b}}_1,\dots ,\lambda _m{\varvec{b}}_m) \mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i=1}^m\mathbb {F}_{q_i}\}\) and \({\varvec{y}} \in \prod _{i=1}^m\mathbb {F}_{q_i}^n \backslash \{(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^n_{q_1}},\dots ,\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^n_{q_m}})\mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i=1}^m\mathbb {F}_{q_i}\}\). Then \(f \in \mathrm {Clg}(\{g\})\).
For \(j \le m\) let \(A_j\) be any invertible \(n \times n\)-matrix over \({\mathbb K}_j\) such that \(A_j{\varvec{b}}_j = {\varvec{e}}^{{\mathbb {F}}^n_{q_j}}_1\). Then it is straightforward to check that \(f({\varvec{x}}_1,\dots ,{\varvec{x}}_m) = g(A_1{\varvec{x}}_1,\dots ,A_m{\varvec{x}}_m)\). \(\square \)
Lemma 3.2
Let \(q_1,\dots ,q_m\) and p be powers of primes and let \({\mathbb K}= \prod _{i=1}^m{\mathbb {F}}_{q_i}\). Let \(h \le m\) and let \({\mathbb K}_1 = \prod _{i=1}^h{\mathbb {F}}_{q_i}\). Let C be an \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoid and let
$$\begin{aligned} C\mid _{{\mathbb K}_1} := \{g \mid \exists g' \in C :g'\mid _{{\mathbb K}_1} = g\}. \end{aligned}$$
Let \(f: \prod _{i=1}^m{\mathbb {F}}^s_{q_i} \rightarrow {\mathbb {F}}_p\) with \(\mathrm {Dep}(f) = [h]\). Then \(f \in \mathrm {Clg}(C^{[1]})^{[s]}\) if and only if \(f \mid _{{\mathbb K}_1} \in \mathrm {Clg}(C\mid _{{\mathbb K}_1}^{[1]})^{[s]}\).
It is clear that if \(f \in \mathrm {Clg}(C^{[1]})\) then \(f \mid _{{\mathbb K}_1} \in \mathrm {Clg}(C\mid _{{\mathbb K}_1}^{[1]})\), simply by restricting all the unary generators of f to \({\mathbb K}_1\). Conversely, let \(S'\) be a set of unary generators of \(f\mid _{{\mathbb K}_1}\). Let \(S \subseteq C^{[1]}\) be defined by
$$\begin{aligned} S&:= \left\{ g \mid \exists g' \in S' : g(x_1,\dots ,x_h,0,\dots ,0) = g'(x_1,\dots ,x_h),\right. \\&\left. \text { for all } (x_1,\dots ,x_h) \in \prod _{i=1}^h{\mathbb {F}}_{q_i}\right\} . \end{aligned}$$
From \(\mathrm {Dep}(f) = [h]\) it follows that S is a set of unary generators of f. \(\square \)
Lemma 3.3
Let \(q_1,\dots ,q_m\) and p be powers of primes with \(\prod _{i=1}^mq_i\) and p coprime. Let \({\mathbb K}= \prod _{i=1}^m{\mathbb {F}}_{q_i}\). Let C be an \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoid, let \(g \in C^{[1]}\) be 0-absorbing in [m], and let \(t_k:\prod _{i=1}^m\mathbb {F}_{q_i}^k \rightarrow \mathbb {F}_p\) be defined by:
$$\begin{aligned}&t_k(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}},\dots ,\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}}) = g(\lambda _1,\dots ,\lambda _m) \text { for all } (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i} \\ {}&t_k({\varvec{x}}) = 0 \\ {}&\text {for all } {\varvec{x}} \in \prod _{i=1}^m\mathbb {F}^k_{q_i}\backslash \left\{ ( \lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}}, \dots ,\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}}) \mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\right\} . \end{aligned}$$
Then \(t_k\) is 0-absorbing in [m], with \(A_i = {\mathbb {F}}_{q_i}^k\) and \(0_{A_i} = (0_{{\mathbb {F}}_{q_i}},\dots ,0_{{\mathbb {F}}_{q_i}})\). Furthermore, \(t_k \in \mathrm {Clg}(C^{[1]})\) for all \(k \in {\mathbb {N}}\).
Since g is 0-absorbing in [m], so is \(t_k\). Moreover, we prove that \(t_k \in \mathrm {Clg}(C^{[1]})\) by induction on k.
Case \(k =1\): if \(k = 1\), then \(t_1 = g\) is a unary function of \(C^{[1]}\).
Case \(k>1\): we assume that \(t_{k-1} \in \mathrm {Clg}(C^{[1]})\). For all \(1 \le i \le m\) we define the two sets of mappings \(T_i^{[k]} \) and \(R_i^{[k]}\) from \({\mathbb {F}}_{q_i}^k\) to \({\mathbb {F}}_{q_i}^{k-1}\) by:
$$\begin{aligned} T_i^{[k]}&:=\{u_{a}:(x_1,\dots ,x_k) \mapsto (x_1-ax_2,x_3,\dots ,x_k)\mid a \in \mathbb {F}_{q_i}\} \\ R_i^{[k]}&:=\{w_{a}:(x_1,\dots ,x_k) \mapsto (ax_2,x_3,\dots ,x_k)\mid a \in \mathbb {F}_{q_i}\backslash \{0\}\}. \end{aligned}$$
Let \(P_i^{[k]} := T_i^{[k]} \cup R_i^{[k]}\). Furthermore, we define the function \(c ^{[k]}:\bigcup _{i=1}^m P_i^{[k]} \rightarrow {\mathbb {N}}\) by:
$$\begin{aligned} c^{[k]}(h)= {\left\{ \begin{array}{ll} 0 &{}\quad \text {if } h \in \bigcup _{i=1}^mT_i^{[k]}, \\ 1 &{}\quad \text {if } h \in \bigcup _{i=1}^mR_i^{[k]}. \end{array}\right. } \end{aligned}$$
Let us define the function \(r_k:\prod _{i=1}^m\mathbb {F}_{q_i}^k \rightarrow \mathbb {F}_p\) by:
$$\begin{aligned}&r_k({\varvec{x}}_1,\dots , {\varvec{x}}_m) \nonumber \\&\quad = \sum _{h_1 \in P_1^{[k]},\dots ,h_m \in P_m^{[k]}} (-1)^{\sum _{i=1}^m \text {c}^{[k]}(h_i)} t_{k-1}(h_1({\varvec{x}}_1),\dots ,h_m({\varvec{x}}_m)), \end{aligned}$$
for all \({\varvec{x}}_i \in \mathbb {F}_{q_i}^k\).
Claim: \(r_k({\varvec{x}}_1,\dots , {\varvec{x}}_m) = \prod _{i =1}^mq_i \cdot t_{k}({\varvec{x}}_1,\dots , {\varvec{x}}_m)\) for all \(({\varvec{x}}_1,\dots , {\varvec{x}}_m) \in \prod _{i=1}^m {\mathbb {F}}_{q_i}^k\)
Subcase \(\exists i \in [m], 3 \le j \le k\) with \(({\varvec{x}}_i)_j \not =0\):
By definition of \(t_{k-1}\), we can see that in (3.1) every summand vanishes if there exist \(i \in [m]\) and \(3 \le j \le k\) with \(({\varvec{x}}_i)_j \not =0\). Thus \(r_k({\varvec{x}}_1,\dots , {\varvec{x}}_m) = \prod _{i =1}^mq_i \cdot t_{k}({\varvec{x}}_1,\dots , {\varvec{x}}_m) = 0\) in this case.
Subcase \(\exists l \in [m] \) with \(({\varvec{x}}_l)_2 \not =0\) and \(({\varvec{x}}_i)_j =0\) for all \( i \in [m], 3 \le j \le k\):
We prove that \(r_k({\varvec{x}}_1,\dots , {\varvec{x}}_m) = 0\). We can see that for all \((x_1,x_2) \in {\mathbb {F}}_{q_l} \times {\mathbb {F}}_{q_l}\backslash \{0\}\) and for all \(b \in {\mathbb {F}}_{q_l}\backslash \{0\}\), there exists \(a \in {\mathbb {F}}_{q_l}\) such that \(bx_2 = x_1-ax_2\), and clearly \(a = x_1x_2^{-1}-b\). Conversely, for all \((x_1,x_2) \in {\mathbb {F}}_{q_l} \times {\mathbb {F}}_{q_l}\backslash \{0\}\) and for all \(a \in {\mathbb {F}}_{q_l}\backslash \{x_1x_2^{-1}\}\) there exists \(b \in {\mathbb {F}}_{q_l}\backslash \{0\}\) such that \(bx_2 = x_1-ax_2\), and clearly \(b = x_1x_2^{-1}-a\).
With this observation we can see that for all \(h_i \in P_i^{[k]}\) with \(i \in [m]\backslash \{l\}\) and for all \(({\varvec{x}}_1,\dots ,{\varvec{x}}_m) \in \prod _{i=1}^m{\mathbb {F}}^k_{q_i}\) with \(({\varvec{x}}_l)_1 = x_1\) and \(({\varvec{x}}_l)_2 = x_2\) we have that if \(a \not = x_1x_2^{-1}\) then:
$$\begin{aligned}&t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),u_{a}({\varvec{x}}_l),h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) \\&\quad =t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),w_{x_1x_2^{-1}-a}({\varvec{x}}_l),h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) \end{aligned}$$
where \(u_{a} \in T_l^{[k]}\) and \(w_{x_1x_2^{-1}-a}\in R_l^{[k]}\). Thus they produce summands with different signs in (3.1). Moreover, if \(a = x_1x_2^{-1}\), then
$$\begin{aligned}&t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),u_{a}({\varvec{x}}_l),h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) \\ {}&\quad = t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),{\varvec{0}}_{{\mathbb {F}}^{k-1}_{q_l}},h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) = 0, \end{aligned}$$
since \(t_{k-1}\) is 0-absorbing in [m]. This implies that all the summands of \(r_k\) cancel if \(({\varvec{x}}_l)_2 \not = 0\). Thus \(r_k({\varvec{x}}_1,\dots , {\varvec{x}}_m) = \prod _{i =1}^mq_i \cdot t_{k}({\varvec{x}}_1,\dots , {\varvec{x}}_m) = 0\) in this case.
Subcase \(({\varvec{x}}_1,\dots ,{\varvec{x}}_m) = (\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^{k}_{q_1}},\dots , \lambda _m{\varvec{e}}_1^{{\mathbb {F}}^{k}_{q_m}})\) for some \((\lambda _1,\dots ,\lambda _m) \in \prod _{i=1}^m {\mathbb {F}}_{q_i}\):
We can observe that:
$$\begin{aligned}&t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),h_l(\lambda _l{\varvec{e}}_1^{{\mathbb {F}}^k_{q_l}}),h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) \\ {}&\quad =t_{k-1} (h_1({\varvec{x}}_1),\dots ,h_{l-1}({\varvec{x}}_{l-1}),{\varvec{0}}_{{\mathbb {F}}^{k-1}_{q_l}},h_{l+1}({\varvec{x}}_{l+1}),\dots ,h_m({\varvec{x}}_{m})) = 0, \end{aligned}$$
for all \(h_i \in P_i^{[k]}\) with \(i \in [m]\backslash \{l\}\), for all \(l \le m\), \(\lambda _l \in {\mathbb {F}}_{q_l}\), \({\varvec{x}}_i \in {\mathbb {F}}^k_{q_i}\), and \(h_{l} \in R^{[k]}_l\), since \(t_{k-1}\) is 0-absorbing in [m]. Thus we can observe that:
$$\begin{aligned} \begin{aligned}&r_k(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}},\dots , \lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}})\\&\quad =\sum _{h_i \in P_i^{[k]}} (-1)^{\sum _{i=1}^m \text {c}^{[k]}(h_i)}t_{k-1} (h_1(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}}),\dots , h_m(\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}}))\\&\quad = \sum _{h_i \in T_i^{[k]}} (-1)^{\sum _{i=1}^m \text {c}^{[k]}(h_i)}t_{k-1} (h_1(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}}),\dots , h_m(\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}}))\\&\quad = \sum _{h_i \in T_i^{[k]}} t_{k-1} (h_1(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^k_{q_1}}),\dots , h_m(\lambda _m{\varvec{e}}_1^{{\mathbb {F}}^k_{q_m}}))\\&\quad = \sum _{h_i \in T_i^{[k]}} t_{k-1} (\lambda _1{\varvec{e}}_1^{{\mathbb {F}}_{q_1}^{k-1}},\dots , \lambda _m{\varvec{e}}_1^{{\mathbb {F}}_{q_m}^{k-1}})\\&\quad =\prod _{i=1}^mq_i\cdot t_{k-1} (\lambda _1{\varvec{e}}_1^{{\mathbb {F}}_{q_1}^{k-1}},\dots , \lambda _m{\varvec{e}}_1^{{\mathbb {F}}_{q_m}^{k-1}})\\&\quad = \prod _{i=1}^mq_i\cdot t_k(\lambda _1{\varvec{e}}_1^{{\mathbb {F}}^{k}_{q_1}},\dots , \lambda _m{\varvec{e}}_1^{{\mathbb {F}}^{k}_{q_m}}). \end{aligned} \end{aligned}$$
Thus \(r_k = \prod _{i =1}^mq_i \cdot t_{k}\).
Because of (3.1) and the inductive hypothesis, we have \(r_k \in \mathrm {Clg}(\{t_{k-1}\}) \subseteq \mathrm {Clg}(C^{[1]})\). Thus \(\prod _{i = 1}^mq_i \cdot t_{k} \in \mathrm {Clg}(C^{[1]})\). Since \(\prod _{i = 1}^mq_i \not = 0\) modulo p we have that \(t_{k} \in \mathrm {Clg}(C^{[1]})\) and this concludes the induction proof. \(\square \)
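The key identity \(r_k = \prod _{i=1}^m q_i \cdot t_{k}\) is easy to check numerically. The following Python sketch (our own sanity check, not part of the paper) verifies it for the toy case \({\mathbb K}= {\mathbb {F}}_2 \times {\mathbb {F}}_3\), \(p = 5\), \(k = 2\), with the 0-absorbing unary function \(g(a,b) = ab \bmod 5\); all names are illustrative.

from itertools import product

qs, p, k = [2, 3], 5, 2

def g(lams):                 # unary 0-absorbing toy function from K to F_5
    r = 1
    for lam in lams:
        r *= lam
    return r % p

def t(j, xs):                # t_j: vanishes off the tuples (l_1 e_1, ..., l_m e_1)
    if all(all(v == 0 for v in x[1:]) for x in xs):
        return g(tuple(x[0] for x in xs))
    return 0

def maps(q):                 # P^[k] for one factor, as pairs (map, sign exponent c)
    ms = [((lambda x, a=a: ((x[0] - a * x[1]) % q,) + x[2:]), 0) for a in range(q)]
    ms += [((lambda x, a=a: ((a * x[1]) % q,) + x[2:]), 1) for a in range(1, q)]
    return ms

def r(xs):                   # r_k as defined in the proof above
    tot = 0
    for hs in product(*(maps(q) for q in qs)):
        sign = (-1) ** sum(c for _, c in hs)
        tot += sign * t(k - 1, tuple(h(x) for (h, _), x in zip(hs, xs)))
    return tot % p

Q = 1
for q in qs:
    Q *= q
for xs in product(*(list(product(range(q), repeat=k)) for q in qs)):
    assert r(xs) == (Q * t(k, xs)) % p   # r_k = q_1 q_2 * t_k (mod p)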
Lemma 3.4
Let \(q_1,\dots ,q_m\) and p be powers of primes with \(\prod _{i=1}^mq_i\) and p coprime and let \({\mathbb K}= \prod _{i=1}^m{\mathbb {F}}_{q_i}\). Let C be an \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoid, let \(I \subseteq [m]\) and let \(f \in C\) be 0-absorbing in I. Then \(f \in \mathrm {Clg}(C^{[1]})\).
Let \({\mathbb K}_1 = \prod _{i \in I}{\mathbb {F}}_{q_i}\) and let \(C_1 := \{g \mid \exists g' \in C :g'\mid _{{\mathbb K}_1} = g\}\). By Lemma 3.2, \(f \in \mathrm {Clg}(C^{[1]})\) if and only if \(f \mid _{{\mathbb K}_1} \in \mathrm {Clg}(C\mid _{{\mathbb K}_1}^{[1]})\), and we observe that \(f\mid _{{\mathbb K}_1}\) is 0-absorbing in I. Thus without loss of generality we fix \(I = [m]\). The strategy is to interpolate f in all the distinct products of lines of the form \(\{(\lambda _{1}{\varvec{b}}_{1},\dots ,\lambda _m{\varvec{b}}_{m}) \mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\}\) with \({\varvec{b}}_i \in {\mathbb {F}}_{q_i}^n\backslash \{(0,\dots ,0)\}\). To this end let \(R = \{L_j \mid 1 \le j \le \prod _{i = 1}^m(q_i^{n} - 1)/(q_i-1) = s\}\) be the set of all s distinct products of lines of \(\prod _{i = 1}^m{\mathbb {F}}^n_{q_i}\) and let \({\varvec{l}}_{(i,j)} \in \mathbb {F}_{q_i}^n\) be such that \(({\varvec{l}}_{(1,j)},\dots ,{\varvec{l}}_{(m,j)})\) generates the product of m lines \(L_j\), for \(1 \le j \le s\), \(1 \le i \le m\). For all \(1 \le j \le s\), let \(f_{L_j}:\prod _{i=1}^m\mathbb {F}_{q_i}^n \rightarrow \mathbb {F}_p\) be defined by:
$$\begin{aligned} f_{L_j}(\lambda _1{\varvec{l}}_{(1,j)},\dots ,\lambda _m{\varvec{l}}_{(m,j)}) = f(\lambda _1{\varvec{l}}_{(1,j)},\dots ,\lambda _m{\varvec{l}}_{(m,j)}) \end{aligned}$$
for \( (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\) and \(f_{L_j}({\varvec{x}}) = 0\) for all \({\varvec{x}} \in \prod _{i=1}^m\mathbb {F}^n_{q_i}\backslash \{(\lambda _1{\varvec{l}}_{(1,j)},\) \(\dots ,\lambda _m{\varvec{l}}_{(m,j)}) \mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\}\).
Claim 1: \({f= \sum _{j = 1}^{s} f_{L_j}}\).
Since f is 0-absorbing in [m] we have that:
$$\begin{aligned} \sum _{j = 1}^{s} f_{L_j}(\lambda _1{\varvec{l}}_{(1,z)},\dots ,\lambda _m{\varvec{l}}_{(m,z)})&= f_{L_z}(\lambda _1{\varvec{l}}_{(1,z)},\dots ,\lambda _m{\varvec{l}}_{(m,z)}) \\ {}&=f(\lambda _1{\varvec{l}}_{(1,z)},\dots ,\lambda _m{\varvec{l}}_{(m,z)}) \end{aligned}$$
for all \( (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\) and \(z \in [s]\), since for all \(j_1,j_2 \in [s]\), \(L_{j_1}\) and \(L_{j_2}\) intersect only in points of the form \(({\varvec{x}}_1,\dots ,{\varvec{x}}_m)\) \( \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}^n\) with \({\varvec{x}}_i = (0,\dots ,0)\) for some \(i \in [m]\).
Let \(1 \le j \le s\) and let \(g:\prod _{i=1}^m\mathbb {F}_{q_i} \rightarrow \mathbb {F}_p\) be a function such that:
$$\begin{aligned} f_{L_j}(\lambda _1{\varvec{l}}_{(1,j)},\dots ,\lambda _m{\varvec{l}}_{(m,j)}) = g(\lambda _1,\dots ,\lambda _m) = f(\lambda _1{\varvec{l}}_{(1,j)},\dots ,\lambda _m{\varvec{l}}_{(m,j)}) \end{aligned}$$
for all \( (\lambda _1,\dots ,\lambda _m) \in \prod _{i = 1}^m{\mathbb {F}}_{q_i}\). Then \(g \in C^{[1]}\).
Claim 2: \(f_{L_j} \in \mathrm {Clg}(C^{[1]})\) for all \(L_j \in R\).
We can observe that \(f_{L_j}(\lambda _1{\varvec{l}}_{(1,j)},\dots , \lambda _m{\varvec{l}}_{(m,j)}) = g(\lambda _1,\dots ,\lambda _m)\) for all \((\lambda _1,\dots ,\) \(\lambda _m) \in \prod _{i=1}^m\mathbb {F}_{q_i}\), and \(f_{L_j}({\varvec{x}}_1,\dots ,{\varvec{x}}_m) = 0\) for all \(({\varvec{x}}_1,\dots ,{\varvec{x}}_m) \in \prod _{i=1}^m\mathbb {F}_{q_i}^n \backslash \{(\lambda _1{\varvec{l}}_{(1,j)},\) \(\dots , \lambda _m{\varvec{l}}_{(m,j)})\mid (\lambda _1,\dots ,\lambda _m) \in \prod _{i=1}^m\mathbb {F}_{q_i} \}\). Furthermore, g is 0-absorbing in [m]. By Lemmata 3.1 and 3.3, \(f_{L_j} \in \mathrm {Clg}(C^{[1]})\), which concludes the proof of \(f \in \mathrm {Clg}(C^{[1]})\). \(\square \)
We are now ready to prove that an \((\mathbb {F},\mathbb {K})\)-linearly closed clonoid C is generated by its unary part.
Theorem 3.5
Let \(q_1,\dots ,q_m\) and p be powers of primes with \(\prod _{i=1}^mq_i\) and p coprime and let \({\mathbb K}= \prod _{i=1}^m{\mathbb {F}}_{q_i}\). Then every \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoid C is generated by its unary functions. Thus \(C = \mathrm {Clg}(C^{[1]})\).
The inclusion \(\supseteq \) is obvious. For the other inclusion let C be an \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoid and let f be an n-ary function in C. By Lemma 2.1 with \(A_i = {\mathbb {F}}_{q_i}^n\) and \(0_{A_i} = (0_{{\mathbb {F}}_{q_i}},\dots ,0_{{\mathbb {F}}_{q_i}})\), f can be split into the sum of n-ary functions \(\sum _{I \subseteq [m]}f_I\) such that for each \(I \subseteq [m]\), \(f_I\) is 0-absorbing in I. Furthermore, each function \(f_I\) lies in the subgroup \(\mathbf {F}\) of \(\mathbb {F}_p^{{\mathbb K}^n}\) that is generated by the functions \({\varvec{x}} \mapsto f({\varvec{x}}^{(I)})\), where \(I\subseteq [m]\), and thus each summand \(f_I\) is in C. By Lemma 3.4 each of these summands is in \(\mathrm {Clg}(C^{[1]})\), and thus \(f \in \mathrm {Clg}(C^{[1]})\). \(\square \)
The next corollary of Theorem 3.5 and the following theorem tell us that there are only finitely many distinct \((\mathbb {F},\mathbb {K})\)-linearly closed clonoids.
Corollary 3.6
Let \(q_1,\dots ,q_m\) and p be powers of primes with \(\prod _{i=1}^mq_i\) and p coprime and let \({\mathbb K}=\prod _{i=1}^m{\mathbb {F}}_{q_i}\). Let C and D be two \(({\mathbb {F}}_p,{\mathbb K})\)-linearly closed clonoids. Then \(C = D\) if and only if \(C^{[1]} = D^{[1]}\).
Let us denote by \(\mathcal {L}({\mathbb {F}},{\mathbb K})\) the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids. We define the functions \(\rho _i:\mathcal {L}({\mathbb {F}},{\mathbb K}) \rightarrow \mathcal {L}({\mathbb {F}}_{p_i},{\mathbb K})\) such that for all \(1 \le i \le s\) and for all \(C \in \mathcal {L}({\mathbb {F}},{\mathbb K})\):
$$\begin{aligned} \rho _i(C) := \{f \mid \text { there exists } g \in C :f = \pi _i^{{\mathbb {F}}} \circ g\}, \end{aligned}$$
where with \(\pi _i^{{\mathbb {F}}}\) we denote the projection over the i-th component of the product of fields \({\mathbb {F}}\).
Theorem 3.7
Let \(\mathbb {F} = \prod _{i =1}^s{\mathbb {F}}_{p_i}\) and \(\mathbb {K} = \prod _{i =1}^m{\mathbb {F}}_{q_i}\) be products of finite fields. Then the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids is isomorphic to the direct product of the lattices of all \(({\mathbb {F}}_{p_i},{\mathbb K})\)-linearly closed clonoids with \(1 \le i \le s\).
Let us define the function \(\rho :\mathcal {L}({\mathbb {F}},{\mathbb K}) \rightarrow \prod _{i=1}^s\mathcal {L}({\mathbb {F}}_{p_i},{\mathbb K})\) such that \(\rho (C) := (\rho _1(C),\dots ,\rho _s(C))\). Clearly \(\rho \) is well-defined. Conversely, let \(\psi :\) \(\prod _{i=1}^s\mathcal {L}\) \(({\mathbb {F}}_{p_i},{\mathbb K}) \rightarrow \mathcal {L}({\mathbb {F}},{\mathbb K})\) be defined by:
$$\begin{aligned} \psi (C_1,\dots ,C_s) = \bigcup _{k \in {\mathbb {N}}}\{f :{\varvec{x}} \mapsto (f_1({\varvec{x}}),\dots ,f_s({\varvec{x}}))\mid f_1\in C_1^{[k]}, \dots , f_s\in C_s^{[k]}\}. \end{aligned}$$
From this definition it is clear that \(\psi \) is well defined. Furthermore,
$$\begin{aligned} \rho \psi (C_1,\dots ,C_s) = (C_1,\dots ,C_s) \end{aligned}$$
and \(C\subseteq \psi \rho (C)\) for all \((C_1,\dots ,C_s) \in \prod _{i=1}^s\mathcal {L}({\mathbb {F}}_{p_i},{\mathbb K})\) and \(C \in \mathcal {L}({\mathbb {F}},{\mathbb K})\).
To prove that \(C \supseteq \psi \rho (C)\) let \(f\in \psi \rho (C)\). Then there exist \(f_1,\dots ,f_s\) with \(f_i \in \rho _i(C)\) such that \(f_i = \pi _i^{{\mathbb {F}}} \circ f\) for all \(i \in [s]\). By definition of \(\rho \), there exist \(g_1,\dots ,g_s \in C\) such that \(f_i = \pi _i^{{\mathbb {F}}} \circ g_i\) for all \(i \in [s]\). Let \({\varvec{a}}_i \in {\mathbb {F}}\) be such that \({\varvec{a}}_i(j) = 0\) for \(j \not = i\) and \({\varvec{a}}_i(i) = 1\). It is easy to check that \(\sum _{i=1}^s {\varvec{a}}_ig_i = f\), and thus \(f \in C\).
Hence \(\rho \) is a lattice isomorphism. \(\square \)
Proof of Theorem 1.2
Let \({\mathbb {F}}= \prod _{i=1}^s{\mathbb {F}}_{p_i}\) and \({\mathbb K}= \prod _{i=1}^m{\mathbb {F}}_{q_i}\) be products of finite fields with \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) coprime. Let \(C \in \mathcal {L}({\mathbb {F}},{\mathbb K})\). By Theorem 3.7, C is uniquely determined by its projections \(C_1 = \rho _1(C),\dots , C_s = \rho _s(C)\), where \(\rho _i\) is defined in (3.2). By Theorem 3.5 we have that for all \(i \in [s]\) every \(({\mathbb {F}}_{p_i},{\mathbb K})\)-linearly closed clonoid \(C_i\) is uniquely determined by its unary part \(C_i^{[1]}\). Thus C is uniquely determined by its unary part \(C^{[1]}\). \(\square \)
The lattice of all \((\mathbb {F},\mathbb {K})\)-linearly closed clonoids
In this section we characterize the structure of the lattice \(\mathcal {L}({\mathbb {F}},{\mathbb K})\) of all \((\mathbb {F},\mathbb {K})\)-linearly closed clonoids through a description of their unary parts. Let \(\mathbb {F} = \prod _{i=1}^s{\mathbb {F}}_{p_i}\) and \(\mathbb {K} = \prod _{j=1}^m{\mathbb {F}}_{q_j}\) be products of finite fields such that \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime numbers.
We will see that \(\mathcal {L}({\mathbb {F}},{\mathbb K})\) is isomorphic to the product of the lattices of all \({\mathbb {F}}_{p_i}[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_{p_i}^{{\mathbb K}}\), where \({\mathbb K}^{\times }\) is the multiplicative monoid of \({\mathbb K}= \prod _{i=1}^m {\mathbb {F}}_{q_i}\). In order to characterize the lattice of all \(({\mathbb {F}}, {\mathbb K})\)-linearly closed clonoids we need the definition of monoid ring.
Let \(\langle M, \cdot \rangle \) be a commutative monoid and let \(\langle R, +, \odot \rangle \) be a commutative ring with identity. Let
$$\begin{aligned} S := \{f \in R^M \mid f(a) \not = 0 \text { for only finitely many } a \in M\}. \end{aligned}$$
We define the monoid ring of M over R as the ring \((S, +, \cdot )\), where \(+\) is the point-wise addition of functions and the multiplication is defined as \(f\cdot g : M \rightarrow R\), which maps each \(m \in M\) to:
$$\begin{aligned} \sum _{m_1,m_2 \in M,m_1m_2=m}f(m_1)g(m_2). \end{aligned}$$
We denote by R[M] the monoid ring of M over R. Following the notation in [3], for all \(a \in M\) we define \(\tau _a\) to be the element of \(R^M\) with \(\tau _a(a) = 1\) and \(\tau _a (M\backslash \{a\}) = \{0\}\). We observe that for all \(f \in R[M]\) there is an \({\varvec{r}} \in R^M\) such that \(f = \sum _{a\in M} r_a\tau _a \) and that we can multiply such expressions with the rule \(\tau _a \cdot \tau _b = \tau _{ab}\).
Let M be a commutative monoid and let R be a commutative ring. We denote by \(R^M\) the R[M]-module with the action \(*\) defined by:
$$\begin{aligned} (\tau _{a} *f)(x) = f(ax), \end{aligned}$$
for all \(a \in M\) and \(f \in R^M\).
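As a concrete illustration of these two definitions (ours, with toy parameters), the following Python sketch realizes a finite monoid ring and the module action above, representing an element of R[M] as a dictionary from monoid elements to coefficients; here M is the multiplicative monoid of \({\mathbb {Z}}_6\) and \(R = {\mathbb {Z}}_5\).

qK, p = 6, 5                     # toy choices: M = (Z_6, *), R = Z_5

def mul_M(a, b):                 # the monoid operation of M
    return (a * b) % qK

def ring_mul(f, g):              # convolution: (f.g)(m) = sum_{m1 m2 = m} f(m1) g(m2)
    h = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            m = mul_M(m1, m2)
            h[m] = (h.get(m, 0) + c1 * c2) % p
    return {m: c for m, c in h.items() if c}

tau = lambda a: {a: 1}           # tau_a as in the text
assert ring_mul(tau(2), tau(3)) == tau(0)   # tau_2 . tau_3 = tau_6 = tau_0 in Z_6

def act(r_elt, func):            # the action (tau_a * f)(x) = f(ax), extended linearly
    return lambda x: sum(c * func(mul_M(a, x)) for a, c in r_elt.items()) % p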
Let \({\mathbb K}^{\times }\) be the multiplicative monoid of \({\mathbb K}= \prod _{i=1}^m {\mathbb {F}}_{q_i}\). We can observe that V is an \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodule of \(\mathbb {F}_p^{{\mathbb K}}\) if and only if it is a subspace of \(\mathbb {F}_p^{{\mathbb K}}\) satisfying
$$\begin{aligned} (x_1,\dots ,x_m) \mapsto f(a_1x_1,\dots ,a_mx_m) \in V, \end{aligned}$$
for all \(f \in V\) and \((a_1,\dots ,a_m) \in \prod _{i=1}^m{\mathbb {F}}_{q_i}\). Clearly the following lemma holds.
Let \(p, q_1,\dots q_m\) be powers of primes and let \({\mathbb K}= \prod _{i =1}^m {\mathbb {F}}_{q_i}\). Let \(V \subseteq \mathbb {F}_p^{{\mathbb K}}\). Then V is the unary part of an \((\mathbb {F}_p , {\mathbb K})\)-linearly closed clonoid if and only if it is an \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodule of \(\mathbb {F}_p^{{\mathbb K}}\).
Together with Theorem 3.7 this immediately yields the following.
Let \({\mathbb K}= \prod _{i =1}^m {\mathbb {F}}_{q_i}\) and \({\mathbb {F}}= \prod _{i =1}^s {\mathbb {F}}_{p_i}\) be products of finite fields such that \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. Then the function \(\pi ^{[1]}\) that sends an \(({\mathbb {F}}, {\mathbb K})\)-linearly closed clonoid to its unary part is an isomorphism between the lattice of all \(({\mathbb {F}}, {\mathbb K})\)-linearly closed clonoids and the direct product of the lattices of all \({\mathbb {F}}_{p_i}[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_{p_i}^{{\mathbb K}}\).
With the same strategy as in [6, Lemma 5.6] we obtain the following lemma.
Let \({\mathbb K}= \prod _{i =1}^m {\mathbb {F}}_{q_i}\) and \({\mathbb {F}}= \prod _{i =1}^s {\mathbb {F}}_{p_i}\) be products of finite fields such that \(|{\mathbb K}|\) and \(|{\mathbb {F}}|\) are coprime. Then every \(({\mathbb {F}}, {\mathbb K})\)-linearly closed clonoid is finitely related.
The next step is to characterize the lattice of all \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_p^{{\mathbb K}}\). To this end we observe that V is an \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodule of \(\mathbb {F}_p^{{\mathbb K}}\) if and only if it is a subspace of \(\mathbb {F}_p^{{\mathbb K}}\) satisfying (4.1).
We can provide a bound for the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids given by the number of subspaces of \({\mathbb {F}}_{p_i}^{{\mathbb K}}\).
Remark 4.6
It is a well-known fact in linear algebra that the number of k-dimensional subspaces of an n-dimensional vector space V over a finite field \({\mathbb {F}}_q\) is the Gaussian binomial coefficient:
$$\begin{aligned} {{n}\atopwithdelims (){k}}_q = \prod _{i=1}^k \frac{q^{n-k+i}-1}{q^i-1}. \end{aligned}$$
From this remark we directly obtain the bound of Theorem 1.3. In order to determine the exact cardinality of the lattice of all \(({\mathbb {F}},{\mathbb K})\)-linearly closed clonoids we have to deal with the problem of finding the \({\mathbb {F}}_p[{\mathbb K}^{\times }]\)-submodules of \(\mathbb {F}_p^{{\mathbb K}}\). We will not study this problem here because we think it is an interesting problem that deserves its own line of research.
Aichinger, E.: Solving systems of equations in supernilpotent algebras. In: 44th International Symposium on Mathematical Foundations of Computer Science, LIPIcs. Leibniz Int. Proc. Inform, 138, Art. No. 72, 9, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern (2019)
Aichinger, E., Mayr, P.: Finitely generated equational classes. J. Pure Appl. Algebra 220(8), 2816–2827 (2016)
Aichinger, E., Moosbauer, J.: Chevalley-Warning type results on abelian groups. J. Algebra 569, 30–66 (2021)
Brakensiek, J., Guruswami, V.: Promise constraint satisfaction structure theory and a symmetric Boolean dichotomy. In: Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms ( SODA'18), pp. 1782–1801. SIAM, Philadelphia (2018)
Bulín, J., Krokhin, A., Opršal, J.: Algebraic approach to promise constraint satisfaction. In: Proceedings of the Annual ACM Symposium on Theory of Computing (STOC '19), pp. 602–613. ACM, New York (2019)
Fioravanti, S.: Closed sets of finitary functions between finite fields of coprime order. Algebra Universalis 81, 52 (2020)
Harnau, W.: Ein verallgemeinerter Relationenbegriff für die Algebra der mehrwertigen Logik. I. Grundlagen Rostock. Math. Kolloq. 28, 5–17 (1985). (German)
Kreinecker, S.: Closed function sets on groups of prime order. J. Multiple-Valued Logic Soft Comput. 33(1–2), 51–74 (2019)
Krokhin, A., Bulatov, A.A., Jeavons, P.: The complexity of constraint satisfaction: an algebraic approach. In: Structural Theory of Automata, Semigroups, and Universal Algebra. NATO Sci. Ser. II Math. Phys. Chem., vol. 207, pp. 181–213. Springer, Dordrecht (2005)
Lehtonen, E.: Closed classes of functions, generalized constraints, and clusters. Algebra Universalis 63(2–3), 203–234 (2010)
Pöschel, R., Kalužnin, L.A.: Funktionen- und Relationenalgebren. Mathematische Monographien [Mathematical Monographs], vol. 15. VEB Deutscher Verlag der Wissenschaften, Berlin (1979)
Post, E.L.: The two-valued iterative systems of mathematical logic. Ann. of Math. Stud., vol. 5. Princeton University Press, Princeton (1941)
Rosenberg, I.: Maximal clones on algebras \(A\) and \(A^{r}\). Rend. Circ. Mat. Palermo 2(18), 329–333 (1969)
Sparks, A.: On the number of clonoids. Algebra Universalis 80(4), 53 (2019)
Szendrei, Á.: Clones in universal algebra. In: Séminaire de Mathématiques Supérieures. Seminar on Higher Mathematics, vol. 99. Presses de l'Université de Montréal, Montreal (1986)
The author thanks Erhard Aichinger, who inspired this paper, Erkko Lehtonen who reviewed my Ph.D. thesis, and Sebastian Kreinecker for many hours of fruitful discussions. The author thanks the referees for their useful suggestions.
Open access funding provided by Johannes Kepler University Linz.
Institut für Algebra, Johannes Kepler Universität Linz, 4040, Linz, Austria
Stefano Fioravanti
Correspondence to Stefano Fioravanti.
Presented by R. Pöschel.
The research was supported by the Austrian Science Fund (FWF): P29931.
Journal of Biomolecular NMR
NMR characterization of solvent accessibility and transient structure in intrinsically disordered proteins
Christoph Hartlmüller
Emil Spreitzer
Christoph Göbl
Fabio Falsone
Tobias Madl
In order to understand the conformational behavior of intrinsically disordered proteins (IDPs) and their biological interaction networks, the detection of residual structure and long-range interactions is required. However, the large number of degrees of conformational freedom of disordered proteins requires the integration of extensive sets of experimental data, which are difficult to obtain. Here, we provide a straightforward approach for the detection of residual structure and long-range interactions in IDPs under near-native conditions using solvent paramagnetic relaxation enhancement (sPRE). Our data indicate that for the general case of an unfolded chain, with a local flexibility described by the overwhelming majority of available combinations, sPREs of non-exchangeable protons can be accurately predicted through an ensemble-based fragment approach. We show for the disordered protein α-synuclein and disordered regions of the proteins FOXO4 and p53 that deviation from random coil behavior can be interpreted in terms of intrinsic propensity to populate local structure in interaction sites of these proteins and to adopt transient long-range structure. The presented modification-free approach promises to be applicable to study conformational dynamics of IDPs and other dynamic biomolecules in an integrative approach.
Keywords: Solvent paramagnetic relaxation enhancement · Intrinsically disordered proteins · Residual structure · p53 · FOXO4 · α-Synuclein
Christoph Hartlmüller and Emil Spreitzer are the shared first authors.
The online version of this article (https://doi.org/10.1007/s10858-019-00248-2) contains supplementary material, which is available to authorized users.
The well-established structure–function paradigm has been challenged by the discovery of intrinsically disordered proteins (IDPs) (Dyson and Wright 2005). It is suggested that about 40% of all proteins have disordered regions of 40 or more residues, with many proteins existing solely in the unfolded state (Tompa 2012; Romero et al. 1998). Although they lack stable secondary or tertiary structure elements, this large class of proteins plays a crucial role in various cellular processes (Theillet et al. 2014; Wright and Dyson 2015; van der Lee et al. 2014; Uversky et al. 2014). Disorder serves a biological role, where conformational heterogeneity granted by disordered regions enables proteins to exert diverse functions in response to various stimuli. Unlike structured proteins, which are essential for catalysis and transport, disordered proteins are crucial for regulation and signaling. Due to their intrinsic flexibility they can act as network hubs interacting with a wide range of biomolecules, forming dynamic regulatory networks (Dyson and Wright 2005; Tompa 2012; Babu et al. 2011; Flock et al. 2014; Wright and Dyson 1999; Uversky 2011; Habchi et al. 2014). Given the plethora of potential interaction partners, it is not surprising that the interactions of IDPs with binding partners are often tightly regulated via an intricate 'code' of post-translational modifications, including phosphorylation, methylation, acetylation, and various others (Wright and Dyson 2015; Bah and Forman-Kay 2016). These proteins, and distortions in their interaction networks, for example by mutations and aberrant post-translational modifications (PTMs), are closely linked to a range of human diseases, including cancers, neurodegeneration, cardiovascular disorders and diabetes, yet they are currently considered difficult to study (Dyson and Wright 2005; Tompa 2012; Babu et al. 2011; Habchi et al. 2014; Metallo 2010; Uversky et al. 2008; Dyson and Wright 2004). Complications arise from the following factors: these proteins lack well-defined stable structure, they exist in a dynamic equilibrium of distinct conformational states, and the limited number of experimental techniques and observables renders IDP conformational characterization underdetermined (Mittag and Forman-Kay 2007; Eliezer 2009). Thus, the integration of new sets of experimental and analytical techniques is required to characterize the conformational behavior of IDPs.
Although IDPs are highly dynamic, they often contain transiently-folded regions, such as transiently populated secondary or tertiary structure, transient long-range interactions or transient aggregation (Marsh et al. 2007; Shortle and Ackerman 2001; Bernado et al. 2005; Mukrasch et al. 2007; Wells et al. 2008). These transiently-structured regions are of particular interest to study the biological function of IDPs as they can report on biologically-relevant interactions and encode biological function. Examples are aggregation, liquid–liquid phase separation, binding to folded co-factors, or modifying enzymes (Yuwen et al. 2018; Brady et al. 2017; Choy et al. 2012; Maji et al. 2009; Putker et al. 2013).
NMR spectroscopy is exceptionally well-suited to study IDPs, and in particular to detect transiently folded regions (Meier et al. 2008; Wright and Dyson 2009; Jensen et al. 2009). Several NMR observables provide atomic-resolution, ensemble-averaged information reporting on the conformational energy landscape sampled by each amino acid, including chemical shifts, residual dipolar couplings (RDCs), and paramagnetic relaxation enhancement (PRE) (Dyson and Wright 2004; Eliezer 2009; Marsh et al. 2007; Shortle and Ackerman 2001; Meier et al. 2008; Gobl et al. 2014; Gillespie and Shortle 1997; Clore et al. 2007; Huang et al. 2014; Ozenne et al. 2012; Clore and Iwahara 2009; Otting 2010; Hass and Ubbink 2014; Gobl et al. 2016). RDCs and PREs, either alone or in combination, have been used successfully in recent years to characterize the conformations and long-range interactions of IDPs (Bernado et al. 2005; Ozenne et al. 2012; Dedmon et al. 2005; Bertoncini et al. 2005; Parigi et al. 2014; Rezaei-Ghaleh et al. 2018). However, both techniques rely on a modification of the IDP of interest, either by external alignment media in the case of RDCs or by the covalent incorporation of paramagnetic tags in the case of PREs.
We and others have proposed applications of soluble paramagnetic agents to obtain structural information by NMR without any modifications of the molecules of interest (Gobl et al. 2014; Guttler et al. 2010; Hartlmuller et al. 2016; Hocking et al. 2013; Madl et al. 2009, 2011; Respondek et al. 2007; Zangger et al. 2009; Pintacuda and Otting 2002; Bernini et al. 2009; Wang et al. 2012; Sun et al. 2011; Gong et al. 2017; Gu et al. 2014; Hartlmuller et al. 2017). The addition of soluble paramagnetic compounds leads to a concentration-dependent and therefore tunable increase of relaxation rates, the so-called paramagnetic relaxation enhancement (here denoted as solvent PRE, sPRE; also known as co-solute PRE, Fig. 1a). This effect depends on the distance of the spins of interest (e.g. 1H, 13C) to the biomolecular surface. The nuclei on the surface are affected the strongest by the sPRE effect, and this approach has been shown to correlate well with biomolecular structure in the case of proteins and RNA (Madl et al. 2009; Pintacuda and Otting 2002; Bernini et al. 2009; Hartlmuller et al. 2017). sPREs have gained popularity for structural studies of biomolecules, including in the structure determination of proteins (Madl et al. 2009; Wang et al. 2012), docking of protein complexes (Madl et al. 2011), and qualitative detection of dynamics (Hocking et al. 2013; Sun et al. 2011; Gong et al. 2017; Gu et al. 2014).
Principle and workflow for solvent PRE. a Transient secondary structures of IDPs are characteristic for protein–protein interaction sites and are therefore crucial for various cellular functions. NMR sPRE data provide quantitative and residue specific information on the solvent accessibility as the effect of paramagnetic probes such as Gd(DTPA-BMA) is distance dependent, which can be used to detect secondary structures within otherwise unfolded regions and long-range contacts within a protein. b Prediction of sPRE is based on an ensemble approach of a library of peptides. Each peptide has a length of 5 residues, and is flanked by triple-Ala on both termini (e.g. AAAXXXXXAAA, where XXXXX is a 5-mer fragment of the target primary sequence). Following water refinement using ARIA/CNS, sPRE values of all conformations are calculated and the average solvent PRE value of the ensemble is returned. c Predicted Cα sPRE (blue) and standard deviation (red) of AAAVVAVVAAA ensembles consisting of 99,000 down to 48 structural conformations. The green-dotted line indicates 5% deviation from the ensemble with 99,000 conformations. d Histograms of different ensemble sizes showing the distribution of predicted sPRE values
The most commonly used paramagnetic agent for measuring sPRE data is the inert complex Gd(DTPA-BMA) (gadolinium diethylenetriaminepenta-acetic acid bismethylamide, commercially available as 'Omniscan'), which is known not to interact specifically with protein surfaces (Guttler et al. 2010; Madl et al. 2009, 2011; Pintacuda and Otting 2002; Wang et al. 2012; Respondek et al. 2007; Zangger et al. 2009; Göbl et al. 2010). Previously, we and others have shown that sPRE data provide in-depth structural and dynamic information for IDP analysis (Madl et al. 2009; Sun et al. 2011; Gong et al. 2017; Emmanouilidis et al. 2017; Johansson et al. 2014). For example, sPRE data helped to characterize α-helical propensity in a previously postulated flexible region of the folded 42 kDa maltodextrin binding protein (Madl et al. 2009), and dynamic ligand binding to the human "survival of motor neuron" protein (Emmanouilidis et al. 2017). While this manuscript was being written, and based on sPRE data for exchangeable amide protons, the Tjandra lab showed that sPREs can detect native-like structure in denatured ubiquitin (Kooshapur et al. 2018).
Here, we present an integrative ensemble approach to predict the sPREs of IDPs. This ensemble representation is used to calculate conformationally averaged sPREs, which fit remarkably well to the experimentally measured sPREs. We show for the disordered protein α-synuclein, and for disordered regions of the proteins FOXO4 and p53, that deviation from random coil behavior can indicate an intrinsic propensity to populate transient local structures and long-range interactions. In summary, this method provides a unique modification-free approach for studying IDPs that is compatible with a wide range of NMR pulse sequences and biomolecules.
Protein expression and purification
For expression of human FOXO4TAD (residues 198–505) and p53TAD (residues 1–94), pETM11-His6-ZZ constructs coding for the respective proteins and including an N-terminal TEV protease cleavage site were transformed into E. coli BL21-DE3. To obtain 13C/15N isotope-labeled protein, cells were grown for 1 day at 37 °C in minimal medium (100 mM KH2PO4, 50 mM K2HPO4, 60 mM Na2HPO4, 14 mM K2SO4, 5 mM MgCl2; pH 7.2 adjusted with HCl and NaOH) with a 0.1 dilution of trace element solution (41 mM CaCl2, 22 mM FeSO4, 6 mM MnCl2, 3 mM CoCl2, 2 mM ZnSO4, 0.1 mM CuCl2, 0.2 mM (NH4)6Mo7O24, 24 mM EDTA), supplemented with 2 g of 13C6H12O6 (Cambridge Isotope) and 1 g of 15NH4Cl (Sigma). At an OD (600 nm) of 0.8, cells were induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG) for 16 h at 20 °C. Cell pellets were harvested and sonicated in denaturing buffer containing 50 mM Tris–HCl pH 7.5, 150 mM NaCl, 20 mM imidazole, 2 mM tris(2-carboxyethyl)phosphine (TCEP), 20% glycerol and 6 M urea. His6-ZZ fusion proteins were purified using Ni–NTA agarose (QIAGEN), eluted in 50 mM Tris–HCl pH 7.5, 150 mM NaCl, 200 mM imidazole, 2 mM TCEP, and subjected to TEV protease cleavage. Untagged proteins were then isolated by a second affinity purification step using Ni–NTA beads to remove TEV protease and uncleaved substrate. A final size-exclusion chromatography step was performed in the buffer of interest (Superdex peptide (10/300) for p53 and Superdex 75 (16/600) for FOXO4, GE Healthcare).
α-Synuclein was expressed and purified as described (Falsone et al. 2011). Briefly, a pRSETB vector containing the human α-synuclein gene was transformed into BL21 (DE3) Star cells. 13C/15N-labeled α-synuclein was expressed in minimal medium (6.8 g/l Na2HPO4, 4 g/l KH2PO4, 0.5 g/l NaCl, 1.5 g/l (15NH4)2SO4, 4 g/l 13C glucose, 1 μg/l biotin, 1 μg/l thiamin, 100 μg/ml ampicillin, and 1 ml 1000× microsalts). Cells were grown to an OD (600 nm) of 0.7, and protein expression was induced by addition of 1 mM IPTG for 4 h. After harvesting, cells were resuspended in 20 mM Tris–HCl, 50 mM NaCl, pH 7.4, supplemented with Complete® protease inhibitor mix (Roche, Basel, Switzerland). Protein purification was achieved using a Resource Q column followed by gel filtration on a Superdex 75 column (GE Healthcare, Uppsala, Sweden).
Generation of conformational ensembles
Conformational ensembles were generated using the ARIA/CNS software package, comprising 1500 random backbone conformations for each possible 5-mer peptide of the protein of interest, flanked by triple-alanine. Every backbone conformation served as a starting structure for a full-atom water refinement using ARIA (Bardiaux et al. 2012). For every refined structure the solvent PRE was calculated, and the averaged solvent PRE of the central residue was stored in the database. To predict sPRE data, a previously published grid-based approach was used (Hartlmuller et al. 2016; Pintacuda and Otting 2002). Briefly, the structural model was placed in a regularly spaced grid representing the uniformly distributed paramagnetic compound; the grid was built with a point-to-point distance of 0.5 Å and a minimum distance of 10 Å between the protein model and the outer border of the grid. Next, grid points that overlap with the protein model were removed, assuming a molecular radius of 3.5 Å for the paramagnetic compound. To compute the sPRE for a given protein proton \({\text{sPRE}}_{\text{predicted}}^{i}\), the distance-dependent paramagnetic effect (Hartlmuller et al. 2016; Hocking et al. 2013; Pintacuda and Otting 2002) was numerically integrated over all remaining grid points according to Eq. (1):
$$\text{sPRE}_{\text{predicted}}^{i} = c \cdot \sum_{d_{i,j} < 10\,\text{\AA}} \frac{1}{d_{i,j}^{6}}$$
where i is the index of a proton of the protein, j is the index of a grid point, d_{i,j} is the distance between the ith proton and the jth grid point, and c is an arbitrary constant used to scale the sPRE values (set to 1000). Theoretical sPRE values were normalized by a linear fit of experimental against predicted sPREs, followed by shifting and scaling of the theoretical values. To predict the solvent PRE of the entire IDP sequence, the library peptide whose five central residues match the local sequence is looked up and the corresponding solvent PRE values are combined. sPRE data of the two N- and C-terminal residues were not predicted in this setup. All scripts and sample runs can be downloaded from the homepage of the authors (https://mbbc.medunigraz.at/forschung/forschungseinheiten-und-gruppen/forschungsgruppe-tobias-madl/software/).
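For illustration, this grid integration can be sketched in a few lines of Python (a minimal sketch with toy coordinates and names of our own choosing, not the released scripts; a real run would use the refined peptide coordinates):

import numpy as np
from scipy.spatial import cKDTree

def spre_grid(proton, atoms, c=1000.0, spacing=0.5, probe=3.5, pad=10.0, cutoff=10.0):
    # build a regular grid extending pad (10 Å) beyond the protein model
    lo, hi = atoms.min(axis=0) - pad, atoms.max(axis=0) + pad
    axes = [np.arange(a, b, spacing) for a, b in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # remove grid points overlapping the protein (probe radius 3.5 Å)
    dist_to_protein, _ = cKDTree(atoms).query(grid)
    grid = grid[dist_to_protein > probe]
    # Eq. (1): numerically integrate 1/d^6 over the remaining grid points
    d = np.linalg.norm(grid - proton, axis=1)
    return c * np.sum(1.0 / d[d < cutoff] ** 6)

rng = np.random.default_rng(0)
atoms = rng.normal(scale=4.0, size=(60, 3))  # toy stand-in for protein coordinates (Å)
print(spre_grid(atoms[0], atoms))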
NMR experiments
The setup of sPRE measurements using NMR spectroscopy was performed as described previously (Hartlmuller et al. 2016, 2017). To obtain sPRE data, a saturation-based approach was used. The 1H-R1 relaxation rates were determined by a saturation-recovery scheme followed by a read-out experiment such as a 1H, 15N HSQC, 1H, 13C HSQC or a 3D CBCA(CO)NH experiment. The read-out experiments were combined with the saturation-recovery scheme in a pseudo-3D (HSQCs) or pseudo-4D [CBCA(CO)NH] experiment, with the recovery time as an additional dimension. The CBCA(CO)NH was recorded using non-uniform sampling. Alternatively, 1H-R2 relaxation rates can be measured as described (Clore and Iwahara 2009).
A 7.5 ms 1H trim pulse followed by a gradient was applied for proton saturation. During the recovery delay, ranging from several milliseconds up to several seconds, z-magnetization builds up. The individual recovery delays are applied in an interleaved manner, with short and long delays alternating. For every 1H-R1 measurement, ten delay times were recorded and, for error estimation, one delay time was recorded in duplicate.
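Such an interleaved schedule is simple to generate; as a sketch (illustrative Python, delay values hypothetical):

def interleave(delays):
    # alternate the shortest and longest remaining recovery delays
    d = sorted(delays)
    order, lo, hi = [], 0, len(d) - 1
    while lo <= hi:
        order.append(d[lo]); lo += 1
        if lo <= hi:
            order.append(d[hi]); hi -= 1
    return order

# ten hypothetical recovery delays (s) plus one duplicate for error estimation
print(interleave([0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5, 2.5, 4.0, 6.0, 0.2]))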
Measurements of 1H-R1 rates were repeated for increasing concentrations of the relaxation-enhancing agent Gd(DTPA-BMA)/Omniscan (GE Healthcare, Vienna, Austria), and the solvent PRE was obtained as the average change of the proton R1 rate per concentration of the paramagnetic agent. After each addition of Gd(DTPA-BMA), the recovery delays were shortened such that for the longest delay all NMR signals were sufficiently recovered. The interscan delay was set to 50 ms, as the saturation-recovery scheme does not rely on an equilibrium z-magnetization at the start of each scan. All NMR samples contained 10% 2H2O. Spectra were processed using NMRPipe and analyzed with the NMRView and CcpNmr Analysis software packages (Johnson 2004; Delaglio et al. 1995; Skinner et al. 2016).
Measurement of sPRE data used in this study
Assignment of p53TAD was achieved using HNCACB, CBCA(CO)NH and HCAN spectra analyzed with CcpNmr Analysis (Skinner et al. 2016). sPRE data of 300 µM samples of uniformly 13C/15N-labeled p53TAD were measured on a 600 MHz Bruker Avance Neo NMR spectrometer equipped with a TXI probehead at 298 K in a buffer containing 50 mM sodium phosphate, 0.04% sodium azide, pH 7.5. 1H-R1 rates of 1HN, Hα and Hβ were determined using 1H, 13C HSQC and 1H, 15N HSQC read-out spectra (4/4 scans, 200/128 complex points in F2). For assignment of α-synuclein, previously reported chemical shifts were obtained from the BMRB (accession code 6968) and the assignment was confirmed using HNCACB and CBCA(CO)NH spectra. 1H-R1 rates of aliphatic and amide protons of a 100 µM sample (50 mM bis(2-hydroxyethyl)amino-tris(hydroxymethyl)methane (bis–Tris), 20 mM NaCl, 3 mM sodium azide, pH 6.8) were determined using 1H, 13C HSQC and 1H, 15N HSQC read-out spectra, respectively, at 282 K in the presence of 0, 1, 2, 3, 4 and 5 mM Gd(DTPA-BMA). For assignment of FOXO4TAD, HNCACB, CBCA(CO)NH and HCAN spectra were recorded and assigned using CcpNmr Analysis (Skinner et al. 2016). Measurements of 13C, 15N-labeled FOXO4TAD at 390 µM in 20 mM sodium phosphate buffer, pH 6.8, 50 mM NaCl, 1 mM DTT were performed on a 600 MHz magnet (Oxford Instruments) equipped with an AV III console and a cryogenic TCI probe head (Bruker Biospin). Pseudo-4D CBCA(CO)NH spectra served as read-out for 1H-R1 rates and were recorded on a 250 µM sample on a 900 MHz Bruker Avance III spectrometer equipped with a TCI cryoprobe using non-uniform sampling (4 scans, 168/104 complex points in F1 (13C)/F2 (15N), sampled at 13.7%, resulting in a total of 600 complex points). Spectra were processed using hmsIST/NMRPipe (Hyberts et al. 2014).
Analysis of NMR data
sPRE data of the model proteins were analyzed as described previously. Briefly, peak intensities were extracted using the nmrglue python package and fitted to a mono-exponential build-up curve using the SciPy python package and Eq. (2):
$$I(t) = -A \cdot e^{-R_{1} t} + C$$
where I(t) is the peak intensity of the saturation-recovery experiment, t is the recovery delay, A is the amplitude of the z-magnetization build-up, C is the plateau of the curve and R1 is the longitudinal relaxation rate.
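For illustration, the fit of Eq. (2) can be sketched in a few lines of Python, with synthetic delays and intensities standing in for nmrglue-extracted peak heights (all numbers hypothetical):

import numpy as np
from scipy.optimize import curve_fit

def recovery(t, A, R1, C):
    # Eq. (2): mono-exponential build-up of z-magnetization after saturation
    return -A * np.exp(-R1 * t) + C

t = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8, 1.5, 2.5, 4.0, 6.0])  # delays (s)
rng = np.random.default_rng(1)
I = recovery(t, 95.0, 1.8, 100.0) + rng.normal(0.0, 1.0, t.size)    # synthetic peaks

(A, R1, C), pcov = curve_fit(recovery, t, I, p0=(np.ptp(I), 1.0, I.max()))
print(f"fitted R1 = {R1:.2f} s^-1")

From the duplicate recovery delays, the experimental error was estimated as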
$$\varepsilon_{\exp} = \sqrt{\frac{1}{2N} \cdot \sum_{i=1}^{N} \delta_{i}^{2}}$$
where N is the number of peaks in the spectrum, i is the index of the peak, and δi is the difference of the duplicates for the ith peak. The error of the rates R1 was then obtained using a Monte Carlo-type resampling strategy. The solvent PRE is obtained by performing a weighted linear regression using the equation
$$R_{1}(c) = \text{sPRE} \cdot c + R_{1}^{0}$$
where c is the concentration of Gd(DTPA-BMA), R1(c) is the fitted R1 rate in the presence of Gd(DTPA-BMA) at concentration c, R1^0 is the R1 rate in the absence of Gd(DTPA-BMA), and the slope sPRE is the desired sPRE value. For the weighted linear regression, the previously determined errors ΔR1 of the R1 rates were used, and the error on the concentration c was neglected.
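The duplicate-based error propagation and the weighted regression can be sketched as follows (illustrative Python reusing recovery() from the sketch above; all concentrations, rates and uncertainties are hypothetical):

import numpy as np
from scipy.optimize import curve_fit

def r1_mc_error(t, I, eps, n=500, seed=2):
    # Monte Carlo-type resampling: perturb intensities by eps_exp and refit R1
    rng = np.random.default_rng(seed)
    rates = [curve_fit(recovery, t, I + rng.normal(0.0, eps, I.size),
                       p0=(np.ptp(I), 1.0, I.max()))[0][1] for _ in range(n)]
    return np.std(rates)

# weighted linear regression of R1 against Gd(DTPA-BMA) concentration
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # mM, hypothetical
R1 = np.array([1.76, 2.41, 3.05, 3.71, 4.33, 5.02])    # s^-1, hypothetical
dR1 = np.array([0.05, 0.06, 0.05, 0.07, 0.06, 0.08])   # from resampling

line = lambda x, spre, r1_0: spre * x + r1_0
(spre, r1_0), pcov = curve_fit(line, conc, R1, sigma=dR1, absolute_sigma=True)
print(f"sPRE = {spre:.3f} +/- {np.sqrt(pcov[0, 0]):.3f} s^-1 mM^-1")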
To detect transient structural elements in IDPs, an efficient back-calculation of sPREs is essential. Whereas back-calculation of sPREs is relatively straightforward for folded, rigid structures and can be carried out efficiently using a grid-based approach by integration of the solvent environment (Hartlmuller et al. 2016, 2017), this approach fails in the case of highly conformationally heterogeneous IDPs. In our approach, sPREs are therefore represented as the average sPRE of an ensemble. NMR observables and nuclear spin relaxation phenomena, including sPREs, directly sense chemical exchange through the distinct magnetic environments that nuclear spins experience while undergoing exchange processes. The effects of dynamic exchange on the NMR signals can be described by the McConnell equations (McConnell 1958). In the case of a two-site exchange process, and assuming that the exchange rate is faster than the difference in the sPREs observed in the two states, the observed sPRE is a linear, population-weighted average of the sPREs of both states, as seen for covalent paramagnetic labels (Clore and Iwahara 2009). Moreover, the correlation time for relaxation is assumed to be faster than the exchange time among different conformations within the IDP (Jensen et al. 2014; Iwahara and Clore 2010). The effective correlation time for longitudinal relaxation depends on the rotational correlation time of the biomolecule, the electron relaxation time and the lifetime of the rotationally correlated complex of the biomolecule and the paramagnetic agent (Madl et al. 2009; Eletsky et al. 2003). For ubiquitin, the effective correlation time for longitudinal relaxation was found to be on the sub-ns time scale (Pintacuda and Otting 2002), whereas conformational exchange in IDPs typically occurs on slower timescales (Jensen et al. 2014).
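Written out for the two-site case, this fast-exchange average takes the simple form (our notation)

$$\text{sPRE}_{\text{obs}} = p_{A}\,\text{sPRE}_{A} + p_{B}\,\text{sPRE}_{B}, \qquad p_{A} + p_{B} = 1$$

where p_A and p_B are the populations of the two exchanging states; the generalization to an ensemble of conformers is the population-weighted mean used throughout this work.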
Calculating the average of sPREs over an ensemble of protein conformations presents serious practical difficulties that affect both the accuracy and the portability of the calculation. For RDCs it has been shown that convergence to the average requires an unmanageably large number of structures (e.g. 100,000 models for a protein with 100 amino acids), and that the convergence strictly depends on the length of the protein (Bernado et al. 2005; Nodet et al. 2009). To simplify the back-calculation of sPREs we use a strategy proposed for RDCs by the Forman-Kay and Blackledge groups (Marsh et al. 2008; Huang et al. 2013).
To back-calculate the sPRE from a given primary sequence of an IDP, we generated all fragments of five amino acids of the sequence of interest and flanked them with triple-alanine sequences at the N- and C-termini to simulate the presence of upstream/downstream amino acids (Fig. 1b). An ensemble of structures for these sequences is then generated using ARIA/CNS including water refinement (Bardiaux et al. 2012). To predict the solvent PRE of the entire IDP, the peptide with the five matching residues is looked up and the corresponding solvent PREs, averaged over the entire conformational ensemble, are returned. This approach is highly parallelizable and dramatically reduces the computational effort compared to simulating the conformations of the full-length IDP.
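The sequence bookkeeping of this fragment approach is straightforward; a minimal sketch (our own illustrative Python, where lookup stands for the precomputed database of ensemble-averaged sPREs) is:

def fragment_library(seq, k=5, flank="AAA"):
    # all flanked 5-mers of the target sequence (cf. Fig. 1b)
    return [flank + seq[i:i + k] + flank for i in range(len(seq) - k + 1)]

def predict_profile(seq, lookup, k=5, flank="AAA"):
    # residue j takes the ensemble-averaged sPRE stored for the fragment
    # in which j is the central residue; the two residues at either
    # terminus therefore remain unpredicted
    profile = {}
    for i in range(len(seq) - k + 1):
        profile[i + k // 2] = lookup[flank + seq[i:i + k] + flank]
    return profile

print(fragment_library("MDVFMKGLSK")[:2])  # ['AAAMDVFMAAA', 'AAADVFMKAAA']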
To determine the number of conformers necessary for convergence of the back-calculated sPRE of the defined 11-mers, we generated an ensemble of 100,000 structures for an 11-mer AAAVVAVVAAA peptide using ARIA/CNS (Bardiaux et al. 2012) and back-calculated the sPRE for subsets with decreasing numbers of structures. We find that 1500 conformers are sufficient to reproduce the sPRE with a deviation of less than 5% from the full ensemble (Fig. 1c, d).
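This convergence behavior can be emulated with toy data (illustrative Python; the log-normal stand-in for the per-conformer sPRE distribution is our assumption, not the actual ensemble):

import numpy as np

rng = np.random.default_rng(0)
spre = rng.lognormal(0.0, 0.4, size=99_000)  # toy per-conformer sPREs
full = spre.mean()

for n in (48, 150, 500, 1500, 5000, 20_000, 99_000):
    sub = rng.choice(spre, size=n, replace=False).mean()
    print(f"{n:>6} conformers: deviation {abs(sub - full) / full:.1%}")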
Back-calculation of the sPRE by fast grid-based integration has an important advantage over alternative approaches relying on surface accessibility (Kooshapur et al. 2018): sPREs can be obtained even for atoms without any surface accessibility, since grid-based integration still takes the distance-dependent paramagnetic effect into account. This is expected to provide more accurate predictions for regions with a high density of bulky side chains or transient folding.
To validate our computational approach, we recorded several sets of experimental 1H-sPREs for the disordered regions of the human proteins FOXO4 and p53, and for the intrinsically disordered protein α-synuclein. Similar to many other transcription factors, p53 and FOXO4 are largely disordered outside their DNA-binding domains.
In order to demonstrate that surface accessibility data can be obtained for a challenging IDP, we recorded sPRE data for the 307-residue transactivation domain of FOXO4. The FOXO4 transcription factor is a member of the forkhead box O family of proteins, which share a highly conserved DNA-binding motif, the forkhead box domain (FH). The FH domain is surrounded by large N- and C-terminal intrinsically disordered regions which are essential for the regulation of FOXO function (Weigel and Jackle 1990). FOXOs control a plethora of cellular functions, such as cell growth, survival, metabolism and oxidative stress response, by regulating the expression of hundreds of target genes (Burgering and Medema 2003; Hornsveld et al. 2018). Expression and activity of FOXOs are tightly controlled by PTMs such as phosphorylation, acetylation, methylation and ubiquitination, and these modifications impact FOXO stability, sub-cellular localization and transcriptional activity (Essers et al. 2004; de Keizer et al. 2010; van den Berg et al. 2013). Because of their anti-proliferative and pro-apoptotic functions, FOXOs have been considered bona fide tumor suppressors. However, FOXOs can also support tumor development and progression by maintaining cellular homeostasis, facilitating metastasis and inducing therapeutic resistance (Hornsveld et al. 2018). Thus, targeting FOXO activity might hold promise in cancer therapy.
The C-terminal FOXO4 transactivation domain has been suggested to be largely disordered and to be the binding site for many cofactors. Because it also harbors most of the post-translational modification sites (Putker et al. 2013; Burgering and Medema 2003; Hornsveld et al. 2018; Bourgeois and Madl 2018), we set out to study this biologically important domain using our sPRE approach. 1H, 15N and 1H, 13C HSQC NMR spectra of FOXO4TAD are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of Gd(DTPA-BMA) (Fig. 2a). sPRE data of FOXO4 had to be recorded as pseudo-4D saturation-recovery CBCA(CO)NH spectra due to the severe signal overlap observed in the 2D HSQC spectra. It should be noted that, in principle, any NMR experiment can be combined with an sPRE saturation-recovery block in order to obtain 1H or 13C sPRE data. The sPRE data of FOXO4TAD yield differential solvent accessibilities in a residue-specific manner (Fig. 2b, c): Hα atoms located in regions rich in bulky residues show lower sPREs, and Hα atoms located in more exposed glycine-rich regions display higher sPREs. Hβ sPRE data were obtained for a limited number of residues and show overall elevated sPREs, due to the higher degree of exposure, and a reasonable agreement between predicted and experimental data (Supporting Fig. 1). A comparison of the predicted sPRE data with a bioinformatics bulkiness prediction shows that some features are reproduced by the bioinformatics prediction (Supporting Fig. 2A); however, the experimental sPRE is better described by our approach. Strikingly, the predicted sPRE pattern reproduces the experimental sPRE pattern exceptionally well, indicating that FOXO4TAD is largely disordered and does not adopt any stable or transient tertiary structure in the regions for which sPRE data could be obtained.
Comparison of predicted and measured solvent PRE of FOXO4TAD. a Overlay of 1H, 13C HSQC spectra, with full recovery time, of a 390 µM 13C, 15N labeled FOXO4TAD sample in the absence (blue) and presence of 3.25 mM Gd(DTPA-BMA) (orange). b 1H-R1 rates of two selected residues of FOXO4TAD at different Gd(DTPA-BMA) concentrations. c Predicted (red) and experimentally determined (blue) solvent PRE values of assigned Hα peaks of FOXO4TAD, using CBCA(CO)NH as read-out spectrum. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2A for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars
In order to demonstrate that surface accessibility data can be obtained for an IDP with potential formation of transient local secondary structure, we recorded sPRE data for the 94-residue transactivation domain of p53. p53 is a homo-tetrameric transcription factor composed of an N-terminal transactivation domain, a proline-rich domain, a central DNA-binding domain followed by a tetramerization domain, and a C-terminal negative regulatory domain. p53 is involved in the regulation of more than 500 target genes and thereby controls a broad range of cellular processes, including apoptosis, metabolic adaptation, DNA repair, cell cycle arrest, and senescence (Vousden and Prives 2009). The disordered N-terminal p53 transactivation domain (p53TAD) is a key motif for regulatory protein–protein interactions (Fernandez-Fernandez and Sot 2011): it possesses two binding motifs with α-helical propensity, named p53TAD1 (residues 17–29) and p53TAD2 (residues 40–57). These two motifs act independently or in combination to allow p53 to bind to several proteins regulating either p53 stability or transcriptional activity (Shan et al. 2012; Jenkins et al. 2009; Rowell et al. 2012). Because of its pro-apoptotic function, p53 is recognized as a tumor suppressor and is found mutated in more than half of all human cancers, affecting a wide variety of tissues (Olivier et al. 2010). Within this biological and disease context, the N-terminal p53TAD plays a key role: it mediates the interaction with folded co-factors and comprises most of the regulatory phosphorylation sites.
1H, 15N and 1H, 13C HSQC NMR spectra recorded of p53TAD are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of Gd(DTPA-BMA) (Fig. 3a, Supporting Fig. 3A). The sPRE data of p53TAD display differential solvent accessibilities in a residue-specific manner: due to the different excluded volume for the paramagnetic agent, Hα atoms located in regions rich in bulky residues show lower sPREs and Hα atoms located in more exposed regions show higher sPREs (Fig. 3b, c, Supporting Fig. 2B).
Comparison of predicted and measured solvent PRE of p53TAD. a Overlay of 1H, 13C HSQC read-out spectra, with full recovery time, of a 300 µM 13C, 15N labeled p53TAD in the absence (black) and presence of 5 mM Gd(DTPA-BMA) (orange). b Gd(DTPA-BMA)-concentration-dependent R1 rates of two selected residues. c Diagram showing predicted (red) and measured (blue) solvent PRE values of each Hα atom of p53TAD. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Regions binding to co-factors (TAD1, TAD2) and the proline-rich region are labeled. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2B for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars
sPRE data of structured proteins are often recorded for amide protons. However, chemical exchange of the amide proton with fast-relaxing water solvent protons might lead to an increase of the experimental sPRE, as has been observed for the disordered linker regions in folded proteins and in RNA (Hartlmuller et al. 2017; Gobl et al. 2017). For imino and amino protons of the UUCG tetraloop RNA and a GTP class II aptamer, for example, the increase of 1H-R1 rates is larger at small concentrations of the paramagnetic compound, and becomes linear at higher concentrations. Thus, we decided to focus here on experimental and back-calculated sPRE data of Hα protons. Nevertheless, 1HN-sPREs are shown for comparison in the supporting information (Supporting Fig. 4A).
Comparison of the back-calculated and experimental p53TAD sPREs shows that several regions within p53TAD yield lower sPREs than predicted, indicating that p53TAD populates residual local structure or forms long-range tertiary interactions. In line with this, 15N NMR relaxation data and 13C secondary chemical shift data indicate reduced flexibility of p53TAD and transient α-helical structure (Supporting Fig. 5). This agrees with previous studies, which found that the p53TAD1 region adopts a transiently populated α-helical structure formed by residues Phe19-Leu26 and that the p53TAD2 region adopts transiently populated turn-like structures formed by residues Met40-Met44 and Asp48-Trp53 (Lee et al. 2000). Given that p53TAD has been reported to interact with several co-factors, our data indicate that sPRE data can indeed provide important insight into the residual structure of this key interaction motif (Bourgeois and Madl 2018; Raj and Attardi 2017).
In order to address the question of whether sPREs can be used to detect transient long-range interactions in disordered proteins, we recorded 1H sPRE data for the 141-residue IDP α-synuclein using 1H, 13C and 1H, 15N HSQC-based saturation-recovery experiments at increasing concentrations of Gd(DTPA-BMA). α-Synuclein controls the assembly of presynaptic vesicles in neurons and is required for the release of the neurotransmitter dopamine (Burre et al. 2010). The aggregation of α-synuclein into intracellular amyloid inclusions coincides with the death of dopaminergic neurons and therefore constitutes a pathologic signature of synucleinopathies such as Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy (Alafuzoff and Hartikainen 2017). Formation of transient long-range interactions has been proposed to protect α-synuclein from aggregation.
1H, 15N and 1H, 13C HSQC NMR spectra of α-synuclein are of high quality and showed no detectable 1H, 13C, or 15N chemical shift changes between the spectra recorded in the absence or presence of 5 mM Gd(DTPA-BMA) (Fig. 4a). The sPRE data of α-synuclein display variable solvent accessibilities in a residue-specific manner (Fig. 4b), with Hα atoms located in regions rich in bulky residues showing lower sPREs and Hα atoms located in more exposed regions showing higher sPREs (see also Supporting Fig. 2C for a comparison with the bioinformatics bulkiness profile and Supporting Fig. 4B for the 1HN sPRE data). Thus, the sPRE value provides local structural information about the disordered ensemble. Strikingly, we observed decreased sPREs, and therefore lower surface accessibility, in several regions, namely residues 15–20, 26–30, 52–57, 74–79, 87–92, 102–110, and 112–121 (Fig. 4c). Comparison of these regions with recently published ensemble modeling based on extensive sets of RDC and PRE data (Salmon et al. 2010) shows that the previously observed transient intra-molecular long-range contacts, involving mainly regions 1–40, 70–90, and 120–140 of α-synuclein, are reproduced by the sPRE data. Thus, sPRE data are highly sensitive to low populations of residual structure in disordered proteins.
Comparison of predicted and measured solvent PRE of α-synuclein. a Overlay of 1H, 13C HSQC read-out spectra, with full recovery time, of 100 µM 13C, 15N labeled α-synuclein in the absence (violet) and presence of 5 mM Gd(DTPA-BMA) (orange). b Linear fit of the relaxation rate 1H-R1 versus Gd(DTPA-BMA) concentration for two selected residues of α-synuclein. c Predicted (red) and experimentally determined (blue) sPRE values from 1H, 13C HSQC read-out spectra. Regions of strong variation between predicted and measured sPRE values are highlighted by grey boxes. Experimental sPRE values are calculated by fitting the data with a linear regression equation. Predicted sPRE values are based on the previously described ensemble approach. Residues with bulky side chains (Phe, Trp, Tyr) are labeled with #, and exposed glycine residues are labeled with * (see Supporting Fig. 2C for a bulkiness profile). Errors of the measured 1H-R1 rates were calculated using a Monte Carlo-type resampling strategy and are shown in the diagram as error bars
In order to understand the conformational behavior of IDPs and their biological interaction networks, the detection of residual structure and long-range interactions is required, and the large number of conformational degrees of freedom of IDPs requires extensive sets of experimental data. Here, we provide a straightforward approach for the detection of residual structure and long-range interactions in IDPs and show that sPRE data contribute important and easily accessible restraints for the investigation of IDPs. Our data indicate that, for the general case of an unfolded chain whose local flexibility is described by the overwhelming majority of available conformations, sPREs can be accurately predicted by our approach. It can be envisaged that a database of all potential combinations of the 20 amino acids within the central 5-mer peptide will be generated in the future; at present, however, the generation of sPRE datasets for all 3.2 million (20^5) possible combinations is beyond the available computing capabilities.
Our approach promises to be a straightforward screening tool to exclude potential specific interactions of the soluble paramagnetic agent with IDPs and to guide the positioning of covalent paramagnetic spin labels, which are often used to detect long-range interactions within IDPs (Gobl et al. 2014; Clore and Iwahara 2009; Otting 2010; Jensen et al. 2014). Paramagnetic spin labels are preferably placed close to, but not within, regions involved in transient interactions in order to avoid potential interference of the spin label with weak and dynamic interactions.
In summary, we used three highly disease-relevant biological model systems to determine the solvent accessibility information provided by sPREs. This information can be easily determined experimentally and agrees well with the sPREs predicted for non-exchangeable protons using our grid-based approach. Our method proves to be highly sensitive to low populations of residual structure and long-range contacts in disordered proteins. The approach can easily be combined with ensemble-based calculations such as those implemented in flexible-meccano/ASTEROIDS (Mukrasch et al. 2007; Nodet et al. 2009), Xplor-NIH (Kooshapur et al. 2018), or other programs (Estana et al. 2019) to interpret the residual structure of IDPs quantitatively and in combination with complementary restraints obtained from RDCs and PREs. In particular, for IDP ensemble calculations relying on sPRE data it is essential to exclude specific interactions of the paramagnetic agent with the IDP of interest, which would lead to an enhanced experimental sPRE compared to the predicted sPRE.
Open access funding provided by Austrian Science Fund (FWF). This research was supported by the Austrian Science Foundation (P28854, I3792, DK-MCD W1226 to TM), the President's International Fellowship Initiative of CAS (No. 2015VBB045, to TM), the National Natural Science Foundation of China (No. 31450110423, to TM), the Austrian Research Promotion Agency (FFG: 864690, 870454), the Integrative Metabolism Research Center Graz, the Austrian infrastructure program 2016/2017, the Styrian government (Zukunftsfonds) and BioTechMed/Graz. E.S. was trained within the frame of the PhD program Molecular Medicine. We thank Dr. Vanessa Morris for carefully reading the manuscript.
Alafuzoff I, Hartikainen P (2017) Alpha-synucleinopathies. Handb Clin Neurol 145:339–353
Babu MM, van der Lee R, de Groot NS, Gsponer J (2011) Intrinsically disordered proteins: regulation and disease. Curr Opin Struct Biol 21:432–440
Bah A, Forman-Kay JD (2016) Modulation of intrinsically disordered protein function by post-translational modifications. J Biol Chem 291:6696–6705
Bardiaux B, Malliavin T, Nilges M (2012) ARIA for solution and solid-state NMR. Methods Mol Biol 831:453–483
Bernado P, Bertoncini CW, Griesinger C, Zweckstetter M, Blackledge M (2005) Defining long-range order and local disorder in native alpha-synuclein using residual dipolar couplings. J Am Chem Soc 127:17968–17969
Bernini A, Venditti V, Spiga O, Niccolai N (2009) Probing protein surface accessibility with solvent and paramagnetic molecules. Prog Nucl Magn Reson Spectrosc 54:278–289
Bertoncini CW et al (2005) Release of long-range tertiary interactions potentiates aggregation of natively unstructured alpha-synuclein. Proc Natl Acad Sci USA 102:1430–1435
Bourgeois B, Madl T (2018) Regulation of cellular senescence via the FOXO4-p53 axis. FEBS Lett 592:2083–2097
Brady JP et al (2017) Structural and hydrodynamic properties of an intrinsically disordered region of a germ cell-specific protein on phase separation. Proc Natl Acad Sci USA 114:E8194–E8203
Burgering BM, Medema RH (2003) Decisions on life and death: FOXO Forkhead transcription factors are in command when PKB/Akt is off duty. J Leukoc Biol 73:689–701
Burre J et al (2010) Alpha-synuclein promotes SNARE-complex assembly in vivo and in vitro. Science 329:1663–1667
Choy MS, Page R, Peti W (2012) Regulation of protein phosphatase 1 by intrinsically disordered proteins. Biochem Soc Trans 40:969–974
Clore GM, Iwahara J (2009) Theory, practice, and applications of paramagnetic relaxation enhancement for the characterization of transient low-population states of biological macromolecules and their complexes. Chem Rev 109:4108–4139
Clore GM, Tang C, Iwahara J (2007) Elucidating transient macromolecular interactions using paramagnetic relaxation enhancement. Curr Opin Struct Biol 17:603–616
de Keizer PL et al (2010) Activation of forkhead box O transcription factors by oncogenic BRAF promotes p21cip1-dependent senescence. Cancer Res 70:8526–8536
Dedmon MM, Lindorff-Larsen K, Christodoulou J, Vendruscolo M, Dobson CM (2005) Mapping long-range interactions in alpha-synuclein using spin-label NMR and ensemble molecular dynamics simulations. J Am Chem Soc 127:476–477
Delaglio F et al (1995) NMRPipe: a multidimensional spectral processing system based on UNIX pipes. J Biomol NMR 6:277–293
Dyson HJ, Wright PE (2004) Unfolded proteins and protein folding studied by NMR. Chem Rev 104:3607–3622
Dyson HJ, Wright PE (2005) Intrinsically unstructured proteins and their functions. Nat Rev Mol Cell Biol 6:197–208
Eletsky A, Moreira O, Kovacs H, Pervushin K (2003) A novel strategy for the assignment of side-chain resonances in completely deuterated large proteins using 13C spectroscopy. J Biomol NMR 26:167–179
Eliezer D (2009) Biophysical characterization of intrinsically disordered proteins. Curr Opin Struct Biol 19:23–30
Emmanouilidis L et al (2017) Allosteric modulation of peroxisomal membrane protein recognition by farnesylation of the peroxisomal import receptor PEX19. Nat Commun 8:14635
Essers MA et al (2004) FOXO transcription factor activation by oxidative stress mediated by the small GTPase Ral and JNK. EMBO J 23:4802–4812
Estana A et al (2019) Realistic ensemble models of intrinsically disordered proteins using a structure-encoding coil database. Structure 27:381–391.e2
Falsone SF et al (2011) The neurotransmitter serotonin interrupts alpha-synuclein amyloid maturation. Biochim Biophys Acta 1814:553–561
Fernandez-Fernandez MR, Sot B (2011) The relevance of protein-protein interactions for p53 function: the CPE contribution. Protein Eng Des Sel 24:41–51
Flock T, Weatheritt RJ, Latysheva NS, Babu MM (2014) Controlling entropy to tune the functions of intrinsically disordered regions. Curr Opin Struct Biol 26:62–72
Gillespie JR, Shortle D (1997) Characterization of long-range structure in the denatured state of staphylococcal nuclease. I. Paramagnetic relaxation enhancement by nitroxide spin labels. J Mol Biol 268:158–169
Gobl C, Madl T, Simon B, Sattler M (2014) NMR approaches for structural analysis of multidomain proteins and complexes in solution. Prog Nucl Magn Reson Spectrosc 80:26–63
Gobl C et al (2016) Increasing the chemical-shift dispersion of unstructured proteins with a covalent lanthanide shift reagent. Angew Chem Int Ed Engl 55:14847–14851
Gobl C et al (2017) Flexible IgE epitope-containing domains of Phl p 5 cause high allergenic activity. J Allergy Clin Immunol 140:1187–1191
Göbl C, Kosol S, Stockner T, Rückert HM, Zangger K (2010) Solution structure and membrane binding of the toxin fst of the par addiction module. Biochemistry 49:6567–6575
Gong Z, Gu XH, Guo DC, Wang J, Tang C (2017) Protein structural ensembles visualized by solvent paramagnetic relaxation enhancement. Angew Chem Int Ed Engl 56:1002–1006
Gu XH, Gong Z, Guo DC, Zhang WP, Tang C (2014) A decadentate Gd(III)-coordinating paramagnetic cosolvent for protein relaxation enhancement measurement. J Biomol NMR 58:149–154
Guttler T et al (2010) NES consensus redefined by structures of PKI-type and Rev-type nuclear export signals bound to CRM1. Nat Struct Mol Biol 17:1367–1376
Habchi J, Tompa P, Longhi S, Uversky VN (2014) Introducing protein intrinsic disorder. Chem Rev 114:6561–6588
Hartlmuller C, Gobl C, Madl T (2016) Prediction of protein structure using surface accessibility data. Angew Chem Int Ed Engl 55:11970–11974
Hartlmuller C et al (2017) RNA structure refinement using NMR solvent accessibility data. Sci Rep 7:5393
Hass MA, Ubbink M (2014) Structure determination of protein-protein complexes with long-range anisotropic paramagnetic NMR restraints. Curr Opin Struct Biol 24:45–53
Hocking HG, Zangger K, Madl T (2013) Studying the structure and dynamics of biomolecules by using soluble paramagnetic probes. ChemPhysChem 14:3082–3094
Hornsveld M, Dansen TB, Derksen PW, Burgering BMT (2018) Re-evaluating the role of FOXOs in cancer. Semin Cancer Biol 50:90–100
Huang JR, Ozenne V, Jensen MR, Blackledge M (2013) Direct prediction of NMR residual dipolar couplings from the primary sequence of unfolded proteins. Angew Chem Int Ed Engl 52:687–690
Huang JR et al (2014) Transient electrostatic interactions dominate the conformational equilibrium sampled by multidomain splicing factor U2AF65: a combined NMR and SAXS study. J Am Chem Soc 136:7068–7076
Hyberts SG, Arthanari H, Robson SA, Wagner G (2014) Perspectives in magnetic resonance: NMR in the post-FFT era. J Magn Reson 241:60–73
Iwahara J, Clore GM (2010) Structure-independent analysis of the breadth of the positional distribution of disordered groups in macromolecules from order parameters for long, variable-length vectors using NMR paramagnetic relaxation enhancement. J Am Chem Soc 132:13346–13356
Jenkins LM et al (2009) Two distinct motifs within the p53 transactivation domain bind to the Taz2 domain of p300 and are differentially affected by phosphorylation. Biochemistry 48:1244–1255
Jensen MR et al (2009) Quantitative determination of the conformational properties of partially folded and intrinsically disordered proteins using NMR dipolar couplings. Structure 17:1169–1185
Jensen MR, Zweckstetter M, Huang JR, Blackledge M (2014) Exploring free-energy landscapes of intrinsically disordered proteins at atomic resolution using NMR spectroscopy. Chem Rev 114:6632–6660
Johansson H et al (2014) Specific and nonspecific interactions in ultraweak protein-protein associations revealed by solvent paramagnetic relaxation enhancements. J Am Chem Soc 136:10277–10286
Johnson BA (2004) Using NMRView to visualize and analyze the NMR spectra of macromolecules. Methods Mol Biol 278:313–352
Kooshapur H, Schwieters CD, Tjandra N (2018) Conformational ensemble of disordered proteins probed by solvent paramagnetic relaxation enhancement (sPRE). Angew Chem Int Ed Engl 57:13519–13522
Lee H et al (2000) Local structural elements in the mostly unstructured transcriptional activation domain of human p53. J Biol Chem 275:29426–29432
Madl T, Bermel W, Zangger K (2009) Use of relaxation enhancements in a paramagnetic environment for the structure determination of proteins using NMR spectroscopy. Angew Chem Int Ed Engl 48:8259–8262
Madl T, Guttler T, Gorlich D, Sattler M (2011) Structural analysis of large protein complexes using solvent paramagnetic relaxation enhancements. Angew Chem Int Ed Engl 50:3993–3997
Maji SK et al (2009) Functional amyloids as natural storage of peptide hormones in pituitary secretory granules. Science 325:328–332
Marsh JA et al (2007) Improved structural characterizations of the drkN SH3 domain unfolded state suggest a compact ensemble with native-like and non-native structure. J Mol Biol 367:1494–1510
Marsh JA, Baker JM, Tollinger M, Forman-Kay JD (2008) Calculation of residual dipolar couplings from disordered state ensembles using local alignment. J Am Chem Soc 130:7804–7805
McConnell HM (1958) Reaction rates by nuclear magnetic resonance. J Chem Phys 28:430–431
Meier S, Blackledge M, Grzesiek S (2008) Conformational distributions of unfolded polypeptides from novel NMR techniques. J Chem Phys 128:052204
Metallo SJ (2010) Intrinsically disordered proteins are potential drug targets. Curr Opin Chem Biol 14:481–488
Mittag T, Forman-Kay JD (2007) Atomic-level characterization of disordered protein ensembles. Curr Opin Struct Biol 17:3–14
Mukrasch MD et al (2007) Highly populated turn conformations in natively unfolded tau protein identified from residual dipolar couplings and molecular simulation. J Am Chem Soc 129:5235–5243
Nodet G et al (2009) Quantitative description of backbone conformational sampling of unfolded proteins at amino acid resolution from NMR residual dipolar couplings. J Am Chem Soc 131:17908–17918
Olivier M, Hollstein M, Hainaut P (2010) TP53 mutations in human cancers: origins, consequences, and clinical use. Cold Spring Harb Perspect Biol 2:a001008
Otting G (2010) Protein NMR using paramagnetic ions. Annu Rev Biophys 39:387–405
Ozenne V et al (2012) Flexible-meccano: a tool for the generation of explicit ensemble descriptions of intrinsically disordered proteins and their associated experimental observables. Bioinformatics 28:1463–1470
Parigi G et al (2014) Long-range correlated dynamics in intrinsically disordered proteins. J Am Chem Soc 136:16201–16209
Pintacuda G, Otting G (2002) Identification of protein surfaces by NMR measurements with a paramagnetic Gd(III) chelate. J Am Chem Soc 124:372–373
Putker M et al (2013) Redox-dependent control of FOXO/DAF-16 by transportin-1. Mol Cell 49:730–742
Raj N, Attardi LD (2017) The transactivation domains of the p53 protein. Cold Spring Harb Perspect Med 7:a026047
Respondek M, Madl T, Gobl C, Golser R, Zangger K (2007) Mapping the orientation of helices in micelle-bound peptides by paramagnetic relaxation waves. J Am Chem Soc 129:5228–5234
Rezaei-Ghaleh N et al (2018) Local and global dynamics in intrinsically disordered synuclein. Angew Chem Int Ed Engl 57:15262–15266
Romero P et al (1998) Thousands of proteins likely to have long disordered regions. Pac Symp Biocomput 3:437–448
Rowell JP, Simpson KL, Stott K, Watson M, Thomas JO (2012) HMGB1-facilitated p53 DNA binding occurs via HMG-Box/p53 transactivation domain interaction, regulated by the acidic tail. Structure 20:2014–2024
Salmon L et al (2010) NMR characterization of long-range order in intrinsically disordered proteins. J Am Chem Soc 132:8407–8418
Shan B, Li DW, Bruschweiler-Li L, Bruschweiler R (2012) Competitive binding between dynamic p53 transactivation subdomains to human MDM2 protein: implications for regulating the p53-MDM2/MDMX interaction. J Biol Chem 287:30376–30384
Shortle D, Ackerman MS (2001) Persistence of native-like topology in a denatured protein in 8 M urea. Science 293:487–489
Skinner SP et al (2016) CcpNmr AnalysisAssign: a flexible platform for integrated NMR analysis. J Biomol NMR 66:111–124
Sun Y, Friedman JI, Stivers JT (2011) Cosolute paramagnetic relaxation enhancements detect transient conformations of human uracil DNA glycosylase (hUNG). Biochemistry 50:10724–10731
Theillet FX et al (2014) Physicochemical properties of cells and their effects on intrinsically disordered proteins (IDPs). Chem Rev 114:6661–6714
Tompa P (2012) Intrinsically disordered proteins: a 10-year recap. Trends Biochem Sci 37:509–516
Uversky VN (2011) Intrinsically disordered proteins from A to Z. Int J Biochem Cell Biol 43:1090–1103
Uversky VN, Oldfield CJ, Dunker AK (2008) Intrinsically disordered proteins in human diseases: introducing the D2 concept. Annu Rev Biophys 37:215–246
Uversky VN et al (2014) Pathological unfoldomics of uncontrolled chaos: intrinsically disordered proteins and human diseases. Chem Rev 114:6844–6879
van den Berg MC et al (2013) The small GTPase RALA controls c-Jun N-terminal kinase-mediated FOXO activation by regulation of a JIP1 scaffold complex. J Biol Chem 288:21729–21741
van der Lee R et al (2014) Classification of intrinsically disordered regions and proteins. Chem Rev 114:6589–6631
Vousden KH, Prives C (2009) Blinded by the light: the growing complexity of p53. Cell 137:413–431
Wang Y, Schwieters CD, Tjandra N (2012) Parameterization of solvent–protein interaction and its use on NMR protein structure determination. J Magn Reson 221:76–84
Weigel D, Jackle H (1990) The fork head domain: a novel DNA binding motif of eukaryotic transcription factors? Cell 63:455–456
Wells M et al (2008) Structure of tumor suppressor p53 and its intrinsically disordered N-terminal transactivation domain. Proc Natl Acad Sci USA 105:5762–5767
Wright PE, Dyson HJ (1999) Intrinsically unstructured proteins: re-assessing the protein structure-function paradigm. J Mol Biol 293:321–331
Wright PE, Dyson HJ (2009) Linking folding and binding. Curr Opin Struct Biol 19:31–38
Wright PE, Dyson HJ (2015) Intrinsically disordered proteins in cellular signalling and regulation. Nat Rev Mol Cell Biol 16:18–29
Yuwen T et al (2018) Measuring solvent hydrogen exchange rates by multifrequency excitation 15N CEST: application to protein phase separation. J Phys Chem B 122:11206–11217
Zangger K et al (2009) Positioning of micelle-bound peptides by paramagnetic relaxation enhancements. J Phys Chem B 113:4400–4406
1. Center for Integrated Protein Science Munich (CIPSM) at the Department of Chemistry, Technische Universität München, Garching, Germany
2. Gottfried Schatz Research Center for Cell Signaling, Metabolism and Aging, Institute of Molecular Biology & Biochemistry, Medical University of Graz, Graz, Austria
3. The Campbell Family Institute for Breast Cancer Research at Princess Margaret Cancer Centre, Toronto, Canada
4. Institute of Pharmaceutical Sciences, University of Graz, Graz, Austria
5. BioTechMed-Graz, Graz, Austria
Hartlmüller, C., Spreitzer, E., Göbl, C. et al. J Biomol NMR (2019). https://doi.org/10.1007/s10858-019-00248-2
Accepted 11 April 2019
Quantized refrigerator for an atomic cloud
Wolfgang Niedenzu1, Igor Mazets2,3, Gershon Kurizki4, and Fred Jendrzejewski5
1Institut für Theoretische Physik, Universität Innsbruck, Technikerstraße 21a, A-6020 Innsbruck, Austria
2Vienna Center for Quantum Science and Technology (VCQ), Atominstitut, TU Wien, 1020 Vienna, Austria
3Wolfgang Pauli Institute, Universität Wien, 1090 Vienna, Austria
4Department of Chemical Physics, Weizmann Institute of Science, Rehovot 7610001, Israel
5Heidelberg University, Kirchhoff-Institut für Physik, Im Neuenheimer Feld 227, D-69120 Heidelberg, Germany
Published: 2019-06-28, volume 3, page 155
Eprint: arXiv:1812.08474v3
Scirate: https://scirate.com/arxiv/1812.08474v3
Doi: https://doi.org/10.22331/q-2019-06-28-155
Citation: Quantum 3, 155 (2019).
We propose to implement a quantized thermal machine based on a mixture of two atomic species. One atomic species implements the working medium and the other implements two (cold and hot) baths. We show that such a setup can be employed for the refrigeration of a large bosonic cloud starting above and ending below the condensation threshold. We analyze its operation in a regime conforming to the quantized Otto cycle and discuss the prospects for continuous-cycle operation, addressing the experimental as well as theoretical limitations. Beyond its applicative significance, this setup has a potential for the study of fundamental questions of quantum thermodynamics.
BibTeX data:
@article{Niedenzu2019quantized,
  doi = {10.22331/q-2019-06-28-155},
  url = {https://doi.org/10.22331/q-2019-06-28-155},
  title = {Quantized refrigerator for an atomic cloud},
  author = {Niedenzu, Wolfgang and Mazets, Igor and Kurizki, Gershon and Jendrzejewski, Fred},
  journal = {{Quantum}},
  issn = {2521-327X},
  publisher = {{Verein zur F{\"{o}}rderung des Open Access Publizierens in den Quantenwissenschaften}},
  volume = {3},
  pages = {155},
  month = jun,
  year = {2019}
}
https://doi.org/10.1038/srep06208
[56] F. Grusdt and E. Demler, in Quantum Matter at Ultralow Temperatures, Proceedings of the International School of Physics ``Enrico Fermi'', Vol. 191, edited by M. Inguscio, W. Ketterle, S. Stringari, and G. Roati (IOS Press, Amsterdam, 2016) p. 325.
https://doi.org/10.3254/978-1-61499-694-1-325
[57] M.-G. Hu, M. J. Van de Graaff, D. Kedar, J. P. Corson, E. A. Cornell, and D. S. Jin, Bose Polarons in the Strongly Interacting Regime, Phys. Rev. Lett. 117, 055301 (2016).
[58] N. B. Jørgensen, L. Wacker, K. T. Skalmstang, M. M. Parish, J. Levinsen, R. S. Christensen, G. M. Bruun, and J. J. Arlt, Observation of Attractive and Repulsive Polarons in a Bose-Einstein Condensate, Phys. Rev. Lett. 117, 055302 (2016).
[59] C. J. Myatt, E. A. Burt, R. W. Ghrist, E. A. Cornell, and C. E. Wieman, Production of Two Overlapping Bose-Einstein Condensates by Sympathetic Cooling, Phys. Rev. Lett. 78, 586 (1997).
https://doi.org/10.1103/PhysRevLett.78.586
[60] M. Prüfer, P. Kunkel, H. Strobel, S. Lannig, D. Linnemann, C.-M. Schmied, J. Berges, T. Gasenzer, and M. K. Oberthaler, Observation of universal dynamics in a spinor Bose gas far from equilibrium, Nature 563, 217 (2018).
https://doi.org/10.1038/s41586-018-0659-0
[61] C. Eigen, J. A. P. Glidden, R. Lopes, E. A. Cornell, R. P. Smith, and Z. Hadzibabic, Universal prethermal dynamics of Bose gases quenched to unitarity, Nature 563, 221 (2018).
[62] S. Erne, R. Bücker, T. Gasenzer, J. Berges, and J. Schmiedmayer, Universal dynamics in an isolated one-dimensional Bose gas far from equilibrium, Nature 563, 225 (2018).
[63] D. Gelbwaser-Klimovsky, W. Niedenzu, P. Brumer, and G. Kurizki, Power enhancement of heat engines via correlated thermalization in a three-level ``working fluid'', Sci. Rep. 5, 14413 (2015b).
[64] W. Niedenzu and G. Kurizki, Cooperative many-body enhancement of quantum thermal machine power, New J. Phys. 20, 113038 (2018).
https://doi.org/10.1088/1367-2630/aaed55
[65] R. Nandkishore and D. A. Huse, Many-Body Localization and Thermalization in Quantum Statistical Mechanics, Annu. Rev. Condens. Matter Phys. 6, 15 (2015).
https://doi.org/10.1146/annurev-conmatphys-031214-014726
[66] T. Hartmann, T. A. Schulze, K. K. Voges, P. Gersema, M. W. Gempel, E. Tiemann, A. Zenesini, and S. Ospelkaus, Feshbach resonances in $^{23}\mathrm{Na}+^{39}\mathrm{K}$ mixtures and refined molecular potentials for the NaK molecule, Phys. Rev. A 99, 032711 (2019).
[67] L. J. LeBlanc and J. H. Thywissen, Species-specific optical lattices, Phys. Rev. A 75, 053612 (2007).
[68] M. O. Scully, Collective Lamb Shift in Single Photon Dicke Superradiance, Phys. Rev. Lett. 102, 143601 (2009).
[69] I. E. Mazets and G. Kurizki, Multiatom cooperative emission following single-photon absorption: Dicke-state dynamics, J. Phys. B: At. Mol. Opt. Phys. 40, F105 (2007).
https://doi.org/10.1088/0953-4075/40/6/F01
[70] A. Manatuly, W. Niedenzu, R. Román-Ancheyta, B. Çakmak, Ö. E. Müstecaplioğlu, and G. Kurizki, Collectively enhanced thermalization via multiqubit collisions, Phys. Rev. E 99, 042145 (2019).
[71] F. Jendrzejewski, S. Eckel, N. Murray, C. Lanier, M. Edwards, C. J. Lobb, and G. K. Campbell, Resistive Flow in a Weakly Interacting Bose-Einstein Condensate, Phys. Rev. Lett. 113, 045305 (2014).
[72] E. Torrontegui, S. Ibánez, S. Martínez-Garaot, M. Modugno, A. del Campo, D. Guéry-Odelin, A. Ruschhaupt, X. Chen, and J. G. Muga, Shortcuts to Adiabaticity, Adv. At. Mol. Opt. Phys. 62, 117 (2013).
https://doi.org/10.1016/B978-0-12-408090-4.00002-5
[73] A. del Campo, A. Chenu, S. Deng, and H. Wu, in Thermodynamics in the Quantum Regime, edited by F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso (Springer, Cham, 2019) pp. 127-148.
[74] R. Kosloff, Quantum Thermodynamics: A Dynamical Viewpoint, Entropy 15, 2100 (2013).
[75] R. Alicki, Quantum Thermodynamics: An Example of Two-Level Quantum Machine, Open Syst. Inf. Dyn. 21, 1440002 (2014).
https://doi.org/10.1142/S1230161214400022
[76] K. Brandner and U. Seifert, Periodic thermodynamics of open quantum systems, Phys. Rev. E 93, 062134 (2016).
[77] V. Mukherjee, W. Niedenzu, A. G. Kofman, and G. Kurizki, Speed and efficiency limits of multilevel incoherent heat engines, Phys. Rev. E 94, 062109 (2016).
[78] D. J. Wineland and W. M. Itano, Laser cooling of atoms, Phys. Rev. A 20, 1521 (1979).
https://doi.org/10.1103/PhysRevA.20.1521
[79] K. Szczygielski, On the application of Floquet theorem in development of time-dependent Lindbladians, J. Math. Phys. 55, 083506 (2014).
https://doi.org/10.1063/1.4891401
[80] M. Lewenstein, J. I. Cirac, and P. Zoller, Master equation for sympathetic cooling of trapped particles, Phys. Rev. A 51, 4617 (1995).
[81] E. Geva, R. Kosloff, and J. L. Skinner, On the relaxation of a two-level system driven by a strong electromagnetic field, J. Chem. Phys. 102, 8541 (1995).
[82] R. Scelle, Dynamics and Motional Coherence of Fermions Immersed in a Bose Gas, Ph.D. thesis, University of Heidelberg (2013).
https://doi.org/10.11588/heidok.00015142
[83] N. Erez, G. Gordon, M. Nest, and G. Kurizki, Thermodynamic control by frequent quantum measurements, Nature 452, 724 (2008).
[84] G. Gordon, G. Bensky, D. Gelbwaser-Klimovsky, D. D. B. Rao, N. Erez, and G. Kurizki, Cooling down quantum bits on ultrashort time scales, New J. Phys. 11, 123025 (2009).
[85] G. A. Álvarez, D. D. B. Rao, L. Frydman, and G. Kurizki, Zeno and Anti-Zeno Polarization Control of Spin Ensembles by Induced Dephasing, Phys. Rev. Lett. 105, 160401 (2010).
[86] R. S. Whitney, Non-Markovian quantum thermodynamics: Laws and fluctuation theorems, Phys. Rev. B 98, 085415 (2018).
https://doi.org/10.1103/PhysRevB.98.085415
[87] A. Wunsche, Displaced Fock states and their connection to quasiprobabilities, Quantum Opt. 3, 359 (1991).
https://doi.org/10.1088/0954-8998/3/6/005
[88] H. Bateman, Higher Transcendental Functions Volume II, 1st ed. (McGraw-Hill, New York, 1953).
[1] Deniz Türkpençe and Ricardo Román-Ancheyta, "Tailoring the thermalization time of a cavity-field using distinct atomic reservoirs", arXiv:1708.03721.
The above citations are from SAO/NASA ADS (last updated 2019-07-16 21:53:38). The list may be incomplete as not all publishers provide suitable and complete citation data.
On Crossref's cited-by service no data on citing works was found (last attempt 2019-07-16 21:53:36).
This Paper is published in Quantum under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.
Quantum is an open-access peer-reviewed journal for quantum science and related fields.
Quantum is non-profit and community-run: an effort by researchers and for researchers to make science more open and publishing more transparent and efficient.
Sign up for our monthly digest of papers and other news.
Steering Board
Anne Broadbent
Harry Buhrman
Jens Eisert
Debbie Leung
Chaoyang Lu
Ana Maria Rey
Anna Sanpera
Urbasi Sinha
Robert W. Spekkens
Reinhard Werner
Birgitta Whaley
Andreas Winter
Ahsan Nazir
António Acín
Carlo Beenakker
Nicolas Brunner
Daniel Burgarth
Guido Burkard
Earl Campbell
Eric Cavalcanti
Gabriele De Chiara
Steven Flammia
Sevag Gharibian
Christopher Granade
Aram Harrow
Khabat Heshami
Chris Heunen
Shelby Kimmel
Matthew Leifer
Anthony Leverrier
Chiara Macchiavello
Ashley Montanaro
Milan Mosonyi
Roman Orus
Saverio Pascazio
Marco Piani
Joseph Renes
Jörg Schmiedmayer
Volkher Scholz
Ujjwal Sen
Jens Siewert
John Smolin
André Stefanov
Aephraim Steinberg
Krysta Svore
Luca Tagliacozzo
Marco Tomamichel
Francesca Vidotto
Thomas Vidick
Michael Walter
Witlef Wieczorek
Alexander Wilce
Ronald de Wolf
Magdalena Zych
Karol Życzkowski
Christian Gogolin
Marcus Huber
Lídia del Rio
Support Quantum and
or print our poster.
Feedback and discussion on /r/quantumjournal
Contact us by email.
© Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften.
Data protection and privacy policy
https://doi.org/10.22331/q
Quantum practices open accounting.
Copyright © 2019 Quantum – OnePress theme by FameThemes
This website uses cookies to improve your experience. For more information see the data protection and privacy policy. Accept | CommonCrawl |
Fast $L^1$ convergence implies almost uniform convergence, check work
I wrote up the following proof of the lemma; please check whether I made any mistakes. Thank you!
Statement: Suppose that $f_n,f : X\rightarrow \mathbb{R}$ are measurable functions such that $\sum_{n=1}^{\infty} ||f_n - f||_{L^1} < \infty$. Then $f_n \rightarrow f$ almost uniformly, which in turn implies $f_n \rightarrow f$ pointwise almost everywhere.
Proof: Let $\|\cdot\|$ denote the $L^1$ norm. Given $\sum_n \|f_n - f\|<\infty$, choose a sequence $\{c_n\}$ that increases to $\infty$ and yet satisfies
$$\sum_n c_n \|f_n - f\|< \infty.$$
Using Chebyshev's inequality, we have $$\frac{1}{c_n} \mu(\{x\in X : |f_n - f|\geq \frac{1}{c_n}\}) \leq \|f_{n} - f\|,$$ that is, $$\mu(\{x\in X : |f_n - f|\geq \frac{1}{c_n}\}) \leq c_n \|f_{n} - f\|.$$
For each $\epsilon > 0$ there exists an $N$ such that $$\sum_{n=N}^\infty c_n \|f_{n} - f\| < \epsilon.$$ Define $$A:=\bigcup_{n=N}^{\infty} \{x\in X : |f_{n} - f|\geq \frac{1}{c_n}\};$$ then $$\mu(A) \leq \sum_{n=N}^\infty \mu(\{x\in X : |f_n - f|\geq \frac{1}{c_n}\}) \leq \sum_{n=N}^\infty c_n \|f_{n} - f\| < \epsilon$$ and, for all $n \geq N$, $$\sup_{x\not\in A} |f_{n}(x) - f(x)| \leq \frac{1}{c_n} \longrightarrow 0 \quad (n\to\infty),$$ which means $f_n$ converges uniformly to $f$ outside of $A$.
For the existence of $c_n$, see here.
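Since links rot, here is one standard construction of such a sequence (my own sketch of the usual argument, not necessarily the one behind the link). Let $a_n = \|f_n - f\|$, so $\sum_n a_n < \infty$. Choose indices $N_1 < N_2 < \cdots$ with $\sum_{n\geq N_k} a_n < 4^{-k}$, and set $c_n = 1$ for $n < N_1$ and $c_n = 2^k$ for $N_k \leq n < N_{k+1}$. Then $c_n$ increases to $\infty$ and $$\sum_{n \geq N_1} c_n a_n = \sum_{k\geq 1} \sum_{N_k \leq n < N_{k+1}} 2^k a_n \leq \sum_{k\geq 1} 2^k \cdot 4^{-k} = 1 < \infty.$$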
Xiao
Summing the first inequality over $n$, I get $$\lambda \sum_{n=1}^\infty \frac{1}{2^n} \mu\left(\{x\in X :|f_n(x)-f(x)|\geq \frac{\lambda}{2^n} \}\right) \leq \sum_{n=1}^\infty||f_n-f||_{L^1},$$ with the factor of $1/2^n$ hanging in there. You can't just sum it as a geometric series; it's attached to a quantity that depends on $n$. In a nutshell, $a_1b_1+a_2b_2 \ne (a_1+a_2)(b_1+b_2)$.
To fix this, use $\lambda$ without $2^n$: $$\lambda \sum_{n=1}^\infty \mu\left(\{x\in X :|f_n(x)-f(x)|\geq \lambda \}\right) \leq \sum_{n=1}^\infty||f_n-f||_{L^1}.$$ This directly gives the conclusion $$\sum_{n=1}^\infty \mu \left(\{x\in X :|f_n(x)-f(x)|\geq \lambda\}\right) <\infty,$$ and the proof proceeds as you wrote.
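To spell out the remaining step (my own sketch of the standard argument): fix $\lambda > 0$ and set $E_n(\lambda) = \{x\in X : |f_n(x)-f(x)| \geq \lambda\}$. By the summability above, for any $\delta > 0$ there is an $N$ with $$\mu\Big(\bigcup_{n\geq N} E_n(\lambda)\Big) \leq \sum_{n\geq N} \mu(E_n(\lambda)) < \delta,$$ and outside this union $|f_n - f| < \lambda$ for all $n \geq N$. Given $\epsilon > 0$, apply this with $\lambda = 1/k$ and $\delta = \epsilon\,2^{-k}$ to get sets $A_k$, and put $A = \bigcup_k A_k$; then $\mu(A) < \epsilon$ and $f_n \to f$ uniformly on $X\setminus A$.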
Yes, thank you. I edited my proof. – Xiao Mar 20 '14 at 9:52
I think that this proof is incorrect. You need to show that for any $\varepsilon >0$, there exists $A\subset X$ with $\mu(A)<\varepsilon$ so that $\sup_{x\notin A}|f_n(x)-f(x)|\to 0$.
You proved that for any $\varepsilon>0$ and $\lambda >0$, there exists $A\subset X$ with $\mu(A)<\varepsilon$ so that $|f_n(x)-f(x)|<\lambda$ for large $n$ and $x\notin A$.
$A$ here depends on $\lambda$, which is forbidden.
Yoav Bar Sinai
Yes, it was incorrect; I kind of forgot about this one. Updated with a new proof. – Xiao Feb 2 '15 at 20:09
How to derive g and u symmetry labels for orbitals?
When asked whether a molecule has an inversion center, we "invert" the coordinates of all atoms; i.e. we move each atom from its position through the center of symmetry to a new position on the opposite side, equidistant from the center, and see if the final configuration is indistinguishable from the initial one. Simple enough.
I am confused when trying to decide whether molecular orbitals have g or u symmetry. Should I move the atoms through the center of symmetry, should I move "each lobe" through the center of symmetry, or some other way? What is the most general way of describing the symmetry operation for orbitals?
In Figure 1, a bonding $\pi$ orbital is depicted. This is supposed to be antisymmetric with respect to the inversion center in the middle of the bond. It seems, therefore, that I need to invert the lobes somehow, or move in an "X" fashion across the bond. But this seems different from the molecular case, where we simply invert atomic coordinates. Figure 2 should be symmetric under inversion, and inverting the lobe positions leads to this result.
And what about when the two overlapping orbitals are not of the same type, e.g. a $\pi$ overlap between d and p orbitals? Will the MO ever have g symmetry?
I just fail to get the unifying link between the molecular case and the orbital case.
Edit: Something just hit me: Should I always think of it as "inverting all coordinates within the entity", regardless of whether the entity is an atom or an orbital?
Figure 1. Bonding $\pi$ orbital
Figure 2. Antibonding $\pi$ orbital
Yoda
Short answer:
You should move everything through the centre of symmetry. That's why it's a centre of symmetry.
On the other hand, note that classifying orbitals just by g and u does not make much sense. You should always consider the entire molecule's point group and thus always consider which (set of) irreducible representation(s) an orbital or a set of orbitals has/have in said point group.
For example, consider a set of six atoms arranged in an octahedron around a central atom; each of the surrounding atoms has three p-type orbitals, and we are only interested in those that are perpendicular to the outside atom – central atom axis. These twelve p-orbitals transform as $\mathrm{t_{1g} + t_{1u} + t_{2g} + t_{2u}}$. Omitting the $\mathrm{t}$ and the subscripted numeral is an incomplete description of the orbitals. Only those in exactly the same irreducible representation will mix in a non-zero fashion.
Jan
g and u labels are only used when the molecule itself possesses the inversion symmetry element. For a diatomic molecule, this means that both atoms have to be the same, i.e. it has to be a homonuclear diatomic molecule $(D_{\infty\mathrm{h}})$.
How do I carry out the inversion?
Let's consider the $\ce{O2}$ molecule, and in particular the $\pi$ bonding MO (or one of them, to be precise, since they occur in degenerate pairs).
1. Find the point of inversion of the molecule. Clearly it has to be the midpoint of the O=O bond, i.e. exactly in the middle.
2. Draw a straight line from one lobe to the point of inversion.
3. Extend the straight line through the point of inversion. You will see that you have gone from a shaded lobe to an unshaded lobe, indicating that the phase has changed. Therefore, the $\pi$ bonding MO is ungerade, or u.
If you start from the bottom-left lobe instead, you draw a line through the point of inversion, end up in the top-right lobe, and reach the same conclusion.
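To make the "invert everything through the centre" recipe concrete, here is a small numerical sketch (my own illustration, not part of the original answer; the exponential radial form and the two-centre model are simplifying assumptions). It builds a $\pi$-type MO from two $p_x$ functions on nuclei at $\pm a$ on the bond ($z$) axis and tests whether $\psi(-\mathbf{r})$ equals $+\psi(\mathbf{r})$ (gerade) or $-\psi(\mathbf{r})$ (ungerade):

import numpy as np

def p_x(r, center):
    # Unnormalized 2p_x-like function: x-component of (r - center) times exp(-|r - center|).
    d = r - center
    return d[0] * np.exp(-np.linalg.norm(d))

def pi_mo(r, c1, c2, a=1.0):
    # pi MO built from p_x orbitals on atoms at +a and -a along the bond (z) axis.
    return c1 * p_x(r, np.array([0.0, 0.0, +a])) + c2 * p_x(r, np.array([0.0, 0.0, -a]))

def parity(mo, n=200, seed=0):
    # Compare psi(-r) with +psi(r) and -psi(r) at random sample points.
    rng = np.random.default_rng(seed)
    pts = rng.normal(size=(n, 3))
    vals = np.array([mo(p) for p in pts])
    inv = np.array([mo(-p) for p in pts])
    if np.allclose(inv, vals):
        return "g"
    if np.allclose(inv, -vals):
        return "u"
    return "neither"

print(parity(lambda r: pi_mo(r, 1.0, 1.0)))   # bonding pi (in-phase lobes): u
print(parity(lambda r: pi_mo(r, 1.0, -1.0)))  # antibonding pi (out-of-phase): g

Applying the same test to a mixed combination, say a d-type function on one centre and a p-type function on the other, returns "neither", consistent with the point below that such MOs carry no g/u label.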
As for a $\pi$ overlap between a d and a p orbital: the parity of such MOs cannot be classified as g or u. However, if we are talking about a homonuclear diatomic, then this wavefunction is not acceptable. You cannot have a wavefunction that has only d orbital character from one atom but only p orbital character from the other. By symmetry, the contributions from both atoms' d orbitals have to be equal.
This arises because the Hamiltonian is invariant to the permutation of identical particles. Essentially, the idea is that both atoms are exactly identical, so there cannot be any way for them to be distinguished from each other. If you had such a wavefunction, and if it was populated, then there would be different amounts of electron density on the two atoms, which would mean that the atoms could then be distinguished. This is quantum-mechanically forbidden.
Such MOs may arise in heteronuclear diatomics. However, heteronuclear diatomics $(C_{\infty\mathrm{v}})$ do not possess the inversion symmetry element, and accordingly their MOs cannot be classified as g or u. Since inversion does not preserve the molecule, there is no requirement that it should preserve the wavefunction.
What if one atom does not have any d orbitals?
There is no such thing. Every atom has d orbitals.
Whether they have the same energy, or whether they are occupied, is another thing altogether. If the energies are different, or if one atom has occupied d orbitals and the other does not, then it is a heteronuclear diatomic and as I said earlier, the g and u labels are not appropriate in such a case.
orthocresol ♦
Quantitative aspects of chemical change
Stoichiometric calculations
The mole (n) (abbreviation mol) is the SI (Standard International) unit for amount of substance.
The number of particles in a mole is called Avogadro's number and its value is \(\text{6,022} \times \text{10}^{\text{23}}\). These particles could be atoms, molecules or other particle units, depending on the substance.
The molar mass (M) is the mass of one mole of a substance and is measured in grams per mole or \(\text{g·mol$^{-1}$}\). The numerical value of an element's molar mass is the same as its relative atomic mass. For a covalent compound, the molar mass has the same numerical value as the molecular mass of that compound. For an ionic substance, the molar mass has the same numerical value as the formula mass of the substance.
The relationship between moles (n), mass in grams (m) and molar mass (M) is defined by the following equation:
\[n = \frac{m}{M}\]
In a balanced chemical equation, the numbers in front of the chemical formulas describe the mole ratio of the reactants and products.
The empirical formula of a compound is an expression of the relative number of each type of atom in the compound.
The molecular formula of a compound describes the actual number of atoms of each element in a molecule of the compound.
The formula of a substance can be used to calculate the percentage by mass that each element contributes to the compound.
The percentage composition of a substance can be used to deduce its chemical formula.
We can use the products of a reaction to determine the formula of one of the reactants.
We can find the number of moles of waters of crystallisation.
One mole of gas occupies a volume of \(\text{22,4}\) \(\text{dm$^{3}$}\) at STP.
The concentration of a solution can be calculated using the following equation,
\[C = \frac{n}{V}\]
where C is the concentration (in \(\text{mol·dm$^{-3}$}\)), n is the number of moles of solute dissolved in the solution and V is the volume of the solution (in \(\text{dm$^{3}$}\)). The concentration is a measure of the amount of solute that is dissolved in a given volume of liquid (a short worked sketch in code follows at the end of this summary).
The concentration of a solution is measured in \(\text{mol·dm$^{-3}$}\).
Stoichiometry is the calculation of the quantities of reactants and products in chemical reactions. It is also the numerical relationship between reactants and products.
The theoretical yield of a reaction is the maximum amount of product that we expect to get out of a reaction.
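As a worked illustration of the relationships in this summary (the numbers are assumed for the example, not taken from the chapter):

AVOGADRO = 6.022e23          # particles per mole
MOLAR_VOLUME_STP = 22.4      # dm^3 per mole of gas at STP

def moles_from_mass(m, M):
    # n = m / M, with m in grams and M in g/mol
    return m / M

def concentration(n, V):
    # C = n / V, with V in dm^3, giving mol/dm^3
    return n / V

# Assumed example: 4.0 g of NaOH (M = 40.0 g/mol) dissolved to 0.5 dm^3 of solution.
n = moles_from_mass(4.0, 40.0)     # 0.1 mol
print(n * AVOGADRO)                # about 6.022e22 formula units
print(concentration(n, 0.5))       # 0.2 mol/dm^3
print(0.25 * MOLAR_VOLUME_STP)     # 0.25 mol of gas at STP occupies 5.6 dm^3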
Session code: ycpb
11:00 Lukas Schimmer (University of Copenhagen), Distinguished self-adjoint extensions of operators with gaps
11:20 Hans Konrad Knörr (Aalborg Universitet), On the adiabatic behaviour of a bound state when diving into the continuous spectrum
11:40 Jacob Shapiro (ETH Zurich), The topology of non-interacting electrons in strongly-disordered chiral chains
13:30 Simon Mayer (Institute of Science and Technology Austria), The free energy of a dilute 2d Bose gas
13:50 Nikolai Leopold (Institute of Science and Technology Austria), Mean-field Dynamics for the Nelson model with Fermions
14:10 Markus Lange (Karlsruhe Institute of Technology), On Asymptotic Expansions for Spin Boson Models
14:30 Alessandro Olgiati (SISSA, Trieste), Ground state properties of mixtures of condensates
15:30 Paweł Duch (Jagiellonian University), Adiabatic limit and vacuum state in Epstein-Glaser approach to perturbative quantum field theory
15:50 Federico Faldino (University of Genova), On interacting KMS states in pAQFT: Stability, Relative Entropy and Entropy Production
16:10 Xiao He (Département de mathématiques et de statistique, Université Laval), How to add ''ghosts'' in BRST reduction? ---- A remark on semi-infinite cohomology
16:30 Toshimitsu Takaesu (Gunma University), Ground States of Quantum Electrodynamics with Cutoffs
11:00 Sorour Karimi Dehbokri (Technische Universitat Braunschweig), Renormalization Group flow
11:20 Tianshu Liu (The University of Melbourne), osp(1|2) Minimal Models And Their Coset Construction
11:40 H. M. Bharath (Georgia Institute of Technology), Non-Abelian Geometric Phases Carried by the Spin Fluctuation Tensor
13:30 Konstantin Merz (Ludwig-Maximilians-Universität München), On the Strong Scott Conjecture in the Chandraskehar Model
13:50 Luca Nenna ((CEREMADE) Université Paris-Dauphine), Multi-Marginal Optimal Transport in Quantum Mechanics
14:10 Robert Rauch (Technische Universität Braunschweig), Orthogonalization of Fermion k-Body Operators and Representabilty
14:30 Arnaud Triay (Ceremade - Université Paris-Dauphine), Derivation of the dipolar Gross-Pitaevskii energy
15:30 Atsuhide Ishida (Tokyo University of Science), Propagation property and inverse scattering for fractional powers of the negative Laplacian
15:50 Michal Jex (Karlsruher Institut für Technologie), Revisiting Lieb-Thirring Inequalities
16:10 Yukihide Tadano (The University of Tokyo), Long-range scattering theory for discrete Schrödinger operators
Lukas Schimmer
Distinguished self-adjoint extensions of operators with gaps
Semibounded symmetric operators have a distinguished self-adjoint extension, the Friedrichs extension. The eigenvalues of the Friedrichs extension are given by a variational principle that involves only the domain of the symmetric operator. Although Dirac operators describing relativistic particles are not semibounded, the Dirac operator with Coulomb potential is known to have a distinguished extension. In this talk I will relate this extension to a generalisation of the Friedrichs extension to the setting of operators satisfying a gap condition. In addition I will prove, in the general setting, that the eigenvalues of this extension are also given by a variational principle that involves only the domain of the symmetric operator.
This is joint work with Jan Philip Solovej and Sabiha Tokus.
Hans Konrad Knörr
On the adiabatic behaviour of a bound state when diving into the continuous spectrum
The survival probability of a bound state is studied when an external potential varies smoothly and adiabatically in time. The initial state corresponds to a discrete eigenvalue which dives into the continuous spectrum and re-emerges from it as the potential is varied in time and finally returns to its initial value. The main result is that the survival probability of this bound state vanishes in the adiabatic limit. The methods used in the proof are quite robust and may be adopted to cover a large class of operators, including Schrödinger and Dirac operators. This talk is based on joint work with H. Cornean, A. Jensen and Gh. Nenciu.
Jacob Shapiro
The topology of non-interacting electrons in strongly-disordered chiral chains
We explore the strongly-disordered regime of chiral one dimensional systems which may exhibit topological properties. We extend the usual definitions of the topological invariants given in the spectral gap regime to the mobility gap regime, show the connection to localization, and prove the bulk-edge duality in both spectral and mobility gap regimes.
(Based on joint work with G. M. Graf)
Simon Mayer
Institute of Science and Technology Austria
The free energy of a dilute 2d Bose gas
We consider a two-dimensional interacting Bose gas in a homogeneous setting. The two-body interaction potential is assumed to be non-negative and of finite scattering length $a$. Under these quite general assumptions, we are able to obtain an asymptotic expansion formula of the free energy of the system at non-negative temperature in the dilute limit $a^2 \rho \ll 1$, where $\rho$ is the density. In the limit of zero temperature, the formula reduces to the asymptotic ground state energy which is an earlier result by Lieb and Yngvason (2001). Our work extends the corresponding result in three dimensions proved by R. Seiringer (2008) and J. Yin (2010).
Nikolai Leopold
Mean-field Dynamics for the Nelson model with Fermions
The Nelson model (with ultraviolet cutoff) describes a quantum system of non-relativistic identical particles coupled to a quantized scalar field. In this talk, I would like to discuss its time evolution in a mean-field limit of many fermions which is coupled to a semiclassical limit. At time zero, we assume that the bosons of the radiation field are in a coherent state and that the state of the fermions is given by a Slater determinant, whose reduced one-particle density matrix is an orthogonal projection with semiclassical structure. At later times and in the limit of many fermions, it can be proven that the fermion state remains close to a Slater determinant and that the time evolution is approximately described by the fermionic Schrödinger-Klein-Gordon equations. I will introduce the mentioned models and explain our main theorem. The talk is based on work in progress with Sören Petrat.
Markus Lange
On Asymptotic Expansions for Spin Boson Models
We consider expansions of eigenvalues and eigenvectors for a class of models known as generalized spin boson models. We prove existence of asymptotic expansions for the ground state and the ground state energy to arbitrary order. We need a mild but very natural infrared assumption, which is weaker than the assumption usually needed for other methods such as operator-theoretic renormalization to be applicable. The result complements known analyticity properties.
Alessandro Olgiati
SISSA, Trieste
Ground state properties of mixtures of condensates
I will present a rigorous proof of the ground state energy asymptotics for multi-component condensates. Such systems consist of multiple species of identical bosons, and their mathematical study has become topical very recently.
I will show that, both in the mean field and Gross-Pitaevskii regime, the leading order of the ground state energy is captured by the minimum of a suitable one-body non-linear functional. Moreover, the ground state exhibits condensation in the sense of reduced density matrices.
In the mean field regime, by an implementation of Bogoliubov theory, we are also able to compute the next-to-leading order of the ground state energy asymptotics, and to prove a norm approximation for the ground state.
All our results hold under a miscibility condition, as is often called in physics literature, that allows us to prove uniqueness of the minimizer of the non-linear theory.
This is a joint work with Alessandro Michelangeli and Phan Thành Nam.
Paweł Duch
Adiabatic limit and vacuum state in Epstein-Glaser approach to perturbative quantum field theory
The fundamental objects of Epstein-Glaser approach to perturbative quantum field theory are the time-ordered products of polynomials in the basic fields and their derivatives. Their construction is carried out in the position space and does not require the introduction of any ultraviolet regularization. Using the time-ordered products one can easily define the scattering matrix, the interacting fields and other objects of interests in the interacting theory in which the coupling constant is replaced by a Schwartz function called the switching function. The switching function plays the role of the infrared regulator, which is removed by taking the adiabatic limit.
In the talk, I will outline my recent results on the existence of the so-called weak adiabatic limit. The result allows one to construct the Wightman and Green functions in a large class of models, which includes all models with interaction vertices of dimension 4. The existence of the weak adiabatic limit can also be used to define a vacuum state (a real, normalized, positive, Poincaré-invariant functional) on the algebra of interacting fields constructed by means of the algebraic adiabatic limit.
Federico Faldino
University of Genova
On interacting KMS states in pAQFT: Stability, Relative Entropy and Entropy Production
In this talk, we analyze the stability and return-to-equilibrium properties of the interacting KMS states built by Fredenhagen and Lindner for a scalar field theory in the framework of perturbative Algebraic Quantum Field Theory [1]. In particular, we show that these properties hold for compactly supported potentials, while they fail if the adiabatic limit is considered. This failure led to the definition of a Non-Equilibrium Steady State in pAQFT [2].
Furthermore, in order to study this new non-equilibrium state, we define relative entropy and entropy production in the framework of pAQFT [3].
[1] K. Fredenhagen, F. Lindner - "Construction of KMS States in Perturbative QFT and Renormalized Hamiltonian Dynamics". Commun. Math. Phys. 332 – 895 (2014). [2] N. Drago, F. Faldino, N. Pinamonti - "On the stability of KMS states in perturbative algebraic quantum field theory". Commun. Math. Phys 357, Issue 1 (2018) 267-293. [3] N. Drago, F. Faldino, N. Pinamonti - "Relative Entropy and Entropy Production for Equilibrium States in pAQFT". ArXiv:[1710.09747] (2017).
Xiao He
Département de mathématiques et de statistique, Université Laval
How to add ''ghosts'' in BRST reduction ? ----A remark on semi-infinite cohomology
The rigorous mathematical definition of semi-infinite cohomology was introduced by B. Feigin in 1984; it can be considered as the counterpart of BRST reduction in physics. Unlike ordinary Lie algebra cohomology, computing semi-infinite cohomology requires that the Lie algebra admit a semi-infinite structure. Roughly speaking, a semi-infinite structure is a Lie algebra module structure on the space of semi-infinite forms, and the requirement of such a structure is to make the BRST differential nilpotent, i.e., square zero, which is essential in cohomology theory.
What if the Lie algebra admits no semi-infinite structure? One way to adjust this is to consider some one-dimensional central extension, which is called cancellation of anomalies in physics. Another way is, as the physicists already did, to add more ``ghosts'', hence to modify the BRST complex, and then to deform the BRST differential to make it nilpotent.
In my talk, I will take affine W-algebras as an example to explain how to add ''ghosts'' and how to modify the BRST differential in a rigorous mathematical way. As a byproduct, we will give a uniform definition of affine W-algebras in the general nilpotent element case. This is based on our recent work ''A remark on semi-infinite cohomology,'' arXiv:1712.05484.
Toshimitsu Takaesu
Ground States of Quantum Electrodynamics with Cutoffs
We investigate a system of a Dirac field coupled to a quantized radiation field in the Coulomb gauge. The Hilbert space for the system is defined by $ \mathcal{F}_{\textrm{QED}} = \mathcal{F}_{\textrm{Dirac}} \otimes \mathcal{F}_{\textrm{rad}} $ where $ \mathcal{F}_{\textrm{Dirac}}$ is the fermion Fock space over $ L^{2} (\mathbb{R}^{3}_{\mathbf{p}} ; \mathbb{C}^{4} ) $ and $\mathcal{F}_{\textrm{rad}} $ is the boson Fock space over $L^{2} (\mathbb{R}^{3}_{\mathbf{k}} \times \{ 1, 2 \})$. The total Hamiltonian is defined by \begin{align*} H_{\textrm{QED}}= H_{\textrm{Dirac}} \otimes I + I \otimes H_{\textrm{rad}} & + \kappa_{\textrm{I}} \sum_{j=1}^3 \int_{\mathbb{R}^3} \chi_{\textrm{I}}(\mathbf{x}) \left( \psi^{\ast} (\mathbf{x}) \alpha^j \psi (\mathbf{x}) \otimes A^j (\mathbf{x} ) \right) d \mathbf{x} \\ & \qquad + \kappa_{\textrm{II}} \int_{\mathbb{R}^{6}} \frac{ \chi_{\textrm{II}}(\mathbf{x})\chi_{\textrm{II}}(\mathbf{y}) }{|\mathbf{x}-\mathbf{y}|} \left( \psi^{\ast} (\mathbf{x}) \psi (\mathbf{x}) \psi^{\ast} (\mathbf{y}) \psi (\mathbf{y}) \otimes I \right) d \mathbf{x} d \mathbf{y} , \end{align*} on the Hilbert space. Here $H_{\textrm{Dirac}}$ and $H_{\textrm{rad}}$ denote the field energy Hamiltonians, $\psi (\mathbf{x})=(\psi^{\,l}(\mathbf{x}))_{l=1}^4 $ and $ \mathbf{A}(\mathbf{x})= (A^j(\mathbf{x}) )_{j=1}^3$ denote the field operators with ultraviolet cut-offs, and $ (\alpha^j )_{j=1}^3$ denote the Dirac matrices. The total Hamiltonian is self-adjoint and bounded from below. We assume spatially localized conditions and momentum regularity conditions. Then it is proven that the total Hamiltonian has a ground state for all values of coupling constants. In particular, its multiplicity is finite.
Sorour Karimi Dehbokri
Technische Universitat Braunschweig
Renormalization Group flow
Almost two decades ago, the Renormalization Group flow defined by the smooth Feshbach-Schur map was shown by V. Bach, Chen, Fröhlich, and Sigal to possess a codimension-one contractivity property. This contractivity ensures that the iterative application of $R_{\rho}$ (the Renormalization Transformation, which depends on a scaling parameter $\rho$) generates a (time-discrete) dynamical system on $D$ (a small ball in the Banach space of operators that is the domain of definition of the RG map) with a fixed-point manifold of dimension one. We now present an improved scheme that is (fully) contracting and has no marginal directions anymore. This allows for characterizing the properties of the fixed point much more precisely. This is joint work with V. Bach.
Tianshu Liu
osp(1|2) Minimal Models And Their Coset Construction
Conformal field theory is an essential tool of modern mathematical physics with applications to string theory and to the critical behaviour of statistical lattice models. The symmetries of a conformal field theory include all angle-preserving transformations. In two dimensions, these transformations generate the Virasoro algebra, a powerful symmetry that allows one to calculate observable quantities analytically. The construction of one family of conformal field theories from the affine Kac-Moody algebra sl(2) was proposed by Kent in 1986 as a means of generalising the coset construction to non-unitary Virasoro minimal models; these are known as the Wess-Zumino-Witten models at admissible levels. This talk aims to illustrate, with the example of the affine Kac-Moody superalgebra osp(1|2) at admissible levels, how the representation theory of a vertex operator superalgebra can be studied through a coset construction. The method allows us to determine key aspects of the theory, including its module characters, modular transformations and fusion rules.
H. M. Bharath
Non-Abelian Geometric Phases Carried by the Spin Fluctuation Tensor
The geometric information of the trajectory along which a physical system is transported is often accumulated in the system's gauge variables, and is known as geometric phase. This has been a subject of intense study, both theoretically and experimentally over the past three decades. Here, we develop a new non-Abelian geometric phase that is accumulated in the second order spin moments of a quantum spin system.
The expectation values of the first and second moments of the quantum mechanical spin operator can be used to define a spin vector and spin fluctuation tensor, respectively. The former is a vector inside the unit ball in three-dimensional space, while the latter is represented by an ellipsoid. By considering transport of the spin vector along loops in the unit ball, we show that the spin fluctuation tensor picks up geometric phase information [1]. For the physically important case of spin one, the geometric phase is formulated in terms of an SO(3) operator. Loops defined in the unit ball fall into two classes: those which do not pass through the origin and those which pass through the origin. The former class of loops subtend a well defined solid angle at the origin while the latter do not, and the corresponding geometric phase is non-Abelian. To deal with both classes, we introduce a generalized solid angle, which helps to clarify the interpretation of the geometric phase information.
[1]. H. M. Bharath, "Non-Abelian geometric phases carried by the spin fluctuation tensor", arXiv: 1702.08564
Konstantin Merz
On the Strong Scott Conjecture in the Chandraskehar Model
We consider large neutral atoms of atomic number $Z$. For such atoms the speed of electrons close to the nucleus is a substantial fraction of the speed of light $c$. Thus, a relativistic description is necessary. We model the atom by the pseudo-relativistic Hamiltonian of Chandrasekhar.
Our main result is the convergence of the suitably rescaled one-particle ground state density in each angular momentum channel: at distances of order $1/Z$ from the nucleus it converges to the corresponding density of the one-particle hydrogenic Chandrasekhar operator. This proves a generalization of the strong Scott conjecture for relativistic atoms.
The proof uses the Scott correction, i.e., the two term expansion of the ground state energy (Solovej, Sørensen, and Spitzer and Frank, Siedentop, and Warzel), and a new equivalence of Sobolev norms generated by the free and the hydrogenic Chandrasekhar operators.
The result underscores that relativistic effects occur close to the nucleus and that self-interactions of the innermost electrons are negligible.
Luca Nenna
(CEREMADE) Université Paris-Dauphine
Multi-Marginal Optimal Transport in Quantum Mechanics
The strong-interaction limit of the Hohenberg-Kohn functional defines a multi-marginal optimal transport problem with Coulomb cost. From physical arguments, the solution of this limit is expected to yield strictly-correlated particle positions, related to each other by co-motion functions (or optimal maps), but the existence of such a deterministic solution in the general three-dimensional case is still an open question. A conjecture for the co-motion functions for radially symmetric densities was presented in Phys. Rev. A 75, 042511 (2007), and later used to build approximate exchange-correlation functionals for electrons confined in low-density quantum dots. In this talk I will revisit the whole issue both from the formal and numerical point of view (by means of the entropic regularisation of Optimal Transport), finding that even if the conjectured maps are not always optimal, they still yield an interaction energy (cost) that is numerically very close to the true minimum.
Robert Rauch
Orthogonalization of Fermion k-Body Operators and Representabilty
The reduced $k$-particle density matrix ($k$-RDM) of a density matrix $\rho$ on fermion Fock space $\mathcal{F}$ can be defined as the image under the orthogonal projection $$\pi_k:\mathcal{L}^2(\mathcal{F})\to \mathcal{O}_k\subset\mathcal{L}^2(\mathcal{F})$$ onto the space $\mathcal{O}_k$ of $k$-body observables on $\mathcal{F}$ within the space of Hilbert-Schmidt operators $\mathcal{L}^2(\mathcal{F})$. A proper understanding of $\pi_k$ is intimately related to the representability problem, a long-standing open problem in computational quantum chemistry, which amounts to give a computationally efficient characterization of the cone $\pi_k(\mathcal{P})$ of representable $k$-RDMs, where $\mathcal{P}$ denotes the cone of positive trace-class operators on $\mathcal{F}$.
The goal of this joint work with V. Bach is the derivation of new representability conditions and the characterization of $\pi_k$ in the finite-dimensional case. We have recently completed the first step towards this goal by explicitly constructing a distinguished orthonormal basis of $\mathcal{L}^2(\mathcal{F})$ which restricts to a basis adapted to the flag $0\subsetneq\mathcal{O}_1\subsetneq\mathcal{O}_2\subsetneq\cdots$ of $k$-body observables. This orthonormal basis serves as a tool for the study of the cone $\pi_k(\mathcal{P})$ of representable density-matrices.
Arnaud Triay
Ceremade - Université Paris-Dauphine
Derivation of the dipolar Gross-Pitaevskii energy
The Gross-Pitaevskii theory effectively describes the ground state and the evolution of a dilute and ultracold gas of bosons. A vast literature exists on the derivation of this theory from the principles of quantum mechanics; nevertheless, it remains a challenging task to address the case of non-positive interactions such as dipole-dipole potentials. We will present how, by using the so-called quantum de Finetti theorem, we can show the convergence of the ground state and of the ground state energy of the (linear) $N$-body Hamiltonian towards those of the dipolar GP functional. The latter, in addition to the usual cubic nonlinearity, has a long range dipolar term $K\star |u|^2 |u|^2$. Our results hold under the assumption that the two-particle interaction is scaled in the form $N^{3\beta-1}w(N^\beta x)$ for some $0\leq\beta< \beta_{max}$ with $\beta_{max} = 1/3 + s/(45 + 42s)$, where $s$ is related to the growth of the trapping potential. arXiv:1703.03746
Atsuhide Ishida
Propagation property and inverse scattering for fractional powers of the negative Laplacian
We define the fractional power of the negative Laplacian as the self-adjoint operator acting on $L^2(\mathbb{R}^n)$: \begin{equation*} H_{0,\rho}=(-\Delta)^\rho/(2\rho) \end{equation*} for $1/2\leqslant\rho\leqslant1$ where $\Delta=\sum_{j=1}^n\partial_{x_j}^2$. If $\rho=1$, $H_{0,1}$ denotes the free Schrödinger operator $H_{0,1}=-\Delta/2$. On the other hand, if $\rho=1/2$, then $H_{0,1/2}$ denotes the massless relativistic Schrödinger operator $H_{0,1/2}=\sqrt{-\Delta}$. We study one of the propagation estimates (Enss-type estimate) for the free dynamics $e^{-itH_{0,\rho}}$ and try to apply this estimate to inverse scattering for $\rho>1/2$ by using the Enss-Weder time-dependent method. We report that the high velocity limit of the scattering operator uniquely determines the short-range interactions. This work was partially supported by the Grant-in-Aid for Young Scientists (B) No.16K17633 from JSPS.
Michal Jex
Revisiting Lieb-Thirring Inequalities
The moment inequalities due to Lieb and Thirring are effective tools in operator theory, especially the one for the sum of the negative eigenvalues of a Schrödinger operator since, by duality, it is equivalent to a lower bound for the kinetic energy of fermions, which is exactly of the right semi-classical Thomas--Fermi type.
Based on ideas of Rumin, we present a novel approach to proving the Lieb-Thirring inequalities for the operator $H=|p|^k-U$ with arbitrary $k>0$ in any dimension $d$. The obtained constants improve on currently known results in all cases, in particular for $k=2$.
The other advantage is that the derived factors relating our inequality to semiclassical ones, that is, the quotient of our constants divided by the semi-classical guess, are uniformly bounded for all $k$ and $d$ by $e$.
We also estimate the number of negative eigenvalues of the operator $H$ in dimension $d>k$. Factoring out the semiclassical estimate on the number of bound states yields a uniformly bounded estimate converging to $e^2$ for large dimensions. These results work for all $k$ and do not use an extension of the bounds to operator-valued potentials or the induction-in-the-dimension trick of Laptev and Weidl, which works only for $k=2$.
This seems to be the first time that one can prove universal bounds without using some type of induction-in-the-dimension argument. However, for $k=2$ one can do this, and we get bounds improving the known bounds for small values of $d$ in this case.
Yukihide Tadano
The University of Tokyo
Long-range scattering theory for discrete Schrödinger operators
In this talk, we consider discrete Schrödinger operators $H=H_0+V$ on periodic lattices, including the square lattice $\mathbb{Z}^d$ and the hexagonal lattice. We prove that one can construct a long-range scattering theory for the pair $H_0$ and $H$ if the perturbation $V$ is a long-range potential. More precisely, we construct time-independent (or Isozaki-Kitada) modifiers $W^\pm(\Gamma)=\operatorname{s-lim}_{t\to\pm\infty} e^{itH} J e^{-itH_0}E_{H_0}(\Gamma)$, where $\Gamma$ is any open set of $\sigma(H_0)$ away from the threshold energies, and prove that they are asymptotically complete. The above modifiers are constructed from a solution of the corresponding eikonal equation on the outgoing and incoming regions of $T^* \mathbb{T}^d$. The proof is analogous to that in the paper by Isozaki and Kitada in 1985; we use the stationary phase method and the Enss method for the proofs of the existence and the completeness of $W^\pm(\Gamma)$, respectively. The proof for the hexagonal lattice is more complicated, because we need the diagonalization of $H_0$ and an additional argument due to the corresponding Hilbert space $\ell^2(\mathbb{Z}^2;\mathbb{C}^2)$.
SESAPS Meeting
76th Annual Meeting of the Southeastern Section of APS
Wednesday–Saturday, November 11–14, 2009; Atlanta, Georgia
Session HB: Advances in Stellar Astronomy
Chair: Russel White, Georgia State University
Room: Frankfurt
HB.00001: Evolution of the Outer Galactic Disk via Chemical Abundance Patterns
Invited Speaker:
I review briefly simple models to explain chemical abundance gradients in the disk of our Milky Way Galaxy, and then discuss the observations of both iron, [Fe/H], where the bracket notation refers to a logarithmic scale and 0.0 represents the abundances in our Sun, and other so-called ``$\alpha$'' elements thought to be produced primarily in Type II supernovae. I compare the results with the simple models, demonstrating unexpected behavior in the outer Galactic disk, and different behaviors in the old star clusters compared to the much younger Cepheid variables. I conclude that the evidence appears to support a steady growth of the Galactic disk over cosmic time.
HB.00002: Surveying the Neighborhood of the Sun
In the spirit of the next human census to be carried out on planet Earth in 2010, the RECONS (Research Consortium On Nearby Stars) group at Georgia State University is gearing up to complete a decadal census of the Sun's neighbors. We'll present the latest results on the stars, brown dwarfs, and planets within 10 parsecs (about 33 light years) of the Sun and place the results into context for our Milky Way Galaxy. When compared to what we knew in 2000, our closest 300+ neighbors are significantly cooler, redder and more populated with planets than 10 years ago.
HB.00003: The Evolutionary History of the R Coronae Borealis Stars
The R Coronae Borealis (RCB) stars are rare hydrogen-deficient carbon-rich supergiants, all apparently single stars, consistent with being post-AGB stars. RCB stars undergo massive declines of up to 8 mag due to the formation of carbon dust at irregular intervals. The mechanism of dust formation around RCB stars is not well understood, but the dust is thought to form in or near the atmosphere of the stars. Their rarity may stem from the fact that they are in an extremely rapid phase of evolution or in an evolutionary phase that most stars do not undergo. Several evolutionary scenarios have been suggested to account for the RCB stars including a merger of two white dwarfs (WDs), or a final helium shell flash in a PN central star. The large overabundance of $^{18}$O found in most of the RCB stars favors the WD merger scenario while the presence of Li in the atmospheres of four of the RCB stars favors the FF scenario. In particular, the measured isotopic abundances imply that many, if not most, RCB stars are produced by WD mergers, which may be the low-mass counterparts of the more massive mergers thought to produce type Ia supernovae. I will present recent visible and IR observations of various RCB stars obtained with HST, Spitzer and ground-based telescopes.
HB.00004: The First Three Years of Science from the CHARA Array
Georgia State University's Center for High Angular Resolution Astronomy designed, built and now operates the CHARA Array on the grounds of Mt. Wilson Observatory in southern California. The Array consists of six 1-m aperture telescopes arranged in a Y-shaped configuration to comprise an interferometer operating in the visible and near infrared. With 15 baselines ranging from 34 to 331 meters, the CHARA Array possesses the longest interferometric baselines in the world. The facility achieved routine science operations in 2005 and emphasizes high spatial resolution measurements of stars to measure such parameters as stellar angular and linear diameters, effective temperatures, and limb darkening. More complicated parameters such as stellar shape, mass, and the presence of surface spots, circumstellar gas and dust in shells and in disks can also be detected. In its first three years of observations, the CHARA Array has a number of ``firsts'', most notably including the first images of the surface of a main sequence star other than the sun and the first imagery of an interacting binary star. This paper will provide an overview of selected scientific results obtained to date.
Is Schatten p-norm a monotone ideal norm?
Let $T$ be a bounded operator between Hilbert spaces and define the Schatten $p$-norm $(p \geq 1)$ \begin{equation*} \sigma_p(T) = \left( \sum_{n=1}^\infty a_n(T)^p \right)^{1/p}, \end{equation*} where $a_n(T)$ are the singular values of $T$.
Suppose that $T \in \mathcal{K}(H_1,H_2)$, $S \in \mathcal{K}(H_1,H_3)$, and \begin{equation*} ||Tx|| \leq ||Sx|| \quad \text{for all} \ x \in H_1. \end{equation*} Does it follow that $ \sigma_p(T) \leq \sigma_p(S)$?
The case $p = 2$ is clear, because the 2-Schatten class consists of the Hilbert-Schmidt operators and \begin{equation*} \sigma_2(T)^2 = \sum_{n=1}^\infty ||Te_n||^2 \leq \sum_{n=1}^\infty ||Se_n||^2 = \sigma_2(S)^2, \end{equation*} where $(e_n)$ is an orthonormal basis of $H_1$.
What can we say if $p \neq 2$?
Javier González
Yes, the $p$-Schatten norm is monotone. This can be seen from the following fact: a mapping $T:H_1\rightarrow H_2$ is of $p$-Schatten class if and only if $$\{||T\psi_j||_{H_2}\}_{j=1}^{\infty}\in\ell^p$$ for all ($2\leq p<\infty$) orthonormal bases / for some ($0<p<2$) orthonormal basis $\{\psi_j\}_j$ of $H_1$, and the $p$-Schatten norm is obtained by maximizing ($2\leq p<\infty$) or minimizing ($0<p<2$) the expression $$\left(\sum_{j=1}^{\infty}||T\psi_j||^p_{H_2}\right)^{\frac{1}{p}}$$ where the maximum or minimum is taken over all orthonormal bases $\{\psi_j\}_j$ of $H_1$. I proved this fact in my master thesis:
https://www.fernuni-hagen.de/analysis/download/diplomarbeit_melech.pdf
You find the proof in chapter 6 (p. 36). Since $$\left(\sum_{j=1}^{\infty}||T\psi_j||^p_{H_2}\right)^{\frac{1}{p}}\leq \left(\sum_{j=1}^{\infty}||S\psi_j||^p_{H_3}\right)^{\frac{1}{p}}$$ for every orthonormal basis, this proves the monotonicity of the $p$-Schatten norm for $0<p<\infty$.
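A quick numerical sanity check of this monotonicity (my own sketch; it constructs $S$ with $S^*S = T^*T + P$ for a random positive semidefinite $P$, which forces $||Tx|| \leq ||Sx||$ for all $x$):

import numpy as np

def schatten(T, p):
    # Schatten p-norm: the l^p norm of the singular values.
    return np.linalg.norm(np.linalg.svd(T, compute_uv=False), ord=p)

rng = np.random.default_rng(1)
n = 6
T = rng.normal(size=(n, n))
A = rng.normal(size=(n, n))
P = A @ A.T                              # random positive semidefinite matrix
w, V = np.linalg.eigh(T.T @ T + P)       # T*T + P plays the role of S*S
S = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T   # PSD square root

for p in (1.0, 1.5, 2.0, 3.0):
    print(p, schatten(T, p) <= schatten(S, p))          # expect True for every p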
Peter Melech
Ok, I found the proof of maximizing for $2 \leq p < \infty$. Is the case $0 < p < 2$ analogous? – Javier González Feb 28 '18 at 17:28
Yes, the minimum is obtained by the orthonormal basis consisting of eigenvectors of $T^*T$ in this case. – Peter Melech Feb 28 '18 at 17:33
$\sum_{j=1}^{\infty}||T\psi_j||^p=\sum_{j=1}^{\infty}(T\psi_j,T\psi_j)^{\frac{p}{2}}=\sum_{j=1}^{\infty}(T^*T\psi_j,\psi_j)^{\frac{p}{2}}=\sigma_p(T)^p$ for this basis. This is actually sufficient for what you want to show, in both cases! – Peter Melech Feb 28 '18 at 17:40
Hilbert Schmidt operators as an ideal in operators.
How to calculate norm of operator in Hilbert space
Is the identity map $id: H^2(-\pi,\pi) \to L^2(-\pi,\pi)$ Hilbert-Schmidt?
Showing that the space of Hilbert-Schmidt operators form a Banach space.
Working on linear maps
Question on why Hilbert-Schmidt operator definition is independent of the choice of basis
If $\|A\|_p \leq \|B\|_p$ does it follow that $\|A\|_q \leq \|B\|_q$ where $\| \cdot \|_p$ is the Schatten $p$-norm.
How to prove $\|AB\|_1\leq\|A\|_2\|B\|_2$ (Trace-class and Hilbert-Schmidt operators)
Does convergence in Hilbert-Schmidt norm imply convergence of singular values?
Relation between Schatten-$p$-norm and $l^p$ norm of operator matrix
Sep 30, 2014, 7:15 PM. A wind turbine, a roaring crowd at a football game, a jet engine running full throttle: Each of these things produces sound
44% of baby boomers have "no concerns" versus 23% of millennials. Respondents, most notably in China and India, were most concerned that 5G will be "too expensive"; concern about battery drainage was a close second. 15. Pneumonitis due to solids and liquids • In 2013, the infant mortality rate was 5.96 infant deaths per 1,000 live births. • The 10 leading causes of infant death were: 1. Congenital malformations, deformations and chromosomal abnormalities (congenital malformations) 2.
Liber Liber audiobooks
Military time notation is based on the 24-hour clock. A time of day is written in the form hhmm, where hh (0-23) stands for full hours that have passed since midnight and mm (00-59) is the number of minutes that have passed since the last full hour. Step 4: Similarly, $x = 15\%$. Step 5: This results in a pair of simple equations: $16000 = 100\%$ (1). After doing this, we subtract z from 1 and convert the total to a percentage: ($70 - $55) / $70 = $15 / $70 = 0.2143 = 21.43% (approximately).
A loan gives you the money you need upfront and lets you spread the cost of paying it back. So whether it's a special holiday, a car or a new kitchen, a personal loan can make it more affordable.
In April 2019, an amendment to the VAT Act came into force. The North American X-15 is a hypersonic rocket-powered aircraft. It was operated by the United States Air Force and the National Aeronautics and Space Administration as part of the X-plane series of experimental aircraft. The X-15 set speed and altitude records in the 1960s, reaching the edge of outer space and returning with valuable data used in aircraft and spacecraft design. The VAT Calculator helps you calculate the VAT to add or subtract from a price, at different rates of VAT. Value Added Tax (VAT) is charged on most goods and services purchased in the UK. Most products are charged at the standard rate of 20%, but some are charged at a reduced rate of 5%, and others are exempt from any VAT charges.
Other 4-digit numbers represent the operating frequency with no changes. CL Timing: CL15 or C15 = CL timing (tCL) is 15 (i.e. CL 15-x-x-x) Modules per kit: S =
Dec 12, 2020 · Well, someone has made Santa's naughty list. Six-year-old George Johnson secretly racked up more than $16,000 in Apple app store charges for his favorite video game, Sonic Forces — leaving his MacBook Pro — our most powerful notebooks featuring fast processors, incredible graphics, Touch Bar, and a spectacular Retina display. Increasing the ratio by five times yields a 5:10:15 ratio, and this can be multiplied by whatever amounts of sugar, flour, and butter are used in the actual cake recipe. Typical Aspect Ratios and Sizes of Screens and Videos.
ARKANSAS, AMENDING THE OFFICIAL ZONING MAP OF THE CITY OF LITTLE ROCK, ARKANSAS; AND FOR OTHER PURPOSES.
It can access areas with limited space while executing the same functions as larger excavators, just on a smaller scale. It's ideal for small- to medium-sized projects that require you to transport large amounts of dirt, rubble or gravel to be either excavated, relocated or In this example, if you buy an item at $1600 with 15% discount, you will pay 1600 - 240 = 1360 dollars. 3) 240 is what percent off 1600 dollars? Solution: Using the formula (b) and replacing given values: Amount Saved = Original Price x Discount in Percent /100. So, 240 = 1600 x Discount in Percent / 100 .
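The quoted formula is simple enough to check mechanically — a minimal sketch in plain Python (the function name is ours):

```python
def amount_saved(original_price, discount_percent):
    # Amount Saved = Original Price x Discount in Percent / 100
    return original_price * discount_percent / 100

saved = amount_saved(1600, 15)   # 240.0
print(1600 - saved)              # 1360.0 -> you pay 1360 dollars after the discount
```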
Convert a percentage to a fraction — converting 2% to a fraction: $2\% = \frac{2\%}{100\%} = 0.02$. Vorel water level with a 15 m hose, model 16000 — from PLN 18.00, price comparison across 4 shops. See other levels, the cheapest and best offers. 2-1/16" DPSS SHIFT-LIGHT, 0-16,000 RPM, LEVEL 2, Z-Series; 2-1/16" FUEL PRESSURE, 0-15 PSI, MECHANICAL, Z-SERIES, Product #2603. $74.95
Jan 16, 2014 · Our divergence time estimates imply that dogs and wolves diverged 14.9 thousand years ago (kya) with 13.9–15.9 kya Bayesian 95% credible interval (CI), assuming an average mutation rate per generation of μ = 1×10^−8 and three years per generation. Divergence times between wolf populations were tightly clustered at 13.4 kya (11.7–15.1
Just a small amount saved every day, week, or month can add up to a large amount over time. One of the best website ever with equation solutions and equations solver for your needs. Solutions for almost all most important equations involving one unknown.
3. If 16000 is 100%, we can write it down as 16000=100%. 4. We know that x is 15% of the output value, so we can write it down as x=15%. 5. Now we have two simple equations: 1) 16000=100% 2) x=15%, where the left sides of both of them have the same units and both right sides have the same units, so we can do something like that: 16000/x = 100%/15%. 6.
Respondents, most notably in China and India, were most concerned that 5G will be "too expensive"; concern about battery drainage was a close second. Show Me the Money. Do you think 5G will be positive for the economy? Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for.
(c) Land and Building (Book value Rs. 12,50,000) sold for Rs. 15,00,000 (h) 'Z', an old customer whose account for Rs. 20,000 was written off as bad in the and cash in full settlement of their claims after allowing a discou They admit Z as a partner with a 1/4th share in the profits of the firm. Z brings in Rs. 80000 as his share of capital. The Profit and Loss Account showed a credit 2014 Chevrolet Camaro LT 2LT. Red Chevrolet Camaro Convertible 2014 1 15 Photos. NEWLY LISTED. Sep 28, 2020 Get answer: X, Y and Z were in partnership sharing profits in proportion to their capitals.
The Jevons Number
Posted on 11 August 2012 by Brian Hayes
I was doing some reading in the history of cryptography when I came upon a reference to a 1996 article by Solomon W. Golomb. Golomb always has something interesting to say, so I had to go off and find the paper, especially since the title was intriguing and a little mysterious: "On Factoring Jevons' Number." That would surely be William Stanley Jevons, 19th-century economist and statistician. I've run into Jevons before—he gets two chapters in Stephen Stigler's Statistics on the Table—but I never knew he had a number.
Golomb's paper lives behind a paywall, so I'll quote at some length from the introductory paragraph:
In his book The Principles of Science: A Treatise on Logic and Scientific Method, written and published in the 1870′s, William S. Jevons observed that there are many situations where the "direct" operation is relatively easy, but the "inverse" operation is significantly more difficult. One example mentioned briefly is that enciphering (encryption) is easy while deciphering (decryption) is hard. In the same section of Chapter 7: Induction titled "Induction an Inverse Operation", much more attention is devoted to the principle that multiplication of integers is easy, but finding the (prime) factors of the product is much harder. Thus, Jevons anticipated a key feature of the RSA algorithm for public key cryptography, though he certainly did not invent the concept of public key cryptography.
Golomb calls attention to the following passage from the Jevons book (p. 123 in the second edition, issued in 1874):
Can the reader say what two numbers multiplied together will produce the number 8,616,460,799? I think it unlikely that anyone but myself will ever know; for they are two large prime numbers, and can only be rediscovered by trying in succession a long series of prime divisors until the right one be fallen upon.
Obviously Jevons did not reckon on rapid progress in computing machinery. In our gigahertz age, even the crudest version of the trial-division algorithm finds the two prime factors of that 10-digit number in milliseconds. And of course Jevons was wrong in suggesting that trial division is the only method possible.
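For the record, here is what the crudest version looks like — a trial-division sketch in Python (my illustration, not Golomb's; for a 10-digit number with a 5-digit smallest factor it needs only a few tens of thousands of loop steps):

```python
def smallest_factor(n):
    """Crudest trial division: try odd divisors up to sqrt(n)."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2          # Jevons's number is odd, so odd trial divisors suffice
    return n

p = smallest_factor(8616460799)
q = 8616460799 // p     # p * q == 8616460799; milliseconds on modern hardware
```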
Golomb's critique of Jevons's claim is even harsher:
With a 10-place hand-held calculator, using only one memory location, and only the operations of subtraction, square and square-root, it took me less than six minutes to factor Jevons' number…. This success led me to consider how easy or difficult it would have been for someone in the 1870′s, using only hand calculation, to have succeeded in finding this factorization. I concluded that at most a few hours, and quite possibly less than an hour, would have been sufficient.
Just in case anyone would like to put Golomb's assertion to the test, I'll refrain from giving the factors here. Sharpen your pencils!
Golomb was not the first to crack the Jevons number. Derrick Lehmer the elder presented a factorization at a 1903 meeting of the American Mathematical Society. He added this note to the published account of the talk: "I think that the number has been resolved before, but I do not know by whom."
All of these results tend to make Jevons look a bit of a fool for so badly misjudging the difficulty of his factoring challenge. And his reputation is somewhat doubtful for another reason as well. Economists make fun of his theory that business cycles and sunspot cycles are causally connected. (Let us quietly ignore the close correlation between the most recent sunspot minimum and the financial unpleasantness of 2008–09.)
I'm not going to take it on myself to rehabilitate Jevons, but I can report that I've spent a few days browsing in his Principles of Science, and I find it charming. Jevons was one of those frightfully prolific Victorian scribblers. He lived only to age 46, but in his short life he published at least a dozen substantial works. Principles of Science runs to almost 800 pages. His aim in this volume is to assemble a complete mental toolkit for doing science, starting with logic (Jevons was an early champion of Boole's Laws of Thought), and proceeding through probability, measurement, experiment and on to various aspects of theory-building (generalization, classification, analogy). A theme that runs through the whole narrative is the importance of inductive inference, which Jevons sees as an inverse problem—how to deduce causes from effects.
Consider the moment when Jevons was writing. Darwin's Origin of Species had been published 15 years before. The debate over corpuscular and undulatory theories of light was in full cry. Phlogiston was long gone, but the luminferous ether was still permeating the universe. The apparent conflict between the antiquity of the earth and the measured flux of heat from the planet's interior was a great puzzlement. (It would not be resolved for another 20 years, with the discovery of radioactivity.) Thermodynamics was beginning to emerge as a science. Reading a contemporary's account of these developments offers a certain voyeuristic excitement. Jevons didn't yet know how these stories were going to turn out.
There's a fair amount of math in the book, but even more noteworthy is how much math isn't in the book. In my perusal of the text, I found not one differential equation or even a use of elementary calculus. The square root of –1 appears just once. Instead of analysis, there is a strong emphasis on areas we would now call discrete mathematics, especially combinatorics and probability, as well as the interface between logic and the foundations of arithmetic. I was quite surprised by this slant. I tend to think of discrete math as a modern enthusiasm, inspired in part by the rise of computer science.
Maybe Jevons should be considered a modern in this respect. He certainly seems to enjoy counting and calculating things. His approach to symbolic logic begins with the enumeration of all possible propositions with a given number of terms. Elsewhere he estimates the number of English words with various combinations of letters, and the number of chemical compounds with a given number of elemental constituents. He performs 20,480 coin tosses to test the law of large numbers. He correctly calculates that 2^2^2^2^2 has 19,729 decimal digits. This is a person who would have been thrilled to have access to the sort of computing machinery we now take for granted. He built an early digital device of his own—a Logic Machine, sometimes called the Logical Piano, for evaluating Boolean formulas.
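Checking that digit count is a one-liner today — exponentiation associates right to left, so the tower equals \(2^{65536}\):

```python
# Python's arbitrary-precision integers make Jevons's calculation instant.
print(len(str(2 ** 2 ** 2 ** 2 ** 2)))   # 19729
```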
Jevons seems to have a particular fascination with permutations, combinations and factorials, bringing them up even in contexts where you might not expect them to have much bearing. For example, there's this curious definition of Euler's number:
At the base of all logarithmic theory is the mysterious natural constant commonly denoted by e, or ε, being equal to the infinite series $$1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \frac{1}{1 \cdot 2 \cdot 3 \cdot 4} + \cdots,$$ and thus consisting of the sum of the ratios between the numbers of permutations and combinations of 0, 1, 2, 3, 4, &c. things. [p. 330]
The formula \(e = \sum{1/n!}\) is standard, but what's that about ratios of permutations and combinations? Do they have anything to do with the value of e? (I'm not even sure what ratios are being summed.)
Another instance:
It is worth noting that this Law of Error, abstruse though the subject may seem, is really founded upon the simplest principles. It arises entirely out of the difference between permutations and combinations, a subject upon which I may seem to have dwelt with unnecessary prolixity in previous pages. [p. 383]
Is the law of error—the normal distribution—entirely a matter of permutations and combinations?
And Jevons returns to the same topic a third time at the very end of his final chapter, as he is summing up his views on life, the universe and everything.
There formerly seemed to me to be something mysterious in the denominators of the binomial expansion (p. 190), which are reproduced in the natural constant ε, or $$1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \cdots$$ and in many results of mathematical analysis. I now perceive, as already explained (pp. 33, 160, 383), that they arise out of the fact that the relations of space do not apply to the logical conditions governing the numbers of combinations as contrasted to those of permutations. [p. 769]
I have pursued the internal references that Jevons cites here, but I confess that what he formerly found mysterious is still wholly mysterious to me. If it makes sense to anyone else, I hope you'll enlighten me.
One final digression. The full text of Principles of Science is available on Google Books. I started out reading it there, but eventually decided that ink and paper still have some advantages when it comes to 800-page tomes. So, being lucky enough to have library privileges, I went spelunking through the dim stacks of the Widener Library at Harvard and borrowed a copy. I soon realized it was the very copy that had been scanned by Google Books. The evidence lies in the distinctive marginal notes made by the volume's original owner. For example, on pages 194 and 195 we find Jevons and his pencil-wielding reader disagreeing over the value of 2^2^2^2. (Jevons is correct; the penciled corrections are not.)
A signature and stamp at the front of the book reveal the identity of the marginalist. He was George F. Swain, who held the Gordon McKay professorship at Harvard from 1909 until his death in 1931. There's a biography among the publications of the National Academy of Sciences (but the link seems wonky; I had to hunt down Google's cached copy). Swain was a civil engineer (now an extinct species at Harvard) who seems to have signed off on just about every railroad bridge in the Commonwealth of Massachusetts and a few other bridges farther afield (e.g., the Golden Gate). But evidently Swain also had more abstract interests. In the NAS biography a former student comments: "Logical reasoning was constantly emphasized, and I well remember his earnest recommendation that we procure copies of Jevon's [sic] 'Logic' and master its contents."
Reading over the shoulder of Professor Swain adds one more layer to the palimpsest. At the bottom we have Jevons himself, deeply embedded in the world of late-19th-century science, speaking familiarly of his contemporaries Professor Maxwell and Dr. Joule and Mr. Venn. Then Swain enters with his itchy annotator's pencil, calling attention to those passages that still resonated 50 years later—and showing occasional impatience with ideas that no longer seemed so compelling. And we have our own knowing, modern perspective, you and I and Solomon Golomb, with all our computational power tools.
Update 2012-09-15: As mentioned above, Derrick Lehmer's 1903 paper on the Jevons number includes the footnote: "I think that the number has been resolved before, but I do not know by whom." Josh Jordan may have now answered the "by whom" question. He searched Google Books for "8616460799" and turned up an 1889 article in Nature by Charles J. Busk that presents a factoring algorithm and illustrates its application with the Jevons number. Busk claims that his scheme is "different from any previously tried," but Jordan points out that in fact Busk has reinvented Fermat's method, based on expressing an odd product of two factors as \(a^2 - b^2 = (a+b)(a-b)\). Fermat described the method in 1643.
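For anyone who wants to follow in Busk's and Golomb's footsteps, Fermat's method fits in a few lines (a sketch of mine, not Busk's procedure verbatim): search upward from \(\lceil\sqrt{n}\rceil\) for an \(a\) making \(a^2 - n\) a perfect square.

```python
from math import isqrt

def fermat_factor(n):
    """Fermat's method for odd n: find a with a^2 - n a perfect square."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b   # n = (a - b)(a + b)
        a += 1

# fermat_factor(8616460799) finishes in a few dozen iterations --
# sharpen your pencils, or run it yourself; I won't spoil the factors here.
```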
But perhaps we shouldn't be too hard on Charles Busk. A little more Googling turns up an 1898 discussion of Busk's ideas in Mathematical Questions and Solutions from the Educational Times (the MathOverflow of the 19th century). None of the other contributors mention the Fermat precedent either.
This entry was posted in books, computing, mathematics, science, statistics.
16 Responses to The Jevons Number
Maybe the thing about e is derangements, which are roughly 1/e as numerous as permutations?
svat says:
I think this is what he means. Fix a large number m.
The number of "permutations" of 4 objects out of these m is m(m-1)(m-2)(m-3), while
the number of "combinations" of 4 objects out of these m is m(m-1)(m-2)(m-3)/4!.
Thus the ratio between the number of these "permutations" and "combinations" is 4!. In general, to use the notation I learnt at school but don't like, nPk/nCk = k!.
Supporting evidence is found in his sentence that "the fact that the relations of space do not apply to the logical conditions governing the numbers of combinations as contrasted to those of permutations" — I think by "relations of space do not apply", he means "order does not matter" (for combinations as contrasted with permutations).
About the Law of Error / the normal distribution being something that "arises entirely out of the difference between permutations and combinations", my guess is that again he means "ignoring order", and is thinking of something like the central limit theorem: in a sequence of N coin tosses, each particular sequence of results (HHTHTT, etc.) has probability 1/2^N, but if you ignore the order and consider only the "combination" (reporting it as the number of heads), you get the normal distribution (in the limit for large N).
Boaz Barak says:
This is fascinating. Jevons may have been factually wrong in choosing too small a number, but he was morally right and quite prescient in understanding that factoring an integer is much harder than multiplying.
While the main ingredient in inventing public key cryptography was Diffie, Hellman and Merkle's realization that this is a valid question to be asked, I believe it took some time until they realized that number theory could be a source for problems useful for it. My understanding is that it was Hellman's colleague John Gill who suggested modular exponentiation as an easy to compute but hard to invert problem.
About that expansion of e, my guess is somewhat similar to Svat's. Consider the definition \(e = \lim_{n\rightarrow\infty} (1+1/n)^n\). If you take some large \(n\) and look at \((1+1/n)^n = (1+1/n) (1+1/n) \cdots (1+1/n)\), and open up the parentheses, then the term corresponding to taking \(k\) times the factor \(1/n\) and the rest of the times the factor \(1\) will be \(\binom{n}{k}/n^k\). In other words, it's the ratio between choosing \(k\) unordered elements from a universe of size \(n\) without repetition and choosing \(k\) ordered elements from this universe with repetition. As \(n\) goes to infinity, the issue of repetitions is negligible, and hence this ratio becomes just the number of orderings of \(k\) elements, which is \(k!\), thus giving rise to the formula \(e = \sum_{k=1}^{\infty} \tfrac{1}{k!}\).
My thanks to the commenters, who have cleared away some cobwebs (in my mind, not Jevons's).
The suggestion of a connection with derangements is really intriguing, but I'm afraid I can find no evidence in the text that Jevons was thinking along those lines.
Michael Lugo says:
Based on the peculiar set of operations he mentions, I suspect Golomb used Fermat's factorization method.
@ Michael Lugo: Yes, Golomb did use the Fermat method. So did Derrick Lehmer in 1903.
Barry Cipra says:
A small typo: In the first excerpt from Jevons, you quote him as saying "…the numbers of permutations and combinations of 0, 1, 2, 4, &c. things." This gave me pause, so I went and looked. You left out the "3."
–Fixed. Brian
George Marrows says:
This essay is a wonderful mix of detail and historical sweep, beautifully written. Thank you.
Ragupathy says:
Really nice article. I enjoyed reading all these titbits about Jevons which I never knew. Thanks.
What neither Jevons nor I anticipated was that I would do all my day-to-day calculation with Wolfram Alpha. Whether checking a sum or 'factor 8,616,460,799', the site is a marvel.
wheels says:
This is fascinating to me, because my mother's maiden name was Jevons, and my late father once told me that we were related through her to a "Victorian economist who tried to tie the London stock market to sunspot cycles."
This article gives me more information than I had about him, and I'm pleased to have found it.
ShreevatsaR says:
A couple of (very!) inconsequential remarks on the factorization of the Jevons number.
Firstly, the article by Busk ("To find the Factors of any Proposed Number"): as the preview on Google Books is only available in the US (I guess), let me note that the article is also available on archive.org or in djvu format here (dolnośląska Biblioteka Cyfrowa / Lower Silesian Digital Library).
Secondly, the slides "Algorithmic Number Theory Before Computers" by Jeffrey Shallit contain on p. 6 a timeline, prepared in 2000, that mentions both the 1874 claim by Jevons that nobody else "will ever know" the factorization of the number, and the 1889 factorization by C. J. Busk. Looking at that is what reminded me of this post.
Shawn Van Ittersum says:
Jevons: "I now perceive, as already explained (pp. 33, 160, 383), that they arise out of the fact that the relations of space do not apply to the logical conditions governing the numbers of combinations as contrasted to those of permutations."
Perhaps Jevons was making a philosophical statement here, with "space" referring to a three-dimensional space, which at any instant can only have one permutation or another, but not multiple combinations simultaneously.
Peter Hayes says:
I wasted an afternoon as an undergraduate in the early 1980s cracking Jevons' number by hand with the help of a 19th century book of squares tables.
Doesn't sound like a wasted afternoon to me! | CommonCrawl |
A simple and effective algorithm for the maximum happy vertices problem
Marco Ghirardi ORCID: orcid.org/0000-0002-4222-83751 &
Fabio Salassa1
TOP volume 30, pages 181–193 (2022)Cite this article
In a recent paper, a solution approach to the Maximum Happy Vertices Problem has been proposed. The approach is based on a constructive heuristic improved by a matheuristic local search phase. We propose a new procedure able to outperform the previous solution algorithm both in terms of solution quality and computational time. Our approach is based on simple ingredients, employing an approximation algorithm as the starting-solution generator and a new matheuristic local search as the improvement phase. The procedure is then extended to a multi-start configuration, able to further improve the solution quality at the cost of an acceptable increase in computational time.
Vertex coloring problems are one of the most popular and extensively studied subjects in the field of graph theory. They have received wide attention in the literature, not only for their real-world applications but also for their theoretical aspects and computational hardness (Malaguti and Toth 2010). Traditional vertex coloring problems consist of coloring all vertices of a graph G in such a way that any pair of adjacent vertices is labeled with different colors. Recently, interest has also been devoted to vertex coloring problems where the coloring of adjacent vertices is desired to be the same. This is the case of the Maximum Happy Vertices Problem (MHV) considered in this paper. Given a set of precolored vertices, the problem asks to extend the coloring to the remaining vertices with the objective of maximizing the number of nodes colored with the same color as all of their adjacent vertices.
The MHV problem and the concept of "happiness" related to vertices have been proposed in Zhang and Li (2015). A vertex is considered happy if all its neighbors share its color. The problem objective is the maximization of the number of happy vertices.
More formally, the MHV problem considers an undirected graph \(G = (V, E)\) with n vertices and m edges (with \(\varGamma (i)\) defined as the set of neighbors of vertex i), a color set \(K=\{ 1,\ldots ,k\}\), a subset of vertices \(A \subseteq V\) where \(|A| \ge k\) and a partial coloring \(c : A \rightarrow \{1,\dots ,k\}\) such that \(\forall \ i \in \{1,\dots ,k\}, \exists \ v \in A : c(v) = i\). The problem asks to extend the coloring c to the remaining non-precolored vertices to a complete graph coloring \(\bar{c} : V \rightarrow \{1,\dots ,k\}\) such that the total number of happy vertices is maximized.
In a recent paper Lewis et al. (2019), the MHV problem has been addressed and a solution approach based on the Construct, Merge, Solve & Adapt (CMSA) framework of Blum et al. (2016) has been applied to deal with 380 computationally hard instances.
The problem has also been tackled from a theoretical point of view, see the proof of NP-hardness in Zhang and Li (2015), approximation algorithms in Zhang et al. (2018) and complexity results in Agrawal (2017) and Aravind et al. (2016), where polynomial algorithms for simple special cases have been proposed.
From a computational perspective, to the best of our knowledge, the work of Lewis et al. (2019) is the first attempt to propose solution procedures dealing with large-size instances. Moreover, the authors of Lewis et al. (2019) made freely available both the instance generator Lewis et al. (2018a, b) and the source code (except the part related to the mixed integer linear programming solver GUROBI) Lewis et al. (2019). The work of Lewis et al. (2019) proposes a hybrid heuristic approach, based on a constructive heuristic improved by a matheuristic local search phase.
Matheuristics are solution methods that have been successfully applied to several combinatorial optimization problems [see for instance Ball (2011), Della Croce et al. (2013)], giving rise to an impressive amount of research in recent years. Matheuristics have been applied to routing (Macrina et al. 2019; Shahmanzari et al. 2020), packing (Billaut et al. 2015; Martinez-Sykora et al. 2017), rostering (Della Croce and Salassa 2014; Doi et al. 2018), lot sizing (Ghirardi and Amerio 2019) and machine scheduling (Della Croce et al. 2014, 2019; Fanjul-Peyro et al. 2017), just to cite a few of them. Matheuristics rely on the general idea of exploiting the strengths of both metaheuristic algorithms and exact methods.
In the present work, we developed a simple but effective matheuristic algorithm, along the same line of CMSA in Lewis et al. (2019), to deal with the Maximum Happy Vertices Problem. The proposed matheuristic algorithm is based on an overarching neighborhood search approach with an intensification search phase realized by a MILP solver. The main advantages of our approach, with respect to the one of Lewis et al. (2019), are:
Better performances in terms of solution quality,
Much better performances in terms of computational times (few seconds against 1 h),
Simple design of the solution procedure,
Simple integration in a multi-start version able to further improve the solutions quality.
The paper is organized as follows. In Sect. 2, the integer linear programming formulations of the problem are provided. Section 3 is devoted to the description of the proposed solution algorithms. In Sect. 4, computational results and benchmarks are presented. Section 5 concludes the paper with final remarks.
MIP models
Two mixed integer linear programming formulations are provided in Lewis et al. (2019).
In the first model (M1), integer variables \(x_i \in \{1,\ldots , k\}\) define the color assigned to each vertex, while variables \(y_i\) are forced to be one only if vertex i is unhappy. The set A represents the set of precolored vertices while c(i) is the color assigned to each vertex in A. Recall that \(\varGamma (i)\) is defined as the set of neighbors of vertex i. The overall model is:
$$\begin{aligned}&\displaystyle \max n - \sum _{i=1}^{n} y_i \end{aligned}$$
$$\begin{aligned}&\text{ subject } \text{ to: } \nonumber \\&x_i = c(i)&\qquad \forall v_i \in A \end{aligned}$$
$$\begin{aligned}&y_i \ge \frac{|x_i-x_j|}{n}&\qquad \forall i \in V, \forall j \in \varGamma (i) \end{aligned}$$
$$\begin{aligned}&x_i \in \{1,\ldots , k\}&\qquad \forall i \in V \end{aligned}$$
$$\begin{aligned}&y_i \in \{0,1\}&\qquad \forall i \in V \end{aligned}$$
where (1) maximizes the number of happy vertices, (2) assigns colors to all the precolored vertices, (3) sets \(y_i=1\) for unhappy vertices. (4) and (5) define the optimization variables.
Note that constraints (3) are not linear, and hence they require a linearization [not made explicit in Lewis et al. (2019)], which results in their substitution with:
$$\begin{aligned}&y_i \ge \frac{x_i-x_j}{n}&\qquad \forall i \in V, \forall j \in \varGamma (i) \end{aligned}$$
$$\begin{aligned}&y_i \ge \frac{x_j-x_i}{n}&\qquad \forall i \in V, \forall j \in \varGamma (i) \end{aligned}$$
The second model (M2) uses binary variables \(x_{ij}\) where \(x_{ij}=1\) if and only if color j is assigned to vertex i. Variables \(y_i\) have the same meaning as in the first model.
$$\begin{aligned}&\displaystyle \max n - \sum _{i=1}^{n} y_i \end{aligned}$$
$$\begin{aligned}&\text{ subject } \text{ to: } \nonumber \\&x_{ij} = 1&\qquad \forall i \in A : c(i) = j \end{aligned}$$
$$\begin{aligned}&\sum _{j=1}^{k} x_{ij} = 1&\qquad \forall i \in V \end{aligned}$$
$$\begin{aligned}&y_i \ge |x_{ij}-x_{lj}|&\qquad \forall i \in V, \forall l \in \varGamma (i), \forall j \in K \end{aligned}$$
$$\begin{aligned}&x_{ij} \in \{0,1\}&\qquad \forall i \in V , \forall j \in K \end{aligned}$$
$$\begin{aligned}&y_i \in \{0,1\}&\qquad \forall i \in V \end{aligned}$$
Here, (8) maximizes the number of happy vertices, constraints (9) specify the precolorings, constraints (10) ensure that exactly one color is assigned to each vertex, and constraints (11) force \(y_i = 1\) if vertex i is unhappy. (12) and (13) define the optimization variables.
As before, constraints (11) are not linear, and we propose the following linearization:
$$\begin{aligned}&y_i \ge x_{ij}-x_{lj}&\qquad \forall i \in V, \forall l \in \varGamma (i), \forall j \in K \end{aligned}$$
$$\begin{aligned}&y_i \ge x_{lj}-x_{ij}&\qquad \forall i \in V, \forall l \in \varGamma (i), \forall j \in K \end{aligned}$$
We also point out that the \(y_i\) variables, for this second model, do not necessarily need to be defined as binary. It is, in fact, sufficient to define them as \(0 \le y_i \le 1\), given that they are bounded from below by (14) and (15) and pushed to their lower bound by the objective, so that \(y_i\) takes value 0 or 1 in any optimal solution.
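For the sake of concreteness, the linearized model M2 can be sketched as follows (the PuLP modeling layer is our choice for illustration only; adj[i] encodes \(\varGamma (i)\) and pre maps each precolored vertex to its color):

```python
import pulp

def solve_m2(n, k, adj, pre):
    prob = pulp.LpProblem("MHV_M2", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (range(n), range(k)), cat="Binary")
    y = pulp.LpVariable.dicts("y", range(n), lowBound=0, upBound=1)  # continuous suffices
    prob += n - pulp.lpSum(y[i] for i in range(n))            # objective (8)
    for i, j in pre.items():
        prob += x[i][j] == 1                                  # precolorings (9)
    for i in range(n):
        prob += pulp.lpSum(x[i][j] for j in range(k)) == 1    # one color per vertex (10)
        for l in adj[i]:
            for j in range(k):
                prob += y[i] >= x[i][j] - x[l][j]             # linearization (14)
                prob += y[i] >= x[l][j] - x[i][j]             # linearization (15)
    prob.solve()
    return [max(range(k), key=lambda j: x[i][j].value()) for i in range(n)]
```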
Despite the fact that Lewis et al. (2019) report model M2 to be far less reliable w.r.t. solution quality, we tested it on all instances using CPLEX 12.7 as MIP solver and found that 80 instances out of the whole dataset of 380 instances were solved to optimality. All instances with 250, 500, 750 and 1000 nodes and \(k = 10\) were solved to optimality within the time limit of 3600 s, the same time limit used in Lewis et al. (2019). Thus, in our experiments, model M2 outperforms model M1. From now on, we use model M2 in our algorithms since, overall, it gives better solutions than model M1 within the same time limit.
A simple solution approach
We propose here a simple but effective matheuristic improvement approach. Starting from a given solution, the algorithm iteratively improves it with a scheme based on the neighborhood search approach. Each iteration explores the neighborhood by constructing a problem where the variables to be optimized refer to a subset of the variables of the original problem, while the other ones are fixed to the values they have in the current solution. The detailed procedure is described in Algorithm 1. The algorithm starts with a given feasible solution \(\bar{c}\) (step 1). A counter \(no\_improvement\) of iterations passed without finding an improving solution is set to 0 (step 2). At each iteration of the main loop (cycle 3–16), a subset of candidate colors for each node is selected and an exact method is employed to build a possibly improved solution. Given the current solution (steps 4–8), the candidate colors for each node i are its current color and all the colors assigned to nodes connected to i by a path of length at most L, i.e., the colors of nodes that can be reached from i along a path of at most L edges (see the sketch below). The resulting problem is then optimally solved through model (8)–(13), obtaining solution \(\bar{c}'\) (step 9). If the new solution is better than the previous one, the counter of non-improving iterations is reset to 0 (step 11). Otherwise, it is increased by one (step 13). Note that solution \(\bar{c}'\) cannot be worse than the current solution \(\bar{c}\) because the latter is a feasible solution of the ILP model. Hence, it is always accepted as the new current solution (step 15). The improvement phase is repeated if fewer than S iterations have been performed without improving the current solution.
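The candidate-color computation of steps 4–8 can be sketched as follows (our illustrative code, not the original implementation; colors holds the current solution \(\bar{c}\) and adj the adjacency lists):

```python
from collections import deque

def candidate_colors(i, adj, colors, L):
    # Collect the current color of i plus every color reachable within L edges.
    cand, seen, frontier = {colors[i]}, {i}, deque([(i, 0)])
    while frontier:
        v, d = frontier.popleft()
        if d == L:
            continue
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                cand.add(colors[w])
                frontier.append((w, d + 1))
    return cand
```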
The CMSA solution approach proposed in Lewis et al. (2019) is based on an improvement scheme similar to the one we propose, but with a different neighborhood definition. We highlight here the main differences:
In CMSA, candidate colors for each node i to be chosen for reoptimization are only the color of i plus, with a given probability, a subset of the colors of the neighbor nodes of i, while our matheuristic procedure considers, as possible candidates for each node i, all colors of the nodes that could be reached from node i with a path of a given length. Hence, the neighborhood dimension of the proposed algorithm is larger than the one of CMSA.
CMSA uses model M1, while in our case, model M2 has been selected. This choice does not affect the algorithm results in terms of solution quality (all ILPs are solved to optimality) but influences the running time.
In Lewis et al. (2019), two constructive methods are proposed for the initial solution generation, namely \(Greedy-MHV\) and \(Growth-MHV\). We point out that \(Greedy-MHV\) is the same procedure as the approximation algorithm \(\mathcal {G}\) proposed in Zhang et al. (2018). The approximation algorithm \(\mathcal {G}\) has been used in our approach as the starting solution. During preliminary testing, we tried taking the best among algorithm \(\mathcal {G}\) (a.k.a. \(Greedy-MHV\)) and \(Growth-MHV\), but we observed no improvement in solution quality.
The rationale of algorithm \(\mathcal {G}\) of Zhang et al. (2018) is to label all the uncolored vertices with the same color, testing all k possible colors and obtaining, in this way, k different vertex colorings. The starting solution is then chosen among the k colorings as the one exhibiting the largest number of happy vertices (Algorithm 2); a sketch is given below.
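A minimal sketch of algorithm \(\mathcal {G}\) (our illustration; pre maps precolored vertices to their fixed colors, and a vertex with no neighbors counts as happy by definition):

```python
def greedy_mhv(n, k, adj, pre):
    # Try painting every free vertex with the same color c, for each c;
    # keep the coloring with the most happy vertices.
    def happy(colors):
        return sum(all(colors[w] == colors[v] for w in adj[v]) for v in range(n))

    best, best_h = None, -1
    for c in range(k):
        colors = [pre.get(v, c) for v in range(n)]
        h = happy(colors)
        if h > best_h:
            best, best_h = colors, h
    return best, best_h
```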
To further assess the quality of the proposed approach, we also tested the matheuristic improvement algorithm in a multi-start setting. In this configuration, the improvement procedure depicted in Algorithm 1 is applied not only to the solution obtained by the best coloring of algorithm \(\mathcal {G}\), but to all k different vertex colorings. The best final result is then returned. Algorithm 3 summarizes the main steps of the multi-start procedure.
Computational experiments
We decided to test different configurations of algorithms over a dataset generated thanks to the instance generator in Lewis et al. (2018a, b). According to Lewis et al. (2019), instances were generated as random graphs using values of \(p = 5 / (n - 1)\), where n is the total number of nodes and p the probability of two vertices being adjacent. This value of p induces an average vertex degree of 5. In all instances, \(10\%\) of the vertices were precolored. Authors of Lewis et al. (2019) state that these configurations lead to the creation of the most difficult-to-solve instances. As in Lewis et al. (2019), we considered classes of instances with a number of colors k equal to 10 or 50 on graphs having a number of nodes n of 500, 750, 1000, 2000, 3000, 4000, 5000, 7500 and 10000. Since the solver is able to solve to optimality four classes (namely all the ones with \(k=10\) and with n equal to 250, 500, 750 and 1000), these were not considered in our dataset. For each of the remaining 15 classes, 20 instances were generated. For tuning the algorithm parameters, we generated an additional smaller dataset, composed of 10 instances for each of the classes with k equal to 10 or 50 and n equal to 3000, 5000 and 10000.
The algorithms have been implemented in C++ and the source code is available upon request to the authors. All tests have been performed on an i5-8500 3 GHz CPU system with 16 GB of RAM and CPLEX 12.7 as MIP solver. The CPLEX solver has been applied with no parameter tuning and in multi-threaded mode.
The following two subsections present the results of the experiments aiming to tune the algorithm parameters, and the comparison between the results of the proposed algorithms and CMSA, proposed in Lewis et al. (2019).
Parameter tuning
In order to tune the values of parameters L and S of Algorithm 1, a set of computational experiments has been performed.
Table 1 summarizes the results. For each class of instances (k colors and n nodes), we present the average percentage of happy nodes \(H\%\) and computational time T over the 10 tuning instances, with different parameters values L and S. The best entries for each line are highlighted in bold.
Parameter L defines the neighborhood size, and has been considered equal to 1 or 2. Setting a value of 3 or more will result in the creation of ILP models with too many free variables, sometimes exceeding a time limit of 3600 s without finding the optimal solution. It is clear from the table that the best choice is \(L=2\), having better results at cost of an acceptable increase of computational time.
Parameter S configures the algorithm stopping criterion and ranges from 1 to 3. While an improvement is clear in the results obtained increasing S from 1 to 2, the results are, for most instances, the same when \(S=3\). Hence, we decided to set \(S=2\).
Table 1 Improvement procedure algorithm parameter tuning
Algorithms results comparison
As previously pointed out, the authors of Lewis et al. (2019) made freely available the source code of their algorithms except the part related to the mixed integer linear programming model and solver. Hence, in order to benchmark our procedure against the reference algorithm CMSA, we re-implemented it, integrating their source code with a mixed integer linear programming model. In the description of CMSA, it is not clear how single-color labels could be efficiently excluded from the list of possible colors, since the variables used are of integer type (model M1 is used) and no constraint sets (e.g., disjunctive constraints) are made explicit to deal with value exclusion. Hence, we contacted the authors of Lewis et al. (2019) asking for details on the CMSA implementation, which is slightly different from the published paper. Thanks to their help, we reconstructed CMSA as originally implemented. Each time the LP model M1 is run, the following rules are used:
If a node \(\bar{i}\) has only one candidate color \(\bar{c}\), the corresponding variable is set to that color (\(x_{\bar{i}} = \bar{c}\)).
If a node \(\bar{i}\) has more candidate colors, the corresponding variable is left free to get any value (\(x_{\bar{i}} \in \{1,...,k\}\)).
For other details about CMSA refer to Lewis et al. (2019). On the other hand, excluding values using model M2, as in our matheuristic, is rather simple: it is, in fact, sufficient to add constraints such as \(x_{\bar{i}\bar{j}} = 0\) whenever we want to prevent node \(\bar{i}\) from being labeled with color \(\bar{j}\).
We tested the following approaches:
CPLEX: Lower Bound and Upper Bound after 3600 s calculated by CPLEX solver with model M1.
CMSA: original CMSA using as starting solution the best among \(Greedy-MHV\) and \(Growth-MHV\) with a time limit of 3600 s [as in Lewis et al. (2019)]. Considering that CMSA is not a deterministic algorithm, we present here the best result obtained with 10 different executions.
MH-G: matheuristic algorithm 1, configured with \(L=2\) and \(S=2\), using as starting solution the approximation Algorithm \(\mathcal {G}\).
MS: multi-start version of the procedure, depicted in algorithm 3.
Table 2 summarizes the results. The meaning of the columns of Table 2 is the following:
Column 1: number of different colors k.
Column 2: number of nodes n of the specific class of instances.
Column 3: percentage of "happy" vertices w.r.t the total number of nodes of the upper bound provided by CPLEX after 3600 s of run.
Column 4: percentage of "happy" vertices w.r.t the total number of nodes of the lower bound provided by CPLEX after 3600 s of run.
Column 5: average values of the percentage of "happy" vertices given by the CMSA approach after 3600 s—best of 10 executions.
Column 6: average values of the percentage of "happy" vertices given by the proposed \(MH-G\) algorithm (bold characters if \(MH-G\) is better than CMSA).
Column 7: average maximum CPU time needed to compute the result of \(MH-G\) algorithm, in seconds.
Column 8: average values of the percentage of "happy" vertices given by the MS configuration of the proposed algorithm (bold characters if MS is better than \(MH-G\)).
Column 9: average maximum CPU time needed to compute the result of MS procedure, in seconds.
Table 2 Computational results: algorithms results comparison
Table 3 Number of improved instances with respect to CMSA algorithm
As can be seen, the simple proposed approach \(MH-G\) outperforms CMSA both in terms of solution quality and CPU effort. We recall that the stopping criterion used in Lewis et al. (2019) is the time limit of 3600 s. Our approach gives better results in about two orders of magnitude less CPU time. Moreover, with algorithm MS, we gain even more solution quality, still largely within the 3600 s limit. These results illustrate the effectiveness of our approach, which improves on the current literature.
To further assess the effectiveness of our approaches, Table 3 is reported. Even if the average improvements in objective function values may seem limited, the number of improvements is unambiguous. Here, Columns 1 and 2 are the same as in Table 2, while Column 3 reports the number of instances per class. Columns 4 and 5 report the number of instances improved with respect to the CMSA procedure by the \(MH-G\) and MS algorithms, respectively. As can be seen, apart from one case where the number of improved instances is very limited, the \(MH-G\) (and consequently MS) approaches consistently improve over CMSA. It is important to note that there are no instances where CMSA is better than any of our approaches. Globally, we improved 180 out of 300 instances, and we point out that our approach is consistently better on the larger-size instances. This again confirms the effectiveness of the proposed approach.
A simple procedure has been developed to deal with the Maximum Happy Vertices Problem. A starting solution obtained via an approximation algorithm is improved through a large-scale neighborhood exploration performed with an MILP formulation of the problem. The procedure is then extended to a multi-start configuration. Both approaches have been tested over 300 instances generated as in the literature and compared with a reference algorithm, namely CMSA from Lewis et al. (2019). Solution quality and very limited running times confirm the effectiveness of our approach, which is based on simple ingredients and improves on the current literature.
Agrawal A (2017) On the parameterized complexity of happy vertex coloring. In IWOCA
Aravind N, Kalyanasundaram S, Anjeneya SK (2016) Linear time algorithms for happy vertex coloring problems for trees. In IWOCA
Ball M (2011) Heuristics based on mathematical programming. Surv Oper Res Manag Sci 16:21–38
Billaut J-C, Della Croce F, Grosso A (2015) A single machine scheduling problem with two-dimensional vector packing constraints. Eur J Oper Res 243:75–81
Blum C, Pinacho P, López-Ibáñez P, Lozano JA (2016) Construct, merge, solve and adapt a new general algorithm for combinatorial optimization. Comput Oper Res 68:75–88
Della Croce F, Grosso A, Salassa F (2019) Minimizing total completion time in the two-machine no-idle no-wait flow shop problem. J Heuristics 1:1–15
Della Croce F, Salassa F (2014) A variable neighborhood search based matheuristic for nurse rostering problems. Ann Oper Res 218:185–199
Della Croce F, Grosso A, Salassa F (2013) Matheuristics: embedding milp solvers into heuristic algorithms for combinatorial optimization problems. Heuristics: theory and applications. Nova Science Publishers, New York, pp 31–52
Della Croce F, Grosso A, Salassa F (2014) A matheuristic approach for the two-machine total completion time flow shop problem. Ann Oper Res 213:67–78
Doi T, Nishi T, Voß S (2018) Two-level decomposition-based matheuristic for airline crew rostering problems with fair working time. Eur J Oper Res 267:428–438
Ghirardi M, Amerio A (2019) Matheuristics for the lot sizing problem with back-ordering, setup carry-overs, and non-identical machines. Comput Ind Eng 127:822–831
Macrina G, Di Puglia Pugliese L, Guerriero F, Laporte G (2019) An energy-efficient green-vehicle routing problem with mixed vehicle fleet, partial battery recharging and time windows. Eur J Oper Res 276:971–982
Lewis R, Thiruvady D, Morgan K (2018a) Algorithm source code. http://www.rhydlewis.eu/resources/happyalgs.zip. Accessed date 7 Jun 2021
Lewis R, Thiruvady D, Morgan K (2018b) Problem instance generator. http://www.rhydlewis.eu/resources/happygen.zip. Accessed date 7 Jun 2021
Lewis R, Thiruvady D, Morgan K (2019) Finding happiness: an analysis of the maximum happy vertices problem. Comput Oper Res 103:265–276
Fanjul-Peyro L, Perea F, Ruiz R (2017) Models and matheuristics for the unrelated parallel machine scheduling problem with additional resources. Eur J Oper Res 260:482–493
Malaguti E, Toth P (2010) A survey on vertex coloring problems. Int Trans Oper Res 17:1–34
Martinez-Sykora A, Alvarez-Valdés R, Bennell J, Ruiz R, Tamarit JM (2017) Matheuristics for the irregular bin packing problem with free rotations. Eur J Oper Res 258:440–455
Shahmanzari M, Aksen D, Salhi S (2020) Formulation and a two-phase matheuristic for the roaming salesman problem: application to election logistics. Eur J Oper Res 280:656–670
Zhang P, Li A (2015) Algorithmic aspects of homophyly of networks. Theor Comput Sci 593:117–131
Zhang P, Xu Y, Li A, Lin G (2018) Improved approximation algorithms for the maximum happy vertices and edges problems. Algorithmica 80:1412–1438
Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement.
DIGEP, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
Marco Ghirardi & Fabio Salassa
Marco Ghirardi
Fabio Salassa
Correspondence to Marco Ghirardi.
Ghirardi, M., Salassa, F. A simple and effective algorithm for the maximum happy vertices problem. TOP 30, 181–193 (2022). https://doi.org/10.1007/s11750-021-00610-4
Issue Date: April 2022
Happy coloring
Matheuristics
Mathematics Subject Classification
90C27 Combinatorial Optimization
90C11 Mixed Integer Programming
90C59 Approximation methods and heuristics in mathematical programming | CommonCrawl |
Chapter 9: Critical Incidents
(True or False) In special circumstances, the U.S. Department of Transportation (DOT) does not require placards.
Incident Command System (ICS)
A system implemented to manage disasters and mass-casualty incidents in which section chiefs, including finance, logistics, operations, and planning, report to the incident commander.
(True or False) ICS has helped officers throughout both Florida and the nation handle situations, such as large vehicle crashes, hurricanes, wildfires, large social gatherings, and missing persons.
When acting as part of the initial response to an incident, you should obtain the necessary information from dispatch and immediately do the following:
Identify the type of incident or threat.
Determine the appropriate Personnel Protective Equipment (PPE).
Establish the ICS.
Set up a command post.
Determine the resources needed, including the assistance of other agencies.
Determine whether to shelter in place or evacuate (with evacuation routes and collection points).
The FBI defines an ___ as one or more individuals participating in a random or systematic killing spree demonstrating their intent to harm others with a firearm.
Active shooter's objective
They plan for mass murder, not traditional criminal acts, such as robbery or hostage-taking.
Active shooters may be motivated by:
Loss of significant relationships.
Changes in financial status, losing a job or getting fired.
Changes in living arrangements.
Major adverse changes to life circumstances.
Being bullied or feeling humiliated or rejected.
(True or False) Many active shooters show their desire to hurt others through social media posts, journal writings, and statements made to others.
(True or False) When confronted with an active shooter situation, you may encounter a chaotic situation with large numbers of injured people, fleeing crowds, and secondary hazards, such as Improvised Explosive Devices (IEDs).
(True or False) The primary response to an active shooter incident would be to stand by and wait for the SWAT team to arrive. While they are en route, help the injured.
(True or False) The current tactics for handling an active shooter situation focus on immediately locating the active shooter and neutralizing him, her, or them before helping the injured.
(True or False) Have a disaster plan in place for your family members and pets. Having a plan in place will help you concentrate better on your assigned duties without being distracted or worried about the well being of your family.
Defined by Section 790.166 as any device or object that is designed or intended to cause death or serious injury to any human or animal through the release of biological contaminants, toxic chemical agents, incendiary fires, and conventional explosives.
(True or False) It is a second-degree felony to unlawfully manufacture, possess, sell, deliver, send, mail, display, use, threaten to use, attempt to use, conspire to use, or make readily accessible to others a "hoax weapon of mass destruction."
Hoax Weapon of Mass Destruction (WMD)
Any object that is designed to appear as an actual weapon of mass destruction.
Primary locations for WMD attacks include:
Airports, subways, schools, places of worship, government buildings, or large public gatherings such as fairs, festivals, or sporting events.
CBRNE
An acronym for Chemical, Biological, Radiological, Nuclear, and Explosives. Used to identify types of hazards that a law enforcement officer may face either as part of an accidental release or intentional use of a WMD.
Awareness Role
First responders have been trained to initiate the emergency response sequence and notify authorities of the situation. They take no further action beyond notifying the authorities of the release.
Operational Role
At this level responders take defensive action to protect nearby people, property, or the environment from the effects of the release. They are trained to respond in a defensive fashion without actually trying to stop the release. Their function is to contain the release from a safe distance, keep it from spreading, and prevent exposures.
Advanced levels of training in responding to WMDs and hazardous material:
Hazardous Materials Technicians.
Hazardous Materials Specialists.
Hazardous Materials Incident Commanders.
Patrol officers are typically trained to respond at the awareness level and have only four responsibilities or goals, sometimes abbreviated as
RIP-NOT
Recognition and identification
(RIP-NOT) Recognition and identification
Officers must be able to recognize that an incident involves WMDs or hazardous materials and identify the materials involved. Avoid exposure, most materials can be identified from a safe distance.
(RIP-NOT) Isolation
Denying or restricting access to the involved area and removing uninjured and uncontaminated people from that area. The incident commander will designate the triage area, once a preliminary perimeter has been set up.
(RIP-NOT) Protection
Using personal protective equipment and evacuating nearby structures for the safety of both you and the public.
(RIP-NOT) Security
Focus more on keeping people out than letting people in. With that as your top priority, you should also try to keep the contamination from spreading by relocating contaminated and injured people to the triage area and keeping them from leaving the scene.
Secure the scene to isolate exposed victims and the contaminated area. Tactics include:
Monitoring entry to the scene.
Ensuring public protection by evacuating or protecting the area.
Confining and containing all contaminated victims.
Determining if the scene is or can be made safe for operations.
Protecting the scene and any device.
Coordinating with other agencies to provide security and control perimeters.
Emergency Response Plan (ERP)
A written plan that describes the actions that an organization will take in response to various major events.
If you are the first on the scene of a CBRNE incident, relay information to responding units through dispatch. Your responsibilities might include communicating updates and setting a perimeter. Tell dispatch about:
Any substances involved.
The number of people exposed.
The type of vehicle, container, or device involved, if known.
What is your first responsibility in a CBRNE situation?
Protect yourself. If you get hurt, the situation can only get worse.
What is your second responsibility in a CBRNE situation?
Protect other people and property.
Shelter in place
Take immediate shelter where you are—at home, work, school or in between—usually for just a few hours.
Hazardous Material
The U.S. Occupational Safety and Health Administration (OSHA) defines it as any substance or material that, when released, may cause harm, serious injury, or death to humans or animals, or harm the environment.
CBRNE chemical threats
Include both industrial chemical hazards as well as weaponized chemical hazards.
Industrial chemical hazards
Occur when hazardous materials are released due to incidents, such as accidents involving tanker or semi-trucks, railroad cars, gasoline stations, and manufacturing plants.
Weaponized chemical hazards
Usually occur as acts of terrorism or war.
Standard of care
Written, accepted levels of emergency care expected by reason of training and profession; written by legal or professional organizations so that patients are not exposed to unreasonable risk or harm.
Emergency Response Guidebook (ERG)
A preliminary action guide that helps in identifying materials, outlines basic actions for first responders, recommends areas of protective action, and gives responders an initial safety plan.
ERG Yellow and Blue Section
These sections help you identify the material.
ERG Orange Section
Lists response guidelines related to Potential Hazards, Public Safety, and Emergency Response.
ERG Green Section
Contains information on evacuation details for certain materials.
You may identify a material using the ERG by finding the following:
The four-digit number on the placard or orange panel on the container.
The name of the material on the shipping papers or packaging.
The number of the material on the shipping papers or packaging.
Only a properly equipped and trained officer should approach any potential hazmat situation, always using extreme caution. If you cannot approach from upwind, your next choice is crosswind. The main objectives are to:
Isolate the area without entering it.
Keep people away from the scene.
Make sure that people are upwind and out of low-lying areas.
(True or False) To make accurate decisions, it is essential that you identify the type of hazardous material involved, even if you have to put yourself at risk in the process.
(True or False) To identify the material, you may have to look at documents or shipping papers or interview the transport driver or facility staff.
The U.S. Department of Transportation (DOT) defines nine common classes of hazardous materials.
Explosives.
Gases.
Flammable liquids and combustible liquids
Flammable solid, spontaneously combustible, and dangerous when wet.
Oxidizers and organic peroxides.
Toxic materials and infectious substances.
Radioactive substances.
Corrosive substances.
Miscellaneous dangerous goods.
Some examples of ___ are molten sulfur, PCBs (poly-chlorinated biphenyls), and hazardous waste.
Corrosive substances
Materials in this category include acids, solvents, or other materials that may cause irreversible damage to human tissues.
Toxic Materials and Infectious Substance
These materials include medical waste and biological hazards.
Radioactive substances
Substances that contain atoms of unstable isotopes that can spontaneously emit radiation. This category includes nuclear waste, radioactive medical materials, and X-ray equipment.
Explosives
Materials that are capable of an instantaneous release of energy.
Gases
Matter without a definite shape or size. No fixed volume or shape; takes the shape of its container.
Flammable Liquids/ Combustible Liquids
These materials burn in the presence of an ignition source.
Flammable Solids
These materials are neither liquid nor gas and burn in the presence of an ignition source. Some ignite spontaneously or in the presence of heat or friction. Others are dangerous when wet.
Oxidizers and Organic Peroxides
These materials may cause spontaneous combustion or increase the intensity of a fire.
(True or False) The shape of the container involved in the hazmat incident can give useful information on the type of hazard.
The main types of containers include
portable, fixed, and transportation
(True or False) A direct relationship usually exists between the size of the container and the size of the affected geographical area. Therefore, the bigger the container, the bigger the area covered.
(True or False) Regulations govern the use of placards or labels on vehicles and facilities that store hazardous materials.
(True or False) The U.S. Department of Transportation (DOT) requires most vehicles transporting hazardous materials to display placards that describe the class of hazardous materials on board.
(True or False) Anything that holds two or more classes of hazardous materials must display a "DANGEROUS" placard and may use it instead of the specific placard for each class of material.
Exception: certain high-hazard materials, such as explosives and toxic gases, must display their specific placards and may not be covered by the "DANGEROUS" placard.
(True or False) The U.S. Environmental Protection Agency (EPA) requires all pesticides and some other chemical substances to show warning labels on the outside of the container to indicate harmful contents.
(True or False) Commercial vehicle operators are required to carry documents that list the contents of their shipment. These documents are called shipping papers.
Safety Data Sheet (SDS), formerly Material Safety Data Sheet (MSDS)
The sheet that provides information on the safe use of and hazards of chemicals, as well as emergency steps to take in the event chemicals are splashed, sprayed, or ingested.
The National Fire Protection Association has developed a standard facility marking system called the ___.
704 System
Placed on the outside of structures or storage facilities, this large symbol indicates that hazardous products are stored. The diamond-shaped symbol is divided into four segments that indicate the following risks:
Blue: Health hazards.
Red: Flammability hazards.
Yellow: Reactivity.
White: Other (provides information on any special hazards or material).
Sight and hearing are considered lower-risk senses when identifying hazmats. Use these senses from a safe distance and look for the following:
Pressure release.
Smoke or fire.
Liquids, gas leaks, or vapor cloud.
Condensation on pipelines or containers.
Chemical reactions.
Mass casualties.
The orange color-coded pages are the most important part of the Emergency Response Guidebook (ERG). This section has 3 main topics for each substance identified:
Potential Hazards.
Public Safety.
Emergency Response.
The ERG's Public Safety topic has three subsections:
The ERG's Emergency Response topic has three subsections:
Spill or leak
All awareness-level responders should follow agency policies and procedures to terminate their involvement in a hazmat incident. The three steps are:
On-scene debriefing.
Incident critique.
After-action analysis.
On-Scene Debriefing
Officers are advised of the materials to which they may have been exposed, signs and symptoms of overexposure, and who to contact if they notice signs or symptoms of exposure.
During the incident critique phase, officers provide information on operational strengths and weaknesses.
In the after-action analysis, the agency's goal is to review any weaknesses and implement any additional or corrective training.
Incident Critique
Officers provide information on operational strengths and weaknesses.
After-action Analysis
The agency's goal is to review any weaknesses and implement any additional or corrective training.
Meth lab
Locations where methamphetamine is manufactured. Can be as small as a soda bottle or as large as a warehouse.
(True or False) Dangerous chemicals used in the manufacturing process of meth can be found anywhere in a home, vehicle, vessel, shed, motel, or other location.
(True or False) The ingredients used to produce meth are not always flammable, so there is no risk of a violent explosion or release of toxic gases. In fact, there is no danger from inhaling the chemical fumes or from turning on any electrical switches.
The decontamination process for a meth lab:
Evacuate the occupants and leave the premises immediately.
Do not place anything in the patrol car before decontamination or allow the removal of any item from the site.
It is necessary to establish a perimeter and follow agency policies and procedures for meth lab response.
Remember that many meth labs are mobile and are found in vehicles.
Chemical suicide
A method of committing suicide by mixing two or more easily acquired chemicals, commonly an acid and a base.
Examples of chemical WMDs include:
Sarin, a nerve agent.
Chlorine, a choking agent.
(True or False) A primary indicator of chemical exposure is the quick onset of symptoms, which can appear within minutes or hours.
Symptoms of Exposure to Nerve Agents
Blurred vision, uncontrolled twitching, convulsions, seizures, or respiratory distress.
Symptoms of Exposure to Choking Agents
Respiratory distress, burning of the lungs and airways, choking, and coughing.
What is the most common substance used in chemical suicide?
Hydrogen cyanide
(True or False) Biological weapons contain living organisms and are unpredictable and uncontrollable when released.
What is a key indication of exposure to biological agents?
The onset of symptoms, which may appear within a few hours, or may develop over a period of days.
Aside from written or verbal threats, possible indicators of a biological attack include:
Unusual numbers of sick or dying people or animals.
An unusually high occurrence of respiratory problems in diseases that typically cause a non-pulmonary syndrome.
Unexplained damage and ruin to crops and agriculture products.
Abnormal swarms of insects.
Unscheduled or unusual spraying or fogging.
Casualty distribution that corresponds with wind direction.
Abandoned spray or distribution devices.
The appearance of containers from laboratory or biological supply houses or bio-hazard cultures.
The most common examples of biological weapons include:
Anthrax, smallpox, ricin, and botulinum toxin.
Anthrax
A naturally occurring bacterium. Infection can occur through the skin and by inhalation. Symptom onset occurs from one day to over two months after the infection is contracted.
The skin form presents with a small blister with surrounding swelling that often turns into a painless ulcer with a black center.
The inhalation form presents with fever, chest pain, and shortness of breath.
Smallpox
A contagious infectious disease that can be transmitted by prolonged face-to-face contact with an infected person, direct contact with infected bodily fluids, and direct contact with infected objects such as clothes.
Ricin
A toxin derived from the mash that is left from the castor bean; causes pulmonary edema and respiratory and circulatory failure leading to death.
Botulinum toxin
A neurotoxin produced by a bacterium. It can be absorbed through the eyes, mucous membranes, respiratory tract, and broken skin.
Symptoms include difficulty seeing, speaking, and swallowing and having double vision, drooping eyelids, slurred speech, dry mouth, and muscle weakness.
Dirty Bomb
Also known as radiation dispersal devices, these are traditional bombs with radioactive materials loaded into the casing.
They are not nuclear weapons, because they do not contain the same explosive power and their radioactive material is already in the bomb.
(True or False) A dirty bomb is not a hazmat incident.
Class 1 Hazardous Materials
Explosives.
Examples of Class 1 Hazardous Materials
Small arms ammunition.
Class 2 of Hazardous Materials
Gases.
Class 3 of Hazardous Materials
Flammable liquids.
Examples of Class 4 Hazardous Materials (flammable solids)
Sulfur; calcium carbide (dangerous when wet).
Class 6 of Hazardous Materials
Toxic materials and infectious substances.
Example of Class 7 Hazardous Materials (radioactive substances)
Nuclear waste.
Class 8 of Hazardous Materials
Corrosive materials.
Examples of Class 9 Hazardous Materials (miscellaneous dangerous goods)
Magnetized metals; auto-inflating devices; molten sulfur.
Many places require facilities to keep documents that outline the type of hazardous materials stored or manufactured on site. One example of a facility document is the ___.
Yellow section of an Emergency Response Guidebook (ERG):
References the material in order of its assigned 4-digit ID number/UN/NA number.
Blue section of an Emergency Response Guidebook (ERG):
References the material in alphabetical order of its name and identifies the appropriate guide number to reference in the Orange Section.
Orange section of an Emergency Response Guidebook (ERG):
The actual response guides. It includes information for responders on appropriate protective clothing.
Green section of an Emergency Response Guidebook (ERG):
Suggests initial evacuation or shelter in place distances (protective action distances).
Sight and hearing are considered low-risk senses when identifying hazmat situations. Use these senses from a safe distance and look for the following:
Pressure release
Liquids, gas leaks, or vapor cloud
Condensation on pipelines or containers
Mass casualties
NFPA 704 Diamond (Blue)
Health hazards.
NFPA 704 Diamond (Red)
Flammability hazards.
NFPA 704 Diamond (Yellow)
Reactivity.
NFPA 704 Diamond (White)
Other (provides information on any special hazards of the material).
Common methods used in making meth:
One-pot "Shake and Bake" method
Red Phosphorus method
"Nazi" (anhydrous ammonia) method
(True or False) Areas surrounding meth labs often have dead vegetation.
Incendiary devices consist of a minimum of three components:
The ignition source.
The combustible filler material.
A housing/container.
Indicators of a Vehicle-Borne Improvised Explosive Device (VBIED) include the following:
A threat that specifically mentions explosives in a vehicle.
A vehicle parked suspiciously close to a building or in a restricted parking area.
A vehicle that is unfamiliar to building occupants or seems to have a heavy load.
A vehicle that has a strange smell or leaks powder or liquid.
Vehicle driver or passenger leaves in a hurry.
Bomb dog alerting officers that the vehicle is a threat.
(True or False) Military devices can be easily acquired and are generally recognizable.
Mail bombs can be difficult to detect. Some possible signs are envelopes and packages that:
Are rigid.
Have too much postage.
Have misspellings of common words.
Are handwritten.
Have poorly typed addresses.
Have discoloration.
Have protruding wires.
Have strange odors.
____ are traditional bombs with radioactive materials loaded into the casing. They are not considered nuclear weapons because they do not contain the same explosive power and their radiation is preloaded, whereas nuclear weapons create radioactivity upon detonation.
(True or False) Because dirty bombs require a casing and a detonation mechanism, these pieces may be present at the scene of an explosion.
What is the intention of a dirty bomb?
To cause psychological panic and physical harm. Also, health effects might include gastrointestinal disorders, bacterial infections, and hemorrhaging.
(True or False) Military devices can be easily acquired and are generally recognizable.
Improvised Explosive Device (IED)
An ___ is a homemade bomb constructed and deployed in ways other than conventional military action and can be made from commercially available materials.
Vehicle-borne Improvised Explosive Device (VBIED)
A motor vehicle used as a bomb is referred to as a ___. They can be very powerful and dangerous. They are capable of carrying extremely large amounts of explosives.
Indicators of a VBIED
A vehicle that is parked suspiciously close to a building or in a restricted parking area without a proper decal or sticker.
A car that is unfamiliar to building occupants or seems to have a heavy load, indicated by riding low on its rear axle.
Reports that a driver or passenger exited a vehicle and left hurriedly.
A bomb dog alerting officers that a vehicle is a threat.
Incendiary devices consist of a minimum of three components:
An ignition source.
A combustible filler material.
A container.
Incendiary device
Material or chemicals designed and used to start a fire.
Examples of Incendiary Devices
Molotov Cocktail
Firebombs
A creative bomb maker can construct an explosive device to detonate through a number of methods. Some examples include:
Tripwires
Infrared beams
(True or False) Evacuation distance from a vehicle should be much greater than evacuation distance from a building because a VBIED is potentially very large, and pieces of the vehicle can act as shrapnel.
When responding to a bomb threat, get as much information as possible from dispatch. This information guides your actions upon arrival. Information collected should include:
The nature of the complaint.
The means of the threat.
The time the threat was received.
The alleged time of detonation.
A description of the device.
The location of device.
Who received the threat.
A common policy is to be out of the building at least ____ minutes before the alleged time of detonation and not return until the building has been cleared.
(True or False) When approaching a possible bomb situation, you must decide whether to turn off radios and radio wave-transmitting devices. It may prevent the accidental triggering or detonation of a bomb designed to explode by radio waves.
What is the major issue in determining whether to search or evacuate the premises because of a bomb threat?
Credibility of the threat.
In a bomb threat situation, the decision to conduct a search depends on different factors:
Permission to search a building or area.
The level of risk for those conducting the search.
The credibility and amount of detail provided in the threat.
Additional threats or the possibility of secondary devices.
Agency policies on officers searching for explosives.
In a bomb situation, the decision to conduct a search depends on different factors:
The credibility and amount of detail provided in the threat, as discussed in the previous lesson.
(True or False) In most cases, you must ask the owner or building representative to search the property, and they must give you permission.
(True or False) In an emergency, if the owner or building representative cannot be located, you may conduct the search without consent.
First step for conducting a planned search for a potential bomb inside a building:
An exterior search of the building perimeter. Follow this by searching evacuation routes, evacuee collection points, staging areas, and command posts.
Second step for conducting a planned search for a potential bomb inside a building:
Conduct an interior search. Be sure to look for any items that seem out of place, and search potential hiding spots.
Third step for conducting a planned search for a potential bomb inside a building:
Search the publicly accessible areas, including entryways and foyers, lobbies, waiting areas, restrooms, cleaning and storage closets, and elevator shafts.
Fourth step for conducting a planned search for a potential bomb inside a building:
A building's interior search should go from bottom to top, beginning with the basement areas, including utility rooms and areas of heating, cooling, electrical power, and telephone equipment.
Secondary explosive devices
Bombs placed at the scene of an ongoing emergency response that are intended to cause casualties among responders.
How do we know that radioactive decay rates are constant over billions of years?
A friend and I recently discussed the idea that radioactive decay rates are constant over geological times, something upon which dating methods are based.
A large number of experiments seem to have shown that decay rate is largely uninfluenced by the environment (temperature, solar activity, etc.). But how do we know that decay rates are constant over billions of years? What if some property of the universe has remained the same over the one hundred years since radioactivity was discovered and measured, but was different one billion years ago?
An unsourced statement on the Wikipedia page on radioactive decay reads:
[A]strophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us) strongly indicate that unperturbed decay rates have been constant.
I'm interested in verifying constancy of decay rates over very long periods of time (millions and billions of years). Specifically, I'm not interested in radiocarbon dating or other methods for dating things in the thousands-of-years range. Radiocarbon dates, used for dating organic material younger than 50,000 years, are calibrated and cross-checked with non-radioactive data such as tree rings of millennial trees and similarly countable yearly deposits in marine varves, a method of verification that I find convincing and that I am not challenging here.
Pertinax
$\begingroup$ Isn't this question of the same vein as questions about whether the fine structure, the cosmological constant, the speed of light, etc. have remained constant over billions of years? With the apparent lack of any strong theoretical argument of why these parameters should be expected to change over the past few billions of years, and the absence of any experiments or astronomical observations that suggest these parameters are changing, I suppose that most people just take the Occam's Razor approach and assume that these parameters are constant until evidence appears suggesting otherwise. $\endgroup$ – user93237 May 23 '17 at 19:31
$\begingroup$ @Samuel I've got nothing against assumptions, but I like to know where they are made. I come from a discipline where people are already regularly telescoping six or seven assumptions without even realising it, justifying each one of them with Occam's razor, and arriving at a conclusion they call the "most likely" that to me sounds little better than "least unlikely". This assumption does seem very likely true, but so much in archaeology rests upon it that I would be happy if it could be grounded on more than parsimony and be observationally confirmed. $\endgroup$ – Pertinax May 23 '17 at 19:57
$\begingroup$ Related: physics.stackexchange.com/q/48543/50583, physics.stackexchange.com/q/7008/50583 (on variability of half-life and non-exponential decay), physics.stackexchange.com/q/78684/50583 (on the meaningfulness of the "change" of a dimensionful constant over time), $\endgroup$ – ACuriousMind♦ May 23 '17 at 21:01
$\begingroup$ It's a good question! I don't think any of the linked questions really cover it. Decay rates can be derived in principle from the Standard Model coupling constants, and I doubt that they can be changed much without changing basically everything else (e.g. making nuclear fusion go too fast or slow, changing stellar spectra), but I don't know enough to pin it down. $\endgroup$ – knzhou May 23 '17 at 21:10
$\begingroup$ @TheThunderChimp See for example xxx.lanl.gov/abs/astro-ph/9912131 and xxx.lanl.gov/abs/astro-ph/9901373 $\endgroup$ – hdhondt May 24 '17 at 23:24
Not an answer to your exact question but still so very related that I think it deserves to be mentioned: the Oklo natural nuclear reactor, discovered in 1972 in Gabon (Central Africa). Self-sustaining nuclear fission reactions took place there 1.8 billion years ago. Physicists quickly understood how they could use this as a very precise probe into neutron capture cross sections that far back. Actually, a re-analysis of the data [1] was published in 2006 featuring one of the authors of the original papers from the 1970s. The idea is that neutron capture is greatly augmented when neutron energy gets close to a resonance of the capturing nucleus. Thus even a slight shift of those resonance energies would have resulted in a dramatically different outcome (a different mix of chemical compounds in the reactor). The conclusion of the paper is that those resonances did not change by more than 0.1 eV.
It should be noted that the most interesting outcome from the point of view of theoretical physics is that this potential shift can be related to a potential change of the fine-structure constant $\alpha$. The paper concludes that
$$−5.6 \times 10^{−8} < \frac{\delta\alpha}{\alpha} < 6.6 \times 10^{−8}$$
[1] Yu. V. Petrov, A. I. Nazarov, M. S. Onegin, V. Yu. Petrov, and E. G. Sakhnovsky, Natural nuclear reactor at oklo and variation of fundamental constants: computation of neutronics of a fresh core, Phys. Rev. C 74 (2006), 064610. https://journals.aps.org/prc/abstract/10.1103/PhysRevC.74.064610
$\begingroup$ Kudos for mentioning the Oklo natural reactor, which is one of the coolest bits of physics that I'm aware of. $\endgroup$ – Michael Seifert May 24 '17 at 14:37
The comment Samuel Weir makes on the fine structure constant is pretty close to an answer. For electromagnetic transitions of the nucleus, these would change if the fine structure constant changed over time. Yet spectral data on distant sources indicates no such change. The atomic transitions would change their energies and we would observe photons from distant galaxies with different spectral lines.
For the weak and strong nuclear interactions, the answer is more difficult or nuanced. For the strong interactions, we have more of an anchor. If strong interactions changed their coupling constant this would impact stellar astrophysics. Stars in the distant universe would be considerably different than they are today. Again observations of distant stars indicate no such drastic change. For weak interactions, things are more difficult.
A lot of nuclear decay is by weak interactions and the production of $\beta$ radiation as electrons and positrons. Creationists might argue the rate of weak interactions was considerably larger in the recent past to give the appearance of more daughter products than what occurs today. This then gives the appearance of great age that is not there. The problem with carbon dating with the decay process $$ {}^{14}_{6}\mathrm{C}~\rightarrow~{}^{14}_{7}\mathrm{N}~+~e^{-}~+~\bar{\nu}_{e} $$ is that if this rate had changed over the last $6000$ years, a favorite time for creationists, there would be deviations between carbon dating methods and the historical record.
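As a worked illustration of why such a change would be detectable (the 5,730-year half-life is the accepted value for $^{14}$C; the rest is standard decay arithmetic):

$$ \lambda = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{5730~\text{yr}} \approx 1.21\times 10^{-4}~\text{yr}^{-1}, \qquad N(t) = N_{0}\,e^{-\lambda t}. $$

A sample with $N/N_{0} = 0.5$ dates to 5,730 years; had the decay rate been, say, twice as large over part of that interval, the same measured ratio would correspond to a substantially younger true age, and radiocarbon dates would visibly diverge from tree-ring and historical chronologies.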
None of this is proof really, but it does fall in line with Bertrand Russell's idea of a teapot orbiting Jupiter.
Lawrence B. CrowellLawrence B. Crowell
$\begingroup$ The "Teapot orbiting Jupiter" seems a very weak response to this. That is a response for proposals that are (currently) complely unobservable, hence both unverifiable and unfalsifiable. Having provided hints about how we actually can observe indirect effects of radioactive decay rates elsewhere (and elsewhen), don't undermine that limited observability by likening it to Russell's proposition which, by design, is thoroughly undecideable. $\endgroup$ – Steve Jessop May 25 '17 at 11:04
$\begingroup$ Of course ignoring the hypothetical possibility of changes from a misapplication of Occam is even worse. We know that many kinds of particle behaviour at very high energies are markedly different from low energies, and hence different at very early epochs of the universe. Physicists should and do seek evidence one way or the other for whether things change, and if so what, how, why. There's a difference between looking and not finding, vs. not looking, and the situation here is the former. "Nothing to see here, move along" only needs to be deployed when you're actually hiding something ;-) $\endgroup$ – Steve Jessop May 25 '17 at 11:15
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – ACuriousMind♦ May 25 '17 at 14:15
$\begingroup$ You may wish to qualify "creationists" as "young Earth creationists." $\endgroup$ – jpmc26 May 25 '17 at 23:25
$\begingroup$ Having once argued the position, I can say absolutely this makes no attempt to answer the Young Earth Creationist claim whatsoever. The claim's nature is a sudden change of rate around either the flood or the event around the time of Peleg. $\endgroup$ – Joshua May 26 '17 at 21:34
There are various questions that one would have to answer, if one wished to claim that there had been large changes in decay rates over geological time. Here is what I think might be the best experiment to prove this claim.
Without using radiological evidence, one can deduce that the Earth is at least a billion years old by counting annual sedimentation layers and measuring thicknesses of rock strata, and cross-correlating between them by presence of identical or near-identical fossil species. This is what Victorian geologists did, leading to the only case I know where geology beat physics for deducing the truth. The physicists asserted that the world could not be much older than 50 million years, because no known chemical process could keep the sun hot for longer than that. The geologists insisted on at least a billion years, and that if it wasn't chemistry, something else must be powering the sun. They were right. The Sun shines by then-unknown nuclear fusion, not chemistry. BTW, it's "at least" because it is hard to find sedimentary rocks more than a billion years old, and such rocks do not contain helpful fossils. Tectonic activity has erased most evidence of pre-Cambrian ages ... except for zircons, but I'm jumping ahead.
Now, jump forwards to today, when we can do isotopic microanalysis of uranium and lead inside zircon (zirconium silicate) crystals. (Skip to the next paragraph if you know about radio-dating zircons.) Zircon has several unique properties. An extremely high melting point. Extreme hardness, greater than quartz. High density. Omnipresence (zirconium in melted rock always crystallizes into zircons as the melt cools, before any other minerals crystallize at all). And most importantly, a very tight crystal structure, which cannot accommodate most other elements as impurities at formation. The main exception is uranium. The only way that lead can get into a zircon crystal, is if it started as uranium which decays into lead after the crystal has solidified from a melt. That uranium comes in two isotopes with different decay times, and each decay chain ends with a different lead isotope. By measuring the relative concentrations of two lead and two uranium isotopes in a zircon, you can deduce the time since it formed using two different "clocks". These zircons are typically the size of grains of sand, so a rock sample will contain millions of independent "clocks" which will allow for good statistical analysis.
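For reference, the two "clocks" are the standard U–Pb age equations (the decay constants below are the accepted modern values):

$$ t_{238} = \frac{1}{\lambda_{238}}\ln\left(1 + \frac{^{206}\text{Pb}}{^{238}\text{U}}\right), \qquad t_{235} = \frac{1}{\lambda_{235}}\ln\left(1 + \frac{^{207}\text{Pb}}{^{235}\text{U}}\right), $$

with $\lambda_{238} \approx 1.55\times 10^{-10}~\text{yr}^{-1}$ and $\lambda_{235} \approx 9.85\times 10^{-10}~\text{yr}^{-1}$. Agreement between the two independently computed ages ("concordance") is the internal consistency check exploited below.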
So, let's find some zircons in an igneous intrusion into a sedimentary rock whose age we know, roughly, by Victorian geology. It's best if the igneous rock is one which formed at great depth, where all pre-existing zircons would have dissolved back into the melt. The presence of high-pressure metastable minerals such as diamond or olivine would allow us to deduce this, and the fact that all the zircons have the same uranium-to-lead ratios would confirm the deduction. Otherwise one would expect to find a mix of young and older zircons. Choose the youngest, which would have crystallized at the time of the intrusion, rather than having been recycled by tectonic activity from an older time. (Which in many cases is the primaeval solidification of the Earth's crust, and the best estimation of the age of our planet, but that's not relevant here).
Now, compare the age deduced by radioactive decay, to the less accurate age from Victorian geology. If the rate of radioactive decay has changed greatly over geological deep time, there will be a disagreement between these two estimated ages. Furthermore, the disagreement will be different for intrusions of different ages (as judged by Victorian geology), but consistent for intrusions of similar age in different location.
Look for locations where there is a sedimentary rock with intrusion, covered by a younger sedimentary rock without intrusion, meaning that the age of the intrusion can be deduced to be between that of the two sedimentary strata. The closer the age of the two sedimentary strata, the better.
I do not know if this has been done (I'd certainly hope so). Any serious proponent of time-varying radioactive decay, needs to research this. If nobody has looked, get out in the field, find those discrepancies, and publish. It might lead to a Nobel prize if he is right. The onus is certainly on him to do this, because otherwise Occam's razor applies to this theory.
Back to the physics. I'd ask another question if this observation fails to uncover strong evidence that radioactive decay rates vary with time. It is this: how is it that the $^{238}$U and $^{235}$U "clocks" in zircons always agree? Radioactive decay is basically quantum tunnelling across a potential barrier. The half-life depends exponentially on the height of the barrier. Any proposed time variation would mean that the height of this barrier varied in deep time in such a way that the relative rate of $^{235}$U and $^{238}$U decay did not change. Which is a big ask of any such theory, given the exponential sensitivity to changes.
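To make that exponential sensitivity concrete: for alpha decay, the tunnelling (Gamow) calculation gives, to leading order, the Geiger–Nuttall form

$$ \log_{10} t_{1/2} \approx \frac{a\,Z}{\sqrt{Q}} + b, $$

where $Q$ is the decay energy, $Z$ the charge of the daughter nucleus, and $a$, $b$ are roughly constant across similar nuclei (a standard textbook result, quoted here for illustration). Because $Q$ sits inside the exponent, a shift of only a few percent in the barrier changes $t_{1/2}$ by orders of magnitude, and by different amounts for $^{235}$U and $^{238}$U, which have different $Q$ values.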
nigel222
$\begingroup$ Great answer, I very much appreciate the "how to test" approach, and the idea of counting sedimentary layers to cross-check the radiodates seems like the good one, especially since this dating method was used as long ago as Victorian times (I find this of historical interest, any nineteenth century sources on this? Did anyone actually manually count to one billion?). @DavidHammen suggests that some cross-checking has already been done, do you (or him) have any sources on this? $\endgroup$ – Pertinax May 24 '17 at 16:36
$\begingroup$ RE U235-U238: Would a change of the, for instance, weak interaction be expected to change the relative rate? $\endgroup$ – Pertinax May 24 '17 at 16:45
$\begingroup$ @TheThunderChimp you can download Sir Charles Lyell's "Principles of Geology" for free from Amazon Kindle or public domain. It's a seriously weighty tome and he lacked Darwin's gift for the English language. But its interesting to dip into, to find the state of Victorian geology. $\endgroup$ – nigel222 May 24 '17 at 17:45
$\begingroup$ Re relative decay rates: it might be possible to formulate a theory which kept the relative decay rates of U235 and U238 the same while varying both. My instincts tell me that this would be hard (especially when other longlived isotopes are also checked). $\endgroup$ – nigel222 May 24 '17 at 17:51
$\begingroup$ The last paragraph, if I understand it, is actually an excellent point all on its own because it means that changes to fundamental constants would not produce proportional changes in decay rates. That alone should provide all the basis needed to refute any significantly shorter-timeline hypothesis. $\endgroup$ – RBarryYoung May 24 '17 at 20:20
The basic point here is that we don't "know" anything about "the real world". All we have is a model of the world, and some measure of how well the model matches what we observe.
Of course, you can construct an entirely consistent model which says "an invisible, unobservable entity created everything I have ever observed one second before I was born, and made it appear to be much older for reasons that cannot be understood by humans". But as Newton wrote in Principia in the section where he states his "rules for doing science," hypotheses non fingo - don't invent theories just for the sake of inventing them.
Actually one of the examples Newton gave to illustrate that point was spectacularly wrong - he used his general principle to conclude that the sun gives off light and heat by the same chemical reactions as a coal fire on earth - but that's not the point: given the limited experimental knowledge that he had, he didn't need a different hypothesis about the sun to explain what was known about it.
So, the situation between you and your friend is actually the other way round. You (and all conventional physicists) have a model of the universe which assumes these constants don't change over time, and it fits very well with experimental observations. If your friend wants to claim they do change, the onus is on him/her to find some observable fact(s) which can't be explained in any other way - and also to show that his/her new hypothesis doesn't mess up the explanations of anything else.
As some of the comments have stated, if you start tinkering with the values of the fundamental constants in the Standard Model of particle physics, you are likely to create an alternative model of the universe which doesn't match up with observations on a very large scale - not just over the dating of a few terrestrial fossils.
The "big picture" approach is critically important here. You can certainly make the argument that finding a fossil fish on the top of a high mountain means there must have been a global flood at some point in history - but once you have a global model of plate tectonics, you don't need to consider that fossilized fish as a special case any more!
alephzero
$\begingroup$ I don't think this gets to the heart of the question: what exactly would go wrong if a coupling constant changed? This isn't a crazy idea, as many of them did change in the early universe. We don't "need" to prove this, but we should easily be able to. $\endgroup$ – knzhou May 23 '17 at 22:04
$\begingroup$ I think this is ultimately not the right answer. Physicists' belief that the fundamental constants involved haven't changed is not an a priori deduction from Ockham's razor but an a posteriori hypothesis resulting from many independent lines of evidence, including measurements and modelling, as the other answers detail. $\endgroup$ – Nathaniel May 24 '17 at 6:01
I thought I would include something on how coupling constants and masses vary. This might be a bit off topic, and I thought about asking a question that I would answer myself. Anyway here goes.
We have a number of quantities in the universe that are related to each other by fundamental constants. The first two of these are time and space, which are related to each other by the speed of light $x~=~ct$. The speed of light is something I will consider to be absolutely fundamental. It really is in correct units a light second per second or one. The speed of light defines light cones that are projective subspaces of Minkowski spacetime. Minkowski spacetime can be thought of then as due to a fibration over the projective space given by the light cone. The other fundamental quantity that relates physical properties is the Planck constant $h$ or $\hbar~=~h/2\pi$. This is seen in $\vec p~=~\hbar\vec k$ where $\vec k~=~2\pi\hat k/\lambda$. This relates momentum and wavelength, and is also seen in the uncertainty principle $\Delta p\Delta x~\ge~\hbar/2$. The uncertainty principle can be stated according to the Fubini-Study metric, which is a fibration from a projective Hilbert space to Hilbert space. These two systems share remarkably similar structure when seen this way. I will then say as a postulate that $c$ and $\hbar$ are absolutely constant, and since momentum is reciprocal length then in natural units the Planck constant is length per length and is unitless.
There are other constants in nature such as the electric charge. The important constant most often cited is the fine structure constant $$ \alpha~=~\frac{e^2}{4\pi\epsilon\hbar c}~\simeq~1/137. $$ This constant is absolutely unitless. In any system of units it has no units. In natural systems of units we have that $e^2/4\pi\epsilon$ has the units of $\hbar c$, which in MKS units is $\mathrm{J\cdot m}$. However, we know from renormalization that $e~\rightarrow~e_0~+~\delta e$ is a correction with $\delta e~\sim~1/\delta^2$, for $\delta~=~1/\Lambda$ the cutoff in space scale for a propagator or the evaluation of a Feynman diagram. This means the fine structure constant can change with scattering energy, and at the TeV energies of the LHC $\alpha'~\sim~1/127$. We have of course the strong and weak interactions, and we can well enough state there are coupling constants $e_s$ and $e_w$ and the analogues of the dielectric constants $\epsilon_s$ and $\epsilon_w$, so there are the fine structure constants $$ \alpha_s~=~\frac{e_s^2}{4\pi\epsilon_s\hbar c}~\simeq~1,~\alpha_w~=~\frac{e_w^2}{4\pi\epsilon_w\hbar c}~\simeq~10^{-5}. $$ Most often these coupling constants are written $g_s$ and $g_w$. These two have renormalizations $g_s~=~g^0_s~+~\delta g_s$ and $g_w~=~g^0_w~+~\delta g_w$; this runs into the hierarchy problem and how coupling constants vary.
What is clear is that gauge coupling constants vary with momentum. They do not vary with time, which by $x~=~ct$, or more generally Lorentz boosts, means that if gauge couplings did vary with time they would also vary with spatial distance. So far there is no observational data showing such variation in radiation emitted from the very distant universe.
What about gravitation and mass? We do have mass renormalization $m~\rightarrow~m~+~\delta m$. This can mean the mass of a particle can be renormalized at higher energy, and moreover it means terms due to vacuum energy contributions that renormalize the mass of a bare particle must add up and cancel to give the mass we observe. Again this happens with momentum. For the Higgs field the self-interaction is due to the $\lambda\phi^4$ term. Technically this means there is a mass renormalization term $\sim~\lambda/\delta^2~=~\lambda\Lambda^2$ for $\delta$ a small region around the point for the $4$-point interaction, where we have smeared it out into some small ball or disk of radius $\delta$. Also $\Lambda$ is the corresponding momentum cutoff. We have similar physics for other fields, though fermions have subtle sign issues.
I used the Higgs field because I think there is a deep relationship between gravitation and the Higgs field. From this I am going to compute what I think is the appropriate $\alpha_{grav}$. We can compute the ratio of the Compton wavelength $\lambda~=~h/M_Hc$ and gravitational radius $r~=~2GM_H/c^2$ of a Higgs particle, with mass $m~=~125~GeV$ $=~2.2\times 10^{-25}~kg$. This means $$ \alpha_g~=~\frac{4\pi GM_H^2}{\hbar c}~=~4\pi\left(\frac{M_H}{M_p}\right)^2~=~1.3\times 10^{-33}, $$ where $M_p$ is the Planck mass. This constant is then connected to the mass of all elementary particles. The renormalization of the Higgs mass determines the mass of all other particles.
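As a quick arithmetic check of that value (using $M_{p} \approx 1.22\times 10^{19}~GeV$):

$$ \alpha_{g} = 4\pi\left(\frac{125~GeV}{1.22\times 10^{19}~GeV}\right)^{2} \approx 4\pi\times 1.05\times 10^{-34} \approx 1.3\times 10^{-33}. $$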
There is then no indication of any variation of particle masses or coupling constants that depends on time. They all depend on momenta, and the large number of Feynman diagram terms at various orders add and cancel to give observed masses. With supersymmetry this is made somewhat simpler with the cancellation of many diagrams.
Two-stage matching-adjusted indirect comparison
Antonio Remiro-Azócar
Anchored covariate-adjusted indirect comparisons inform reimbursement decisions where there are no head-to-head trials between the treatments of interest, there is a common comparator arm shared by the studies, and there are patient-level data limitations. Matching-adjusted indirect comparison (MAIC), based on propensity score weighting, is the most widely used covariate-adjusted indirect comparison method in health technology assessment. MAIC has poor precision and is inefficient when the effective sample size after weighting is small.
A modular extension to MAIC, termed two-stage matching-adjusted indirect comparison (2SMAIC), is proposed. This uses two parametric models. One estimates the treatment assignment mechanism in the study with individual patient data (IPD), the other estimates the trial assignment mechanism. The first model produces inverse probability weights that are combined with the odds weights produced by the second model. The resulting weights seek to balance covariates between treatment arms and across studies. A simulation study provides proof-of-principle in an indirect comparison performed across two randomized trials. Nevertheless, 2SMAIC can be applied in situations where the IPD trial is observational, by including potential confounders in the treatment assignment model. The simulation study also explores the use of weight truncation in combination with MAIC for the first time.
Despite enforcing randomization and knowing the true treatment assignment mechanism in the IPD trial, 2SMAIC yields improved precision and efficiency with respect to MAIC in all scenarios, while maintaining similarly low levels of bias. The two-stage approach is effective when sample sizes in the IPD trial are low, as it controls for chance imbalances in prognostic baseline covariates between study arms. It is not as effective when overlap between the trials' target populations is poor and the extremity of the weights is high. In these scenarios, truncation leads to substantial precision and efficiency gains but induces considerable bias. The combination of a two-stage approach with truncation produces the highest precision and efficiency improvements.
Two-stage approaches to MAIC can increase precision and efficiency with respect to the standard approach by adjusting for empirical imbalances in prognostic covariates in the IPD trial. Further modules could be incorporated for additional variance reduction or to account for missingness and non-compliance in the IPD trial.
In many countries, health technology assessment (HTA) addresses whether new treatments should be reimbursed by public health care systems [1]. This often requires estimating relative effects for interventions that have not been directly compared in a head-to-head trial [2]. Consider that there are two active treatments of interest, say A and B, that have not been evaluated in the same study, but have been contrasted against a comparator C in different studies. In this situation, an indirect comparison of relative treatment effect estimates is required. The analysis is said to be anchored by the common comparator C.
A typical situation in HTA is that where a pharmaceutical company has individual patient data (IPD) from its own study comparing A versus C, which we shall denote the index trial, but only published aggregate-level data (ALD) from another study comparing B versus C, which we call the competitor trial. In this two-study scenario, cross-trial imbalances in effect measure modifiers, effect modifiers for short, make the standard indirect treatment comparisons [3] vulnerable to bias [4]. Novel covariate-adjusted indirect comparison methods have been introduced to account for these imbalances and provide equipoise to the comparison [5,6,7,8,9].
The most popular methodology [10] in peer-reviewed publications and submissions for reimbursement is matching-adjusted indirect comparison (MAIC) [11,12,13]. MAIC weights the subjects in the index trial to create a "pseudo-sample" with balanced moments with respect to the competitor trial. The standard formulation of MAIC proposed by Signorovitch et al. [11] uses a method of moments to estimate a logistic regression, which models the trial assignment mechanism. The weights are derived from the fitted model and represent the odds of assignment to the competitor trial for the subjects in the IPD, conditional on selected baseline covariates.
Under no failures of assumptions, MAIC has produced unbiased treatment effect estimation in simulation studies [7, 14,15,16,17,18,19,20]. Nevertheless, there are some concerns about its inefficiency and instability, particularly where covariate overlap is poor and effective sample sizes (ESSs) after weighting are small [21]. These scenarios are pervasive in health technology appraisals [10]. In these cases, weighting methods are sensitive to inordinate influence by a few subjects with extreme weights and are vulnerable to poor precision. A related concern is that feasible numerical solutions may not exist where there is no common covariate support [21, 22]. Where overlap is weak, methods based on modeling the outcome expectation exhibit greater precision and efficiency than MAIC [21, 23,24,25] but are prone to extrapolation, which may lead to severe bias under model misspecification [26, 27].
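For reference, the approximate effective sample size conventionally reported alongside MAIC is

$$ \widehat{\textrm{ESS}} = \frac{\left(\sum_{i=1}^{n} w_{i}\right)^{2}}{\sum_{i=1}^{n} w_{i}^{2}}, $$

which equals $n$ when the weights are uniform and shrinks as the weights become more variable.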
Consequently, modifications of MAIC that seek to maximize precision have been presented. An alternative implementation estimates the weights using entropy balancing [17, 28]. The proposal is similar to the standard method of moments, with the additional constraint that the weights are as close as possible to unit weights, potentially penalizing extreme weighting schemes. While the approach has appealing computational properties, Phillippo et al. have proved that it is mathematically equivalent to the standard method of moments [29].
More recently, Jackson et al. have developed a distinct weight estimation procedure that satisfies the conventional method of moments while explicitly maximizing the ESS [22]. This translates into minimizing the dispersion of the weights, with more stable weights improving precision at the expense of inducing bias.
Other approaches to limit the undue impact of extreme weights involve truncating or capping the weights. These are common in survey sampling [30] and in many propensity score settings [31, 32] but are yet to be investigated specifically alongside MAIC. Again, a clear trade-off is involved from a bias-variance standpoint. Lower variance comes at the cost of sacrificing balance and accepting bias [33, 34]. Limitations of weight truncation are that it shifts the target population or estimand definition, and that it requires arbitrary ad hoc decisions on cutoff thresholds.
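As a concrete sketch in R (illustrative only; the 95th-percentile cutoff is an arbitrary example of the kind of ad hoc threshold just mentioned, not a recommendation):

# Minimal sketch of percentile-based weight truncation;
# w is assumed to be a vector of estimated MAIC weights
truncate_weights <- function(w, q = 0.95) {
  cutoff <- quantile(w, probs = q)
  pmin(w, cutoff)  # weights above the cutoff are capped at the cutoff
}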
In order to gain efficiency, I propose a modular extension to MAIC which uses two parametric models. One estimates the treatment assignment mechanism in the index study, the other estimates the trial assignment mechanism. The first model produces inverse probability of treatment weights that are combined with the weights produced by the second model. I term this approach two-stage matching-adjusted indirect comparison (2SMAIC).
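A rough sketch of the two stages in R follows (my illustration, not the paper's supplementary code; it assumes the two sets of weights are combined multiplicatively, which the description above suggests, and uses hypothetical names: ipd is the index-trial data frame with treatment indicator t and covariates x1 and x2, and w_trial holds the trial-assignment odds weights from standard MAIC):

# Stage 1: model the treatment assignment mechanism in the index trial
ps_model <- glm(t ~ x1 + x2, family = binomial, data = ipd)
pi_hat <- fitted(ps_model)  # estimated probabilities of receiving A (t = 1)
# Inverse probability of treatment weights
w_treat <- ifelse(ipd$t == 1, 1 / pi_hat, 1 / (1 - pi_hat))
# Stage 2 is standard MAIC, assumed to have produced w_trial;
# the final two-stage weights combine both stages
w_2smaic <- w_treat * w_trial

Note that even in an RCT, where the true assignment probabilities are known, estimating them anyway is what corrects chance covariate imbalances between arms.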
In the anchored scenario, the conventional version of MAIC relies on randomization in the index trial. In this setting, the treatment assignment mechanism (the true conditional probability of treatment among the subjects) is typically known. In addition, randomization ensures that there is no confounding on expectation. Therefore, it may seem counter-intuitive to model the treatment assignment mechanism in this study. Nevertheless, this additional step is beneficial to control for finite-sample imbalances in prognostic baseline covariates. These imbalances often arise due to chance and correcting for them leads to efficiency gains.
An advantage of 2SMAIC is that, due to incorporating a treatment assignment model, it is also applicable where the index study is observational. In this case, within-study randomization is not leveraged and concerns about internal validity must be addressed by including potential confounders of the treatment-outcome association in the treatment assignment model. The estimation procedure for the trial assignment weights does not necessarily need to be that of Signorovitch et al. [11] and alternative methods could be used [16, 22]. Further modules could be incorporated to account for missingness [35] and non-compliance [36], e.g. dropout or treatment switching, in the index trial.
I conduct a proof-of-concept simulation study to examine the finite-sample performance of 2SMAIC with respect to the standard MAIC when the index study is an RCT. The two-stage approach improves the precision and efficiency of MAIC without introducing bias. The results are consistent with previous research on the efficiency of propensity score estimators [37, 38]. Finally, the use of weight truncation in combination with MAIC is explored for the first time. Example code to implement the methodologies in R is provided in Additional file 1.
Context and data structure
We focus on the following setting, which is common in submissions to HTA agencies. Let S and T denote indicators for the assigned study and the assigned treatment, respectively. There are two separate studies that enrolled distinct sets of participants and have now been completed. The index study (S=1) compares active treatment A (T=1) versus C (T=0), e.g. standard of care or placebo. The competitor study (S=2) evaluates active treatment B (T=2) versus C (T=0). Covariate-adjusted indirect comparisons such as MAIC perform a treatment comparison in the S=2 sample, implicitly assumed to be of policy interest. We ask ourselves the question: what would be the marginal treatment effect for A versus B had these treatments been compared in an RCT conducted in S=2?
The marginal treatment effect for A vs. B is estimated on the linear predictor (e.g. mean difference, log-odds ratio or log hazard ratio) scale as:
$$\hat{\Delta}_{12}^{(2)} = \hat{\Delta}_{10}^{(2)} - \hat{\Delta}_{20}^{(2)},$$
where \(\hat {\Delta }_{10}^{(2)}\) is an estimate of the hypothetical marginal treatment effect for A vs. C in the competitor study sample, and \(\hat {\Delta }_{20}^{(2)}\) is an estimate of the marginal treatment effect of B vs. C in the competitor study sample. MAIC uses weighting to transport inferences for the marginal A vs. C treatment effect from S=1 to S=2. The estimate \(\hat {\Delta }_{10}^{(2)}\) is produced, which is then input into Eq. 1. Because the within-trial relative effect estimates are assumed statistically independent, their variances are summed to estimate the variance of the marginal treatment effect for A vs. B.
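That is, in the notation above,

$$ \hat{V}\left(\hat{\Delta}_{12}^{(2)}\right) = \hat{V}\left(\hat{\Delta}_{10}^{(2)}\right) + \hat{V}\left(\hat{\Delta}_{20}^{(2)}\right). $$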
The manufacturer submitting evidence for reimbursement has access to individual-level data \(\mathcal {D}_{AC}=({\boldsymbol {x},\boldsymbol {t},\boldsymbol {y}})\) on covariates, treatment and outcomes for the participants in its trial. Here, x is a matrix of pre-treatment baseline covariates (e.g. comorbidities, age, gender), of size n×k, where n is the total number of subjects in the study sample and k is the number of covariates. A row vector xi=(xi,1,xi,2,…,xi,k) of k covariates is recorded for each participant i=1,…,n. We let y=(y1,y2,…,yn) denote a vector of the clinical outcome of interest and t=(t1,t2,…,tn) denote a binary treatment indicator vector. We shall assume that there is no loss to follow-up or missing data on covariates, treatment and outcome in \(\mathcal {D}_{AC}\).
We consider all baseline covariates to be prognostic of the clinical outcome and select a subset of these, z⊆x, as marginal effect modifiers for A with respect to C on the linear predictor scale, with a row vector zi recorded for each patient i. In the absence of randomization, the variables in x would induce confounding between the treatment arms in the index study (internal validity bias). On the other hand, cross-trial imbalances in the variables in z induce external validity bias with respect to the competitor study sample.
Neither the manufacturer submitting the evidence nor the HTA agency evaluating it has access to IPD for the competitor trial. We let \(\mathcal {D}_{BC}=[\boldsymbol {\theta }_{\boldsymbol {x}}, \hat {\Delta }_{20}^{(2)}, \hat {V}(\hat {\Delta }_{20}^{(2)})]\) represent the published ALD that is available for this study. No patient-level covariates, treatment or outcomes are available. Here, θx denotes a vector of means or proportions for the covariates, although higher-order moments such as variances may also be available. An assumption is that a sufficiently rich set of baseline covariates has been measured for the competitor study. Namely, that summaries for the subset θz⊆θx of covariates that are marginal effect modifiers are described in the table of baseline characteristics in the study publication.
Also available is an internally valid estimate \(\hat {\Delta }_{20}^{(2)}\) of the marginal treatment effect for B vs. C in the competitor study sample, and an estimate \(\hat {V}(\hat {\Delta }_{20}^{(2)})\) of its variance. These are either directly reported in the publication or, assuming that the competitor study is a well-conducted RCT, derived from crude aggregate outcomes in the literature.
Matching-adjusted indirect comparison
In MAIC, IPD from the index study are weighted so that the moments of selected covariates are balanced with respect to the published moments of the competitor study. The weight wi for each participant i in the index trial is estimated using a logistic regression:
$$\ln(w_{i}) = \ln[w(\boldsymbol{z}_{i})] = \ln \left[ \frac{Pr(S=2 \mid \boldsymbol{z}_{i})}{1 - Pr(S=2 \mid \boldsymbol{z}_{i})} \right] = \alpha_{0} + \boldsymbol{z}_{i}\boldsymbol{\alpha}_{\boldsymbol{1}},$$
where α0 is the model intercept and α1 is a vector of model coefficients. While most applications of weighting, e.g. to control for confounding in observational studies, construct "inverse probability" weights for treatment assignment, MAIC uses "odds weighting" [39, 40] to model trial assignment. The weight wi represents the conditional odds that an individual i with covariates zi, selected as marginal effect modifiers, is enrolled in the competitor study. Alternatively, the weight represents the inverse conditional odds that the individual is enrolled in the index study.
The logistic regression parameters in Eq. 2 cannot be derived using conventional methods such as maximum-likelihood estimation, due to unavailable IPD for the competitor trial. Signorovitch et al. propose using a method of moments instead to enforce covariate balance across studies [11]. Prior to balancing, the IPD covariates are centered on the means or proportions published for the competitor trial. The centered covariates for subject i in the IPD are defined as \(\boldsymbol {z}^{\boldsymbol {*}}_{i} = \boldsymbol {z}_{i} - \boldsymbol {\theta }_{\boldsymbol {z}}\).
Weight estimation involves minimizing the objective function:
$$Q(\boldsymbol{\alpha}_{\boldsymbol{1}}) = \sum\limits_{i=1}^{n} \exp \left(\boldsymbol{z}^{\boldsymbol{*}}_{i} \boldsymbol{\alpha}_{\boldsymbol{1}}\right).$$
The function Q(α1) is convex [11] and can be minimized using standard convex optimization algorithms [41]. Provided that there is adequate overlap, minimization yields the unique finite solution: \(\hat {\boldsymbol {\alpha }}_{\boldsymbol {1}}=\text {argmin}[Q(\boldsymbol {\alpha }_{\boldsymbol {1}})]\). Feasible solutions do not exist if all the values observed for a covariate in z lie on the same side of (all above, or all below) its corresponding element in θz [22].
After minimizing the objective function in Eq. 3, the weight estimated for the i-th participant in the IPD is:
$$\hat{w}_{i} = \exp(\boldsymbol{z}^{\boldsymbol{*}}_{i}\hat{\boldsymbol{\alpha}}_{\boldsymbol{1}}).$$
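As a minimal illustration, the method of moments can be implemented in a few lines of base R. The input names below are hypothetical, and the complete implementation is provided in Additional file 1.

```r
# Sketch of method-of-moments weight estimation (Eqs. 3-4). Hypothetical
# inputs: z_ipd, an n x k matrix of the selected effect modifiers in the
# index trial IPD; theta_z, the length-k vector of competitor study means.
estimate_maic_weights <- function(z_ipd, theta_z) {
  z_star <- sweep(z_ipd, 2, theta_z)                 # center on theta_z
  Q <- function(alpha1) sum(exp(z_star %*% alpha1))  # objective in Eq. 3
  opt <- optim(rep(0, ncol(z_star)), Q, method = "BFGS")
  as.vector(exp(z_star %*% opt$par))                 # weights in Eq. 4
}
```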
The estimated weights are relative: any set of weights proportional to these is equally valid [22]. Weighting reduces the ESS of the index trial. The approximate ESS after weighting is typically estimated as \(\left (\sum _{i}^{n}\hat {w}_{i}\right)^{2}/\sum _{i}^{n}\hat {w}_{i}^{2}\) [5, 42]. Low values of the ESS suggest that a few influential participants with disproportionate weights dominate the reweighted sample.
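In R, this diagnostic is a one-liner, assuming a hypothetical vector of estimated weights w_hat:

```r
ess <- sum(w_hat)^2 / sum(w_hat^2)  # approximate ESS after weighting
```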
Consequently, marginal mean outcomes for treatments A and C in the competitor study sample (S=2) are estimated as the weighted average:
$$\hat{\mu}^{(2)}_{t} = \frac{\sum_{i=1}^{n_{t}} y_{i,t} \hat{w}_{i,t}}{\sum_{i=1}^{n_{t}} \hat{w}_{i,t}},$$
where nt denotes the number of participants assigned to treatment t∈{0,1} of the index trial, yi,t represents the observed clinical outcome for subject i in arm t, and \(\hat {w}_{i,t}\) is the weight assigned to patient i under treatment t. For binary outcomes, \(\hat {\mu }^{(2)}_{t}\) would estimate the expected marginal outcome probability under treatment t. Absolute outcome estimates may be desirable as inputs to health economic models [25] or in unanchored comparisons made in the absence of a common control group.
In anchored comparisons, the objective is to estimate a relative effect for A vs. C, as opposed to absolute outcomes. Indirect treatment comparisons are typically conducted on the linear predictor scale [3, 4, 6]. Consequently, this scale is also used to define effect modification, which is scale specific [5].
One can convert the mean absolute outcome predictions produced by Eq. 5 from the natural scale to the linear predictor scale, and compute the marginal treatment effect for A vs. C in S=2 as the difference between the average linear predictions:
$$\hat{\Delta}_{10}^{(2)} = g \left(\hat{\mu}_{1}^{(2)} \right) - g \left(\hat{\mu}_{0}^{(2)} \right).$$
Here, g(·) is an appropriate link function: the identity link produces a mean difference for continuous-valued outcomes, and the logit link, \(\text {logit} \left (\hat {\mu }^{(2)}_{t} \right) = \ln \left [\hat {\mu }^{(2)}_{t}/\left (1-\hat {\mu }^{(2)}_{t} \right)\right ]\), generates a log-odds ratio for binary outcomes. Different, potentially more interpretable, choices such as relative risks and risk differences are possible for the marginal contrast. One can map to these scales by manipulating \(\hat {\mu }_{1}^{(2)}\) and \(\hat {\mu }_{0}^{(2)}\) differently.
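For a binary outcome, for example, the computation reduces to weighted means and the logit transform; a sketch with hypothetical vectors y, t and w_hat:

```r
mu1_hat <- weighted.mean(y[t == 1], w_hat[t == 1])  # Eq. 5, arm A
mu0_hat <- weighted.mean(y[t == 0], w_hat[t == 0])  # Eq. 5, arm C
delta_10 <- qlogis(mu1_hat) - qlogis(mu0_hat)       # Eq. 6: marginal log-odds ratio
```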
Alternatively, the weights generated by Eq. 4 can be used to fit a simple regression of outcome on treatment to the IPD [43]. The model can be fitted using maximum-likelihood estimation, weighting the contribution of each individual i to the likelihood by \(\hat {w}_{i}\). In this approach, the treatment coefficient of the fitted weighted model is the estimated marginal treatment effect \(\hat {\Delta }_{10}^{(2)}\) for A vs. C in S=2.
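For a continuous outcome and the mean difference, this weighted-regression route is a one-line model fit; ipd is a hypothetical data frame with columns y and t:

```r
fit <- lm(y ~ t, data = ipd, weights = w_hat)  # weighted outcome regression
delta_10 <- coef(fit)["t"]                     # marginal mean difference, A vs. C
```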
The original approach to MAIC uses a robust sandwich-type variance estimator [44] to compute the standard error of \(\hat {\Delta }_{10}^{(2)}\). This relies on large-sample properties and has been shown to understate variability when the ESS is small, both in a previous simulation study investigating MAIC [7] and in other settings [45,46,47,48]. In addition, most implementations of the sandwich estimator, e.g. when fitting the weighted regression [49], ignore the estimation of the trial assignment model, treating the weights as fixed quantities. While analytic expressions that incorporate the estimation of the weights could be derived, a practical alternative is to resample via the ordinary non-parametric bootstrap [23, 50, 51], re-estimating the weights and the marginal treatment effect for A vs. C in each bootstrap iteration. Point estimates, standard errors and interval estimates can then be calculated directly from the bootstrap replicates.
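A sketch of this bootstrap in base R, re-estimating the weights in each resample; the data frame ipd, its effect-modifier columns and theta_z are hypothetical, and estimate_maic_weights is the sketch given earlier:

```r
boot_reps <- replicate(2000, {
  b <- ipd[sample(nrow(ipd), replace = TRUE), ]            # resample IPD rows
  w <- estimate_maic_weights(as.matrix(b[, c("z1", "z2")]), theta_z)
  coef(lm(y ~ t, data = b, weights = w))["t"]              # A vs. C effect
})
delta_10 <- mean(boot_reps)  # point estimate across replicates
se_10 <- sd(boot_reps)       # bootstrap standard error
```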
We briefly describe the assumptions required by MAIC and their implications:
Internal validity of the effect estimates derived from the index and competitor studies. This is certainly feasible where the studies are RCTs because randomization ensures exchangeability over treatment assignment on expectation. While internal validity may hold in RCTs, it is a more stringent condition for observational studies. The absence of informative measurement error, missing data, non-adherence, etc. is assumed.
Consistency under parallel studies [52]. There is only one well-defined version of each treatment [53] or any variations in the versions of treatment are irrelevant [54, 55]. This applies to the common comparator C in particular.
Conditional transportability (exchangeability) of the marginal treatment effect for A vs. C from the index to the competitor study [39]. Namely, trial assignment does not affect this measure, conditional on z. Prior research has referred to this assumption as the conditional constancy of relative effects [5, 6, 9]. It is plausible if z comprises all of the covariates that are considered to modify the marginal treatment effect for A vs. C (i.e., there are no unmeasured effect modifiers) [56,57,58] (see Footnote 1).
Sufficient overlap. The ranges of the selected covariates in S=1 should cover their respective moments in S=2. Overlap violations can be deterministic or random. The former arise structurally, due to non-overlapping trial target populations (eligibility criteria). The latter arise empirically due to chance, particularly where sample sizes are small [60]. Therefore, overlap can be assessed based on absolute sample sizes. The ESS is a convenient one-number diagnostic.
Correct specification of the S=2 covariate distribution. Analysts can only approximate the joint distribution because IPD are unavailable for the competitor study. Covariate correlations are rarely published for S=2 and therefore cannot be balanced by MAIC. In that case, they are assumed equal to those in the pseudo-sample formed by weighting the IPD [5].
I make a brief remark on the specification of the parametric trial assignment model in Eq. 2. This does not necessarily need to be correct as long as it balances all the covariates, and potential transformations of these covariates, e.g. polynomial transformations and product terms, that modify the marginal treatment effect for A vs. C [9, 23]. Squared terms are often included to balance variances for continuous covariates [11] but initial simulation studies do not report performance benefits [14, 17]. This is probably due to greater reductions in ESS and precision [25].
The identification of effect modifiers will likely require prior background knowledge and substantive domain expertise. Bias-variance trade-offs are also important. Failing to include an influential effect modifier in z, whether imbalanced across studies or not, leads to bias in S=2 [5, 40, 61]. On the other hand, the inclusion of covariates that are not effect modifiers reduces overlap, thereby increasing the chance of extreme weights. This decreases precision without improving the potential for bias reduction [6, 62], even if the covariates are strongly imbalanced across studies, i.e., even if they predict, or are associated with, trial assignment.
Put simply, as is the case for other weighting-based methods [63, 64], MAIC is potentially unbiased if either the trial assignment mechanism or the outcome-generating mechanism is known, with the latter leading to better performance due to reduced variance and increased efficiency.
While the standard MAIC models the trial assignment mechanism, two-stage MAIC (2SMAIC) additionally models the treatment assignment mechanism in the index trial. The treatment assignment model is estimated to produce inverse probability of treatment weights. Then, these are combined with the odds weights generated by the standard MAIC. The resulting weights seek to balance covariate moments between the studies and the treatment arms of the index trial.
For the treatment assignment mechanism, a propensity score logistic regression of treatment on the covariates is fitted to the IPD:
$$\text{logit}[e_{i}] = \text{logit}[e(\boldsymbol{x}_{i})] = \text{logit}[Pr(T=1\mid \boldsymbol{x}_{i})] = \beta_{0} + \boldsymbol{x}_{i} \boldsymbol{\beta}_{\boldsymbol{1}},$$
where β0 and β1 parametrize the logistic regression. The propensity score ei is defined as the conditional probability that participant i is assigned treatment A versus treatment C given measured covariates xi [65].
Having fitted the model in Eq. 7, e.g. using maximum-likelihood estimation, propensity scores for the subjects in the index trial are predicted using:
$$\hat{e}_{i} = \text{expit}[\hat{\beta}_{0} + \boldsymbol{x}_{i} \hat{\boldsymbol{\beta}}_{\boldsymbol{1}}],$$
where \(\text {expit}(\cdot)=\exp (\cdot)/[1+\exp (\cdot)], \hat {\beta }_{0}\) and \(\hat {\boldsymbol {\beta }}_{\boldsymbol {1}}\) are point estimates of the logistic regression parameters, and \(\hat {e}_{i}\) is an estimate of ei. Inverse probability of treatment weights are constructed by taking the reciprocal of the estimated conditional probability of the treatment assigned in the index study [37]. That would be \(1/\hat {e}_{i}\) for units under treatment A and \(1/(1-\hat {e}_{i})\) for units under treatment C.
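In R, the treatment assignment stage amounts to a standard logistic regression; a sketch assuming hypothetical baseline covariate columns x1, x2, x3 in the index trial data frame ipd:

```r
ps_fit <- glm(t ~ x1 + x2 + x3, family = binomial, data = ipd)  # Eq. 7
e_hat <- predict(ps_fit, type = "response")                     # Eq. 8
iptw <- ifelse(ipd$t == 1, 1 / e_hat, 1 / (1 - e_hat))          # inverse probability of treatment weights
```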
Consequently, the weights produced by the standard MAIC (Eq. 4) are rescaled by the estimated inverse probability of treatment weights. The contribution of each subject i in the IPD is weighted by:
$$\hat{\omega}_{i} = \frac{t_{i} \hat{w}_{i}}{\hat{e}_{i}} + \frac{(1-t_{i}) \hat{w}_{i}}{(1-\hat{e}_{i})}.$$
The weights \(\{ \hat {w}_{i}, i=1,\dots,n \}\) estimated by the standard MAIC are odds, constrained to be positive. These balance the index and competitor studies in terms of the selected effect modifier moments. The estimated propensity scores \(\{ \hat {e}_{i},\, i=1,\dots,n \}\) are probabilities bounded away from zero and one. Therefore, the weights \(\{ \hat {\omega }_{i},\, i=1,\dots,n \}\) produced by 2SMAIC in Eq. 9 are also constrained to be positive. These weights achieve balance in effect modifier moments across studies, but also seek to balance covariate moments between the index trial's treatment groups.
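Because Eq. 9 multiplies each subject's odds weight by their inverse probability of treatment weight, the combination is a single elementwise product in R (w_hat and iptw are the hypothetical vectors from the sketches above):

```r
omega_hat <- w_hat * iptw  # 2SMAIC weights (Eq. 9)
```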
Marginal mean outcomes for treatments A and C in the competitor study sample are estimated as the weighted average of observed outcomes:
$$\hat{\mu}^{(2)}_{t} = \frac{\sum_{i=1}^{n_{t}} y_{i,t} \hat{\omega}_{i,t}}{\sum_{i=1}^{n_{t}} \hat{\omega}_{i,t}},$$
where \(\hat {\omega }_{i,t}\) is the weight assigned to patient i under treatment t. One can convert the mean absolute outcome predictions generated by Eq. 10 to the linear predictor scale, and compute the marginal treatment effect for A vs. C in S=2 as the difference between the average linear predictions, as per Eq. 6. Alternatively, a weighted regression of outcome on treatment alone can be fitted to the IPD, in which case the treatment coefficient of the fitted model represents the estimated marginal treatment effect \(\hat {\Delta }_{10}^{(2)}\) for A vs. C in S=2.
Inference can be based on a robust sandwich-type variance estimator or on resampling approaches such as the non-parametric bootstrap. As noted previously, the sandwich variance estimator is biased downwards when the ESS after weighting is small, leading to overprecision. In practice, the non-parametric bootstrap is a preferred option, re-estimating both the trial assignment model and the treatment assignment model in each iteration. This approach explicitly accounts for the estimation of the weights and is expected to perform better where the ESS is small.
It may seem counter-intuitive to estimate the treatment assignment mechanism when the index trial is an RCT. The randomized design implies that the true propensity scores {ei, i=1,…,n} are fixed and known. For instance, consider a marginally randomized two-arm trial with a 1:1 treatment allocation ratio. The trial investigators have determined in advance that the probability of being assigned to active treatment versus control is ei=0.5 for all i.
The rationale for estimating the propensity scores is the following. Randomization guarantees that there is no confounding on expectation [66]. Nevertheless, covariate balance is a large-sample property, and one may still observe residual covariate imbalances between treatment groups due to chance, especially when the trial sample size is small [67]. As formulated by Senn [66], "over all randomizations the groups are balanced; for a particular randomization they are unbalanced." The use of estimated propensity scores allows correction for random finite-sample imbalances in prognostic baseline covariates. In the RCT literature, inverse probability of treatment weighting is an established approach for covariate adjustment [68], and has been shown to increase precision, efficiency and power relative to unadjusted analyses in the estimation of marginal treatment effects [48, 69].
Thus far, the use of anchored MAIC has been limited to situations where the index trial is an RCT. 2SMAIC can be used when the index study is observational, provided that the baseline covariates in x offer sufficient control for confounding. In non-randomized studies, the true propensity score for each participant in the index study is unknown, and additional conditions are needed to produce internally valid estimates of the marginal treatment effect for A vs. C. These are: (1) conditional exchangeability over treatment assignment [70]; and (2) positivity of treatment assignment [60]. Randomized trials tend to meet these assumptions by design. The assumptions have conceptual parallels with the conditional transportability and overlap conditions previously described for MAIC.
The first assumption indicates that the potential outcomes of subjects in each treatment group are independent of the treatment assigned after conditioning on the selected covariates. It relies on all confounders of the effect of treatment on outcome being measured and accounted for [71]. The second assumption indicates that, for every participant in the index study, the probability of being assigned to either treatment is positive, conditional on the covariates selected to ensure exchangeability [60]. This requires overlap between the joint covariate distributions of the subjects under treatment A and under treatment C. This assumption is threatened if there are few or no individuals from either treatment group in certain covariate subgroups/strata.
Simulation study
The objectives of the simulation study are to provide proof-of-principle for 2SMAIC and to benchmark its statistical performance against that of MAIC in an anchored setting where the index study is an RCT. We also investigate whether weight truncation can improve the performance of MAIC and 2SMAIC by reducing the variance caused by extreme weights.
Each method is assessed using the following frequentist characteristics [72]: (1) unbiasedness; (2) precision; (3) efficiency (accuracy); and (4) randomization validity (valid confidence interval estimates). The selected performance metrics specifically evaluate these criteria. The ADEMP (Aims, Data-generating mechanisms, Estimands, Methods, Performance measures) framework [72] is used to describe the simulation study design. Example R code implementing the methodologies is provided in Additional file 1. All simulations and analyses have been conducted in R software version 4.1.1 [73] (see Footnote 2).
Data-generating mechanisms
We consider continuous outcomes using the mean difference as the measure of effect. For the index and competitor studies, outcome yi for participant i is generated as:
$$y_{i} = \beta_{0} + \boldsymbol{x}_{i}\boldsymbol{\beta}_{\boldsymbol{1}} + \left(\beta_{t} + \boldsymbol{x}_{i}\boldsymbol{\beta}_{\boldsymbol{2}} \right)\mathbbm{1}(t_{i}=1) + \epsilon_{i},$$
using the notation of the index study data. Each xi contains the values of three correlated continuous covariates, which have been simulated from a multivariate normal distribution with pre-specified means and covariance matrix. There is some positive correlation between the three covariates, with pairwise Pearson correlation levels set to 0.2. The covariates have main effects and are prognostic of individual-level outcomes independently of treatment. They also have first-order covariate-treatment product terms, thereby modifying the conditional (and marginal) effects of both A and B versus C on the mean difference scale, i.e., z is equivalent to x. The term εi is an error term for subject i generated from a standard (zero-mean, unit-variance) normal distribution.
The main "prognostic" coefficient is set to β1,k=2 for each covariate k, a strong covariate-outcome association. The interaction coefficient is set to β2,k=1 for each covariate k, indicating notable effect modification. We set the intercept β0=5. Active treatments A and B are assumed to have the same set of effect modifiers with respect to the common comparator, and identical interaction coefficients for each effect modifier. Consequently, the shared (conditional) effect modifier assumption holds [5]. The main treatment coefficient βt=−2 is considered a strong conditional treatment effect versus the control at baseline (when the covariate values are zero).
The continuous outcome may represent a biomarker indicating disease severity. The covariates are comorbidities associated with higher values of the biomarker and which interact with the active treatments to hinder their effect versus the control.
It is assumed that the index and competitor studies are simple, marginally randomized, RCTs. The number of participants in the competitor RCT is 300, with a 1:1 allocation ratio for active treatment vs. control. For this study, individual-level covariates are summarized as means. These would be available to the analyst in a table of baseline characteristics in the study publication. Individual-level outcomes are aggregated by fitting a simple linear regression of outcome on treatment to produce an unadjusted estimate of the marginal mean difference for B vs. C, with its corresponding nominal standard error. This information would also be available in the published study.
We adopt a 2×3 factorial arrangement: two index trial sample sizes crossed with three overlap settings, for a total of six simulation scenarios. The following parameter values are varied:
Sample sizes of n∈{140,200} are considered for the index trial, with an allocation ratio of 1:1 for intervention A vs. C. The sample sizes are small but not unusual in applications of MAIC in HTA submissions [10]. It is anticipated that smaller trials are subject to a greater chance of covariate imbalance than larger trials [74].
The level of (deterministic) covariate overlap. Covariates follow normal marginal distributions in both studies. For the competitor trial, the marginal distribution means are fixed at 0.6. For the index trial, the mean μk∈{0.5,0.4,0.3} for each covariate k. These settings yield strong, moderate and poor overlap, respectively. The standard deviations in both studies are fixed at 0.4, i.e., a one standard deviation increase in each covariate is associated with a 0.8 unit increase in the outcome. Greater covariate imbalances across studies lead to poorer overlap between the trials' target populations, which translates into more variable weights and a lower ESS. Unless otherwise stated, when describing the results of the simulation study, "covariate overlap" relates to deterministic overlap between the trials' target populations and not to random violations arising due to small sample sizes.
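As a sketch of this data-generating mechanism under the parameter values stated above (here, n=200 with strong overlap; variable names are illustrative and rbinom is used as a simple stand-in for 1:1 marginal randomization):

```r
library(MASS)  # for mvrnorm
n <- 200; k <- 3
mu_x <- rep(0.5, k); sd_x <- 0.4; rho <- 0.2
Sigma <- sd_x^2 * ((1 - rho) * diag(k) + rho * matrix(1, k, k))
x <- mvrnorm(n, mu = mu_x, Sigma = Sigma)  # correlated baseline covariates
t <- rbinom(n, 1, 0.5)                     # 1:1 marginal randomization (sketch)
y <- as.vector(5 + x %*% rep(2, k) + (-2 + x %*% rep(1, k)) * t + rnorm(n))
```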
Estimands
The target estimand is the marginal mean difference for A vs. B in S=2. The treatment coefficient βt=−2 is the same for both A vs. C and B vs. C, and the shared (conditional) effect modifier assumption holds. Therefore, the true conditional treatment effects for A vs. C and B vs. C in S=2 are the same (−2+3×(0.6×1)=−0.2). Because mean differences are collapsible, the true marginal treatment effects for A vs. C and B vs. C coincide with the corresponding conditional estimands. The true marginal effect for A vs. B in S=2 is a composite of that for A vs. C and B vs. C, which cancel out. Consequently, the true marginal mean difference for A vs. B in S=2 is zero.
Note that all the methods being compared conduct the same unadjusted analysis to estimate the marginal treatment effect of B vs. C. Because the competitor study is a randomized trial, this estimate should be unbiased with respect to the corresponding marginal estimand in S=2. Therefore, differences in performance between the methods will arise from the comparison between A and C, for which marginal and conditional estimands are non-null.
Each simulated dataset is analyzed using the following methods:
Matching-adjusted indirect comparison (MAIC). The trial assignment model in Eq. 2 contains main effect terms for all three effect modifiers — only covariate means are balanced. The objective function in Eq. 3 is minimized using BFGS [41]. The weights estimated by Eq. 4 are used to fit a weighted simple linear regression of outcome on treatment to the index trial IPD.
Two-stage matching-adjusted indirect comparison (2SMAIC). We follow the same steps as for the standard MAIC. In addition, the treatment assignment model in Eq. 7 is fitted to the index study IPD, including main effect terms for all three baseline covariates. Propensity score estimates are generated by Eq. 8 and combined with the weights generated by Eq. 4 as per Eq. 9. The resulting weights are used to fit a weighted simple linear regression of outcome on treatment to the index trial IPD.
Truncated matching-adjusted indirect comparison (T-MAIC). This approach is identical to MAIC but the highest estimated weights (Eq. 4) are truncated using a 95th percentile cutpoint, following Susukida et al. [75, 76], Webster-Clark et al. [77], and Lee et al. [31]. Specifically, all weights above the 95th percentile are replaced by the value of the 95th percentile.
Truncated two-stage matching-adjusted indirect comparison (T-2SMAIC). This approach is identical to 2SMAIC but all the estimated weights (Eq. 9) larger than the 95th percentile are set equal to the 95th percentile.
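The truncation step shared by T-MAIC and T-2SMAIC is straightforward; a minimal sketch:

```r
# Cap all weights above the p-th percentile at the percentile value
truncate_weights <- function(w, p = 0.95) {
  pmin(w, quantile(w, p))
}
```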
All approaches use the ordinary non-parametric bootstrap to estimate the variance of the A vs. C marginal treatment effect. 2,000 resamples of each simulated dataset are drawn with replacement [50, 78]. Due to patient-level data limitations for the competitor study, only the IPD of the index trial are resampled in the implementation of the bootstrap. The average marginal mean difference for A vs. C in S=2 is computed as the average across the bootstrap resamples. Its standard error is the standard deviation across these resamples. For the "one-stage" MAIC approaches, each bootstrap iteration re-estimates the trial assignment model. For the "two-stage" MAIC approaches, both the trial assignment and the treatment assignment model are re-estimated in each iteration.
All methods perform the indirect treatment comparison in a final stage, where the results of the study-specific analyses are combined. The marginal mean difference for A vs. B is obtained by directly substituting the point estimates \(\hat {\Delta }_{10}^{(2)}\) and \(\hat {\Delta }_{20}^{(2)}\) in Eq. 1. Its variance is estimated by adding the point estimates of the variance for the within-study treatment effect estimates. Wald-type 95% confidence interval estimates are constructed using normal distributions.
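A sketch of this final stage, given hypothetical study-specific point estimates (delta_10, delta_20) and variance estimates (var_10, var_20):

```r
delta_12 <- delta_10 - delta_20  # Eq. 1
var_12 <- var_10 + var_20        # within-study estimates assumed independent
ci_95 <- delta_12 + c(-1, 1) * qnorm(0.975) * sqrt(var_12)  # Wald-type 95% CI
```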
We generate 5,000 simulated datasets per simulation scenario. For each scenario and analysis method, the following performance metrics are computed over the 5,000 replicates: (1) bias in the estimated treatment effect; (2) empirical standard error (ESE); (3) mean square error (MSE); and (4) empirical coverage rate of the 95% confidence interval estimates. These metrics are defined explicitly in prior work [7, 72].
The bias evaluates aim 1 of the simulation study. It is equal to the average treatment effect estimate across the simulations because the true estimand is zero (\(\Delta _{12}^{(2)}=0\)). The ESE targets aim 2 and is the standard deviation of the treatment effect estimates over the 5,000 runs. The MSE represents the average squared bias plus the variance across the simulated replicates. It measures overall efficiency (aim 3), accounting for both bias (aim 1) and precision (aim 2). Coverage assesses aim 4, and is computed as the percentage of estimated 95% confidence intervals that contain the true value of the estimand.
We have used 5,000 replicates per scenario based on the analysis method and scenario with the largest long-run variability (standard MAIC with n=140 and poor overlap). Assuming \(\text {SD}(\hat {\Delta }_{12}^{(2)}) \leq 0.53\), the Monte Carlo standard error (MCSE) of the bias is at most \(\sqrt {\text {Var}(\hat {\Delta }_{12}^{(2)})/N_{sim}}=\sqrt {0.28/5000}=0.007\) under 5,000 simulations per scenario, and the MCSE of the coverage, based on an empirical coverage rate of 95% is \(\left (\sqrt {(95 \times 5)/5000}\right)\%=0.31\%\), with the worst-case being 0.71% under 50% coverage. These are considered adequate levels of simulation uncertainty.
Performance measures for all methods and simulation scenarios are reported in Fig. 1. The strong overlap settings are at the top (in ascending order of index trial sample size), followed by the moderate overlap settings and the poor overlap settings at the bottom. For each data-generating mechanism, there is a ridgeline plot visualizing the spread of point estimates for the marginal A vs. B treatment effect over the 5,000 simulation replicates. Below each plot, a table summarizing the performance metrics of each method is displayed. MCSEs for each metric, used to quantify the simulation uncertainty, have been computed and are presented in parentheses alongside the average of each performance measure. These are considered negligible due to the large number of simulated datasets per scenario. In Fig. 1, Cov denotes the empirical coverage rate of the 95% confidence interval estimates.
Fig. 1 Simulation study results: point estimates of the treatment effect and performance metrics for all methods and simulation scenarios
In the most extreme scenario (n=140 and poor covariate overlap), weights could not be estimated for 1 of the 5,000 simulated datasets. This was due to total separation: empirically, all the values observed in the index trial for one of the baseline covariates were below the competitor study mean. Therefore, there were no feasible solutions minimizing the objective function in Eq. 3. The affected replicate was discarded, and 4,999 simulated datasets were analyzed in the corresponding scenario. With respect to the treatment assignment model, empirical overlap between treatment arms was always excellent due to randomization in the index trial.
Even with the small index trial sample sizes, bias is similarly low for MAIC and 2SMAIC without truncation in all simulation scenarios. There is a slight increase in bias as the ESS after weighting decreases, with the bias of highest magnitude occurring with n=140 and poor covariate overlap (the scenario with the lowest ESS after weighting) for MAIC (-0.041) and 2SMAIC (-0.031). In absolute terms, the bias of 2SMAIC is smaller than that of MAIC in all simulation scenarios. For 2SMAIC, it is within Monte Carlo error of zero in all scenarios except in the most extreme setting, mentioned earlier, and in the setting with n=200 and moderate overlap (-0.008). Of all methods, 2SMAIC produces the lowest bias in every simulation scenario.
Weight truncation increases absolute bias in all scenarios. T-MAIC and T-2SMAIC consistently exhibit greater bias than MAIC and 2SMAIC. When overlap is strong, truncation induces only slight bias. As overlap is reduced, the bias induced by truncation is more noticeable, particularly in the n=140 settings. For instance, the bias for T-MAIC and T-2SMAIC in the scenarios with poor overlap is substantial (for n=140, 0.157 and 0.160, respectively; for n=200, 0.149 and 0.153). For the truncated methods, the magnitude of the bias also appears to increase as the ESS after weighting decreases.
As expected, all methods incur precision losses as the number of subjects in the index trial and covariate overlap decrease. Even though the index trial is randomized, 2SMAIC increases precision, as measured by the ESE, with respect to MAIC in every simulation scenario. Reductions in ESE are more dramatic in the n=140 settings than in the n=200 settings. This is attributed to a greater chance of empirical covariate imbalances with smaller sample sizes. Interestingly, reduced covariate overlap seems to attenuate the benefit of incorporating the second (treatment assignment) stage. This is likely due to precision gains being offset by the presence of extreme weights, which lead to large reductions in ESS and inflate the ESE. The same trends are revealed for T-2SMAIC with respect to T-MAIC across the simulation scenarios. Both "two-stage" versions have reduced ESEs compared to their "one-stage" counterparts in all scenarios.
Weight truncation decreases the ESE across all simulation scenarios for one-stage and two-stage MAIC. This is to be expected as the influence of outlying weights is reduced. When overlap is strong, truncation offers only a small improvement in precision. This has little impact in comparison to the inclusion of a second stage in MAIC. For instance, under strong overlap and n=140, the ESE for MAIC and 2SMAIC is 0.516 and 0.386, respectively; compared to ESEs of 0.489 and 0.371 for the corresponding truncated versions.
The precision gains of weight truncation become more considerable as overlap weakens and the extremity of the weights increases. When overlap is poor, truncation reduces the ESE more sharply than the incorporation of a second stage in MAIC. For example, under poor overlap and n=140, the ESE of MAIC and 2SMAIC is 0.767 and 0.703, respectively, and that of the truncated versions is 0.563 and 0.519. Unsurprisingly, the combination of incorporating the second stage and truncating the weights is most effective at variance reduction. As n decreases, precision seems to be more markedly reduced for the one-stage approaches than for the two-stage approaches, and for the untruncated approaches than for the truncated ones.
Where covariate overlap is strong, T-2SMAIC has the highest precision, followed by 2SMAIC, T-MAIC and MAIC. Where covariate overlap is moderate or poor, T-2SMAIC has the highest precision, followed by T-MAIC, 2SMAIC and MAIC.
As per the ESE, MSE values decrease for all methods as the index trial sample size and covariate overlap increase. In agreement with the trends for precision, the two-stage versions of MAIC increase efficiency with respect to the corresponding one-stage methods in all scenarios, particularly in the n=140 settings. Efficiency gains for the two-stage approaches are stronger where covariate overlap is strong and become less noticeable as covariate overlap weakens, due to extreme weights. For instance, with strong overlap and n=200, MSEs for MAIC and 2SMAIC are 0.205 and 0.127, respectively. With poor overlap and n=200, these are 0.459 and 0.393, respectively.
Differences in MSE between methods are driven more by comparative precision than bias. This is expected in the strong overlap scenarios, where the bias for all methods is negligible, but also occurs in the poor overlap scenarios. The precision gains of truncation more than counterbalance the increase in bias when the variability of the weights is high. As overlap decreases, the relative efficiency of the truncated versus the untruncated approaches is markedly improved. For example, with poor overlap and n=200, the MSE of T-MAIC and T-2SMAIC is 0.263 and 0.233, respectively (compared to MSEs of 0.459 and 0.393 for MAIC and 2SMAIC).
T-2SMAIC is the most efficient method and MAIC is the least efficient method across all simulation scenarios in terms of MSE. Where covariate overlap is strong, T-2SMAIC yields the highest efficiency, followed by 2SMAIC, T-MAIC and MAIC. Where overlap is poor, T-2SMAIC has the highest efficiency, followed by T-MAIC, 2SMAIC and MAIC. Where overlap is moderate, 2SMAIC and T-MAIC have comparable efficiency.
From a frequentist perspective, 95% confidence interval estimates should include the true estimand 95% of the time. Namely, empirical coverage rates should equal the nominal coverage rates to ensure appropriate type I error rates for testing a "no effect" null hypothesis. Theoretically, due to our use of 5,000 Monte Carlo simulations per scenario, empirical coverage rates are statistically significantly different to the desired 0.95 if they are under 0.944 or over 0.956.
Empirical coverage rates for MAIC are statistically significantly different to the nominal coverage rate in all but one scenario: that with strong overlap and n=200. Where covariate overlap is strong or moderate, all other methods exhibit empirical coverage rates that are very close to the advertised nominal values (none of the differences is statistically significant, except for T-MAIC in the scenario with strong overlap and n=140).
There is discernible undercoverage for all methods when overlap is poor. This is particularly the case for the approaches without truncation. For instance, for the smallest sample size (n=140) with poor overlap, the empirical coverage rate is 0.900 for MAIC and 0.917 for 2SMAIC. These anti-conservative inferences could arise from the use of normal distribution-based confidence intervals when the ESS after weighting is small. While the large-sample normal approximation produces asymptotically valid inferences, a reasonable alternative in small ESS scenarios could be the use of a t-distribution. An open question is how to choose the degrees of freedom of the t-distribution.
Interestingly, coverage drops are larger for the untruncated approaches than for the truncated approaches as overlap weakens. This is surprising because the truncated methods induce sizeable bias in the poor overlap settings, and one would have expected coverage rates to be degraded further by this bias. Weight truncation has improved coverage rates in another simulation study in a different context [31]. This warrants further investigation. Overcoverage is not a problem for any of the methods as the empirical coverage rates never rise above 0.956.
Limitations of simulation study
In all simulation scenarios, two-stage methods offer enhanced precision and efficiency with respect to one-stage methods. These gains are likely linked to the prognostic strength of the baseline covariates included in the treatment assignment model. We have assumed, as is typically the case in practice, that the baseline covariates are prognostic of outcome. Less notable increases in precision and efficiency are expected when covariate-outcome associations are lower.
All approaches depend on the critical assumption of conditional transportability over trials. Given the somewhat arbitrary and unclear process driving selection into different studies in our context (in reality, there is not a formal assignment process determining whether subjects are in study sample S=1 or S=2), I have not specified a true trial assignment mechanism in the simulation study. Nevertheless, the true outcome-generating mechanism imposes linearity and additivity assumptions in the covariate-outcome associations and the treatment-by-covariate interactions. Conditional transportability holds because the trial assignment model balances means for all the covariates that modify the marginal treatment effect of A vs. C.
In real-life scenarios, it is entirely possible that more complex relationships underlie the outcome-generating process. These would potentially require balancing higher-order moments, covariate-by-covariate interactions and non-linear transformations of the covariates. In practice, sensitivity analyses will be required to explore whether there are discrepancies in the results produced by different model specifications.
The methods evaluated in this article focus on correcting for imbalances in baseline covariates, i.e., the 'P' in the PICO (Population, Intervention, Comparator, Outcome) framework [79]. Nevertheless, there are other kinds of differences which may bias indirect treatment comparisons, e.g. in comparator or endpoint definitions. The methodologies that have been evaluated in this article cannot adjust for these types of differences.
Contributions in light of recent simulation studies
Prior simulation studies in the context of anchored indirect treatment comparisons have concluded that outcome regression is more precise and efficient than weighting when the conditional outcome-generating mechanism is known [23, 24]. This is likely to remain the case despite the performance gains of 2SMAIC and the truncated approaches with respect to MAIC.
Nevertheless, there is one caveat. In these studies, the (one-stage) MAIC trial assignment model only accounts for covariates that are marginal effect modifiers. The reason is that including prognostic covariates that are not effect modifiers degrades precision without improving the potential for bias reduction. Conversely, the outcome regression approaches have included all prognostic covariates in the outcome model, making use of this prognostic information to increase precision and efficiency. Therefore, the equipoise or fairness of previous comparisons between weighting and outcome regression is debatable.
With 2SMAIC, weighting approaches can now make use of this prognostic information by including the relevant covariates in the treatment assignment model. Future simulation studies comparing weighting and outcome regression should involve 2SMAIC as opposed to its one-stage counterpart, particularly in these "perfect information" scenarios.
Extension to observational studies
Almost invariably, anchored MAIC has been applied in a setting where the index trial is randomized. In this setting, the inclusion of the treatment assignment model leads to efficiency gains by increasing precision. Any reduction in bias will be, at most, modest due to the internal validity of the index trial. Nevertheless, in situations where the index study is observational, the treatment assignment model can be useful to reduce internal validity bias due to confounding.
Transporting the results of a non-randomized study from S=1 to S=2 requires further untestable assumptions. Additional barriers are: (1) susceptibility to unmeasured confounding; and (2) positivity issues. Due to randomization, there is typically excellent overlap between treatment arms in RCTs. However, theoretical (deterministic) violations of positivity may occur in observational study designs [34, 60, 80], e.g. subjects with certain covariate values may have a contraindication for receiving one of the treatments, resulting in a null probability of treatment assignment.
In addition to these conceptual problems, "chance" violations of positivity may occur with small sample sizes or high-dimensional data due to sampling variability, in both randomized and non-randomized studies. These have not been observed in this simulation study. Near-violations of positivity between treatment arms may lead to extreme inverse probability of treatment weights [81], further inflating variance in 2SMAIC.
Finally, it is worth noting that observational study designs have traditionally been more prone than RCTs to additional causes of internal validity bias, e.g. missing outcome data, measurement error or protocol deviations [82].
Approaches for variance reduction
Weight truncation is a relatively informal but easily implemented method to improve precision by restricting the contribution of extreme weights. The choice of a 95th percentile cutoff is based on prior literature and is somewhat arbitrary, but worked well in this simulation study. Alternative threshold values could be considered.
Lower thresholds will further reduce variance at the cost of introducing more bias and shifting the target population or estimand definition further [32, 83]. The ideal truncation level will vary on a case-by-case basis and can be set empirically, e.g. by progressively truncating the weights [32, 84]. Density plots are likely helpful to assess the dispersion of the weights and identify an optimal cutoff point. Weight truncation is likely of little utility where there is sufficient overlap and the weights are well-behaved. Efficiency gains are expected to decrease with larger sample sizes, as the induced bias could potentially offset the reduction of variance.
We have only explored two strategies to improve efficiency: (1) modeling the trial assignment mechanism; and (2) truncating the weights that are above a certain level. Nevertheless, there are other approaches that could be used in practical applications, either on their own or combined with the procedures explored in this article. Weight trimming [85] is closely related to weight truncation. It involves excluding the subjects with outlying weights, thereby sharing many of the limitations of truncation: setting arbitrary cutoff points and changing the target population even further. Trimming is unappealing because it directly discards data from some individuals, throwing away information and likely losing precision relative to truncation.
The use of stabilized weights is often recommended to gain precision and efficiency [32, 86], particularly when the weights are highly variable. In the implementations of MAIC in this article, the fitted weighted outcome model is considered to be "saturated" (i.e., cannot be misspecified) because it is a marginal model of outcome on a time-fixed binary treatment [87]. For saturated models, stabilized and unstabilized weights give identical results [87]. Nevertheless, weight stabilization is encouraged when the weighted outcome model is unsaturated, e.g. with dynamic (time-varying) or continuous-valued treatment regimens [44, 88].
Another approach that has been used to gain efficiency is overlap weighting [89, 90]. It also changes the target estimand, estimating treatment effects in a subsample with good overlap. While the approach is worth consideration, it is challenging to implement in our context because IPD are unavailable for the competitor study.
In the Background section, I referred to the weight estimation procedure by Jackson et al. [22], which satisfies the method of moments while maximizing the ESS, thereby reducing the dispersion of the weights. 2SMAIC is a modular framework and this approach could be used instead of the standard method of moments to estimate the trial assignment odds weights. Different weighting modules could be incorporated to account for missing outcomes [35], treatment switching [91, 92] and other forms of non-adherence to the protocol [36] in the index trial.
I have introduced 2SMAIC, an extension of MAIC that combines a model for the treatment assignment mechanism in the index trial with a model for the trial assignment mechanism. The first model accounts for covariate differences between treatment arms, producing inverse probability weights that can balance the treatment groups of the index study. The second model accounts for effect modifier differences between studies, generating odds weights that achieve balance across trials and allow us to transport the marginal effect for A vs. C from S=1 to S=2. In 2SMAIC, both weights are combined to attain balance between the treatment arms of the index trial and across the studies.
The statistical performance of 2SMAIC has been investigated in scenarios where the index study is an RCT. We find that the addition of a second (treatment assignment) stage increases precision and efficiency with respect to the standard one-stage MAIC. It does so without inducing bias, and is less prone to undercoverage. Efficiency and precision gains are prominent when the index trial has a small sample size, in which case it is subject to empirical imbalances in prognostic baseline covariates. Two-stage MAIC accounts for these chance imbalances through the treatment assignment model, mitigating the precision loss that comes with decreasing sample sizes. Precision and efficiency gains are attenuated when there is poor overlap between the target populations of the studies, due to the high extremity of the estimated weights.
The inclusion of weight truncation approaches has been evaluated for the first time in the context of MAIC. The one-stage and two-stage approaches produced very little bias before truncation was applied. Where covariate overlap was strong and the variability of the weights tolerable, truncation only improved precision and efficiency slightly, while inducing bias. The benefits of truncation become more apparent in situations with weakening overlap, where it diminishes the influence of extreme weights, substantially improving precision and even coverage with respect to the untruncated approaches.
Due to bias-variance trade-offs, precision improvements always come at the cost of bias. In this simulation study, the trade-off favors variance reduction over the induced bias, with truncation improving efficiency in all scenarios. Nevertheless, truncation is likely unnecessary when the weights are well-behaved and the ESS after weighting is sizeable. The combination of a second stage and weight truncation is most effective in improving precision and efficiency in all simulation scenarios.
When covariate overlap is poor, undercoverage is an issue for all methods, particularly for the untruncated approaches. Novel outcome regression-based techniques [21, 23,24,25, 93] may be preferable in these situations. The development of doubly robust approaches that combine outcome modeling with a model for the trial assignment weights is also attractive, as these would give researchers two chances for correct model specification.
In the absence of a common comparator group, unanchored comparisons contrast the outcomes of single treatment arms between studies. Because one of the stages relies on estimating the treatment assignment mechanism in the index study, the two-stage approaches are not applicable in the unanchored case. This is a limitation, as many applications of covariate-adjusted indirect comparisons are in this setting [10], both in published studies and in health technology appraisals.
Finally, I address a misconception that has arisen recently in the literature [25, 94]. It is believed that MAIC replicates the unadjusted analysis that would be performed in a hypothetical "ideal RCT" because it targets a marginal estimand, and that MAIC cannot make use of information on prognostic covariates. While all approaches to MAIC target marginal estimands, these produce covariate-adjusted estimates of the marginal effect. The standard one-stage approach to MAIC accounts for covariate differences across studies. The two-stage approaches introduced in this article generate covariate-adjusted estimates that also account for imbalances between treatment arms in the index trial, as is the case in covariate-adjusted analyses of RCTs.
The files required to generate the data, run the simulations, and reproduce the results are available at http://github.com/remiroazocar/Maic2stage.
A Correction to this paper has been published: https://doi.org/10.1186/s12874-022-01753-z
This assumption is strong and untestable. Nevertheless, it is weaker than that required by unanchored comparisons. Unanchored comparisons compare absolute outcome means as opposed to relative effect estimates. Therefore, these rely on the conditional exchangeability of the absolute outcome mean under active treatment (conditional constancy of absolute effects) [5, 6, 40, 59]. This requires capturing all factors that are prognostic of outcome given active treatment.
The files required to run the simulations are available at http://github.com/remiroazocar/Maic2stage.
2SMAIC:
Two-stage matching-adjusted indirect comparison
ALD:
Aggregate-level data
ESE:
Empirical standard error
ESS:
Effective sample size
IPD:
Individual patient data
HTA:
Health technology assessment
MAIC:
Matching-adjusted indirect comparison
MCSE:
Monte Carlo standard error
MSE:
Mean square error
PICO:
Population, Intervention, Comparator, Outcome
T-MAIC:
Truncated matching-adjusted indirect comparison
T-2SMAIC:
Truncated two-stage matching-adjusted indirect comparison
Vreman RA, Naci H, Goettsch WG, Mantel-Teeuwisse AK, Schneeweiss SG, Leufkens HG, Kesselheim AS. Decision making under uncertainty: comparing regulatory and health technology assessment reviews of medicines in the United States and Europe. Clin Pharmacol Ther. 2020; 108(2):350–7.
Sutton A, Ades A, Cooper N, Abrams K. Use of indirect and mixed treatment comparisons for technology assessment. Pharmacoeconomics. 2008; 26(9):753–67.
Bucher HC, Guyatt GH, Griffith LE, Walter SD. The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol. 1997; 50(6):683–91.
Dias S, Sutton AJ, Ades A, Welton NJ. Evidence synthesis for decision making 2: a generalized linear modeling framework for pairwise and network meta-analysis of randomized controlled trials. Med Dec Making. 2013; 33(5):607–17.
Phillippo D, Ades T, Dias S, Palmer S, Abrams KR, Welton N. NICE DSU Technical Support Document 18: Methods for population-adjusted indirect comparisons in submissions to NICE. Sheffield: NICE Decision Support Unit; 2016.
Phillippo DM, Ades AE, Dias S, Palmer S, Abrams KR, Welton NJ. Methods for population-adjusted indirect comparisons in health technology appraisal. Med Dec Making. 2018; 38(2):200–11.
Remiro-Azócar A, Heath A, Baio G. Methods for population adjustment with limited access to individual patient data: A review and simulation study. Res Synth Methods. 2021; 12(6):750–75.
Remiro-Azócar A, Heath A, Baio G. Conflating marginal and conditional treatment effects: Comments on "assessing the performance of population adjustment methods for anchored indirect comparisons: A simulation study". Stat Med. 2021; 40(11):2753–8.
Remiro-Azócar A, Heath A, Baio G. Effect modification in anchored indirect treatment comparisons: Comments on "matching-adjusted indirect comparisons: Application to time-to-event data". Stat Med. 2022; 41(8):1541–53.
Phillippo DM, Dias S, Elsada A, Ades A, Welton NJ. Population adjustment methods for indirect comparisons: A review of National Institute for Health and Care Excellence technology appraisals. Int J Technol Assess Health Care. 2019; 35(3):221–8.
Signorovitch JE, Wu EQ, Andrew PY, Gerrits CM, Kantor E, Bao Y, Gupta SR, Mulani PM. Comparative effectiveness without head-to-head trials. Pharmacoeconomics. 2010; 28(10):935–45.
Signorovitch J, Erder MH, Xie J, Sikirica V, Lu M, Hodgkins PS, Wu EQ. Comparative effectiveness research using matching-adjusted indirect comparison: an application to treatment with guanfacine extended release or atomoxetine in children with attention-deficit/hyperactivity disorder and comorbid oppositional defiant disorder. Pharmacoepidemiol Drug Saf. 2012; 21:130–7.
Signorovitch JE, Sikirica V, Erder MH, Xie J, Lu M, Hodgkins PS, Betts KA, Wu EQ. Matching-adjusted indirect comparisons: a new tool for timely comparative effectiveness research. Value Health. 2012; 15(6):940–7.
Hatswell AJ, Freemantle N, Baio G. The effects of model misspecification in unanchored matching-adjusted indirect comparison: results of a simulation study. Value Health. 2020; 23(6):751–9.
Cheng D, Ayyagari R, Signorovitch J. The statistical performance of matching-adjusted indirect comparisons: Estimating treatment effects with aggregate external control data. Ann Appl Stat. 2020; 14(4):1806–33.
Wang J. On matching-adjusted indirect comparison and calibration estimation. arXiv preprint arXiv:2107.11687. 2021.
Petto H, Kadziola Z, Brnabic A, Saure D, Belger M. Alternative weighting approaches for anchored matching-adjusted indirect comparisons via a common comparator. Value Health. 2019; 22(1):85–91.
Kühnast S, Schiffner-Rohe J, Rahnenführer J, Leverkus F. Evaluation of adjusted and unadjusted indirect comparison methods in benefit assessment. Methods Inf Med. 2017; 56(03):261–7.
Weber D, Jensen K, Kieser M. Comparison of methods for estimating therapy effects by indirect comparisons: A simulation study. Med Dec Making. 2020; 40(5):644–54.
Jiang Y, Ni W. Performance of unanchored matching-adjusted indirect comparison (maic) for the evidence synthesis of single-arm trials with time-to-event outcomes. BMC Med Res Methodol. 2020; 20(1):1–9.
Phillippo DM, Dias S, Ades A, Welton NJ. Assessing the performance of population adjustment methods for anchored indirect comparisons: A simulation study. Stat Med. 2020; 39(30):4885–911.
Jackson D, Rhodes K, Ouwens M. Alternative weighting schemes when performing matching-adjusted indirect comparisons. Res Synth Methods. 2021; 12(3):333–46.
Remiro-Azócar A, Heath A, Baio G. Parametric g-computation for compatible indirect treatment comparisons with limited individual patient data. arXiv preprint arXiv:2108.12208. 2021.
Remiro-Azócar A, Heath A, Baio G. Marginalization of regression-adjusted treatment effects in indirect comparisons with limited patient-level data. arXiv preprint arXiv:2008.05951. 2020.
Phillippo DM, Dias S, Ades AE, Welton NJ. Target estimands for efficient decision making: Response to comments on "assessing the performance of population adjustment methods for anchored indirect comparisons: A simulation study". Stat Med. 2021; 40(11):2759–63.
Ho DE, Imai K, King G, Stuart EA. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Polit Anal. 2007; 15(3):199–236.
Rubin DB. Estimating causal effects from large data sets using propensity scores. Ann Intern Med. 1997; 127(8):757–63.
Belger M, Brnabic A, Kadziola Z, Petto H, Faries D. Inclusion of multiple studies in matching adjusted indirect comparisons (maic). Value Health. 2015; 18(3):33.
Phillippo DM, Dias S, Ades A, Welton NJ. Equivalence of entropy balancing and the method of moments for matching-adjusted indirect comparison. Res Synth Methods. 2020; 11(4):568–72.
Elliott MR, Little RJ. Model-based alternatives to trimming survey weights. J Off Stat. 2000; 16(3):191–210.
Lee BK, Lessler J, Stuart EA. Weight trimming and propensity score weighting. PloS ONE. 2011; 6(3):18174.
Cole SR, Hernán MA. Constructing inverse probability weights for marginal structural models. Am J Epidemiol. 2008; 168(6):656–64.
Moore KL, Neugebauer R, van der Laan MJ, Tager IB. Causal inference in epidemiological studies with strong confounding. Stat Med. 2012; 31(13):1380–404.
Léger M, Chatton A, Le Borgne F, Pirracchio R, Lasocki S, Foucher Y. Causal inference in case of near-violation of positivity: comparison of methods. Biom J. 2022. In press.
Seaman SR, White IR. Review of inverse probability weighting for dealing with missing data. Stat Methods Med Res. 2013; 22(3):278–95.
Cain LE, Cole SR. Inverse probability-of-censoring weights for the correction of time-varying noncompliance in the effect of randomized highly active antiretroviral therapy on incident aids or death. Stat Med. 2009; 28(12):1725–38.
Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med. 2004; 23(19):2937–60.
Hahn J. On the role of the propensity score in efficient semiparametric estimation of average treatment effects. Econometrica. 1998;66(2):315–31.
Westreich D, Edwards JK, Lesko CR, Stuart E, Cole SR. Transportability of trial results using inverse odds of sampling weights. Am J Epidemiol. 2017; 186(8):1010–4.
Dahabreh IJ, Robertson SE, Steingrimsson JA, Stuart EA, Hernan MA. Extending inferences from a randomized trial to a new target population. Stat Med. 2020; 39(14):1999–2014.
Nocedal J, Wright S. Numerical optimization. New York: Springer Science and Business Media; 2006.
Kish L. Survey Sampling. New York: Wiley; 1965.
Schafer JL, Kang J. Average causal effects from nonrandomized studies: a practical guide and simulated example. Psychol Methods. 2008; 13(4):279.
Robins JM, Hernan MA, Brumback B. Marginal structural models and causal inference in epidemiology. Epidemiology. 2000; 11(5):550–60.
Fay MP, Graubard BI. Small-sample adjustments for wald-type tests using sandwich estimators. Biometrics. 2001; 57(4):1198–206.
Chen Z, Kaizar E. On variance estimation for generalizing from a trial to a target population. arXiv preprint arXiv:1704.07789. 2017.
Tipton E, Hallberg K, Hedges LV, Chan W. Implications of small samples for generalization: Adjustments and rules of thumb. Eval Rev. 2017; 41(5):472–505.
Raad H, Cornelius V, Chan S, Williamson E, Cro S. An evaluation of inverse probability weighting using the propensity score for baseline covariate adjustment in smaller population randomised controlled trials with a continuous outcome. BMC Med Res Methodol. 2020; 20(1):1–12.
Zeileis A. Object-oriented computation of sandwich estimators. J Stat Softw. 2006; 16:1–16.
Efron B, Tibshirani RJ. An introduction to the bootstrap. New York: CRC press; 1994.
Sikirica V, Findling RL, Signorovitch J, Erder MH, Dammerman R, Hodgkins P, Lu M, Xie J, Wu EQ. Comparative efficacy of guanfacine extended release versus atomoxetine for the treatment of attention-deficit/hyperactivity disorder in children and adolescents: applying matching-adjusted indirect comparison methodology. CNS Drugs. 2013; 27(11):943–53.
Hartman E, Grieve R, Ramsahai R, Sekhon JS. From sample average treatment effect to population average treatment effect on the treated: combining experimental with observational studies to estimate population treatment effects. J R Stat Soc Ser A (Stat Soc). 2015; 178(3):757–78.
Rubin DB. Randomization analysis of experimental data: The fisher randomization test comment. J Am Stat Assoc. 1980; 75(371):591–3.
VanderWeele TJ, Hernan MA. Causal inference under multiple versions of treatment. J Causal Infer. 2013; 1(1):1–20.
VanderWeele TJ. Concerning the consistency assumption in causal inference. Epidemiology. 2009; 20(6):880–3.
Hernán MA, VanderWeele TJ. Compound treatments and transportability of causal inference. Epidemiology (Cambridge, Mass.) 2011; 22(3):368.
O'Muircheartaigh C, Hedges LV. Generalizing from unrepresentative experiments: a stratified propensity score approach. J R Stat Soc Ser C (Appl Stat). 2014; 63(2):195–210.
Zhang Z, Nie L, Soon G, Hu Z. New methods for treatment effect calibration, with applications to non-inferiority trials. Biometrics. 2016; 72(1):20–29.
Rudolph KE, van der Laan MJ. Robust estimation of encouragement design intervention effects transported across sites. J R Stat Soc Ser B (Stat Methodol). 2017; 79(5):1509–25.
Westreich D, Cole SR. Invited commentary: positivity in practice. Am J Epidemiol. 2010; 171(6):674–7.
Stuart EA. Matching methods for causal inference: A review and a look forward. Stat Sci Rev J Inst Math Stat. 2010; 25(1):1.
Nie L, Zhang Z, Rubin D, Chu J. Likelihood reweighting methods to reduce potential bias in noninferiority trials which rely on historical data to make inference. Ann Appl Stat. 2013; 7(3):1796–813.
Brookhart MA, Schneeweiss S, Rothman KJ, Glynn RJ, Avorn J, Stürmer T. Variable selection for propensity score models. Am J Epidemiol. 2006; 163(12):1149–56.
Shortreed SM, Ertefaie A. Outcome-adaptive lasso: variable selection for causal inference. Biometrics. 2017; 73(4):1111–22.
Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983; 70(1):41–55.
Senn S. Testing for baseline balance in clinical trials. Stat Med. 1994; 13(17):1715–26.
Li X, Ding P. Rerandomization and regression adjustment. J R Stat Soc Ser B (Stat Methodol). 2020; 82(1):241–68.
Morris TP, Walker AS, Williamson EJ, White IR. Planning a method for covariate adjustment in individually-randomised trials: a practical guide. Trials. 2022;23:328.
Williamson EJ, Forbes A, White IR. Variance reduction in randomised trials by inverse probability weighting using the propensity score. Stat Med. 2014; 33(5):721–37.
Hernán MA, Robins JM. Estimating causal effects from epidemiological data. J Epidemiol Community Health. 2006; 60(7):578–86.
Holland PW. Statistics and causal inference. J Am Stat Assoc. 1986; 81(396):945–60.
Morris TP, White IR, Crowther MJ. Using simulation studies to evaluate statistical methods. Stat Med. 2019; 38(11):2074–102.
Team, R Core, et al. R: A language and environment for statistical computing. R Foundation for Statistical Computing: Vienna; 2013.
Thompson DD, Lingsma HF, Whiteley WN, Murray GD, Steyerberg EW. Covariate adjustment had similar benefits in small and large randomized controlled trials. J Clin Epidemiol. 2015; 68(9):1068–75.
Susukida R, Crum RM, Hong H, Stuart EA, Mojtabai R. Comparing pharmacological treatments for cocaine dependence: Incorporation of methods for enhancing generalizability in meta-analytic studies. Int J Methods Psychiatr Res. 2018; 27(4):1609.
Susukida R, Crum RM, Stuart EA, Mojtabai R. Generalizability of the findings from a randomized controlled trial of a web-based substance use disorder intervention. Am J Addict. 2018; 27(3):231–7.
Webster-Clark MA, Sanoff HK, Stürmer T, Peacock Hinton S, Lund JL. Diagnostic assessment of assumptions for external validity: an example using data in metastatic colorectal cancer. Epidemiology (Cambridge, Mass.) 2019; 30(1):103.
Carpenter J, Bithell J. Bootstrap confidence intervals: when, which, what? a practical guide for medical statisticians. Stat Med. 2000; 19(9):1141–64.
Richardson WS, Wilson MC, Nishikawa J, Hayward RS, et al.The well-built clinical question: a key to evidence-based decisions. Acp j club. 1995; 123(3):12–13.
Petersen ML, Porter KE, Gruber S, Wang Y, Van Der Laan MJ. Diagnosing and responding to violations in the positivity assumption. Stat Methods Med Res. 2012; 21(1):31–54.
Li F, Thomas LE, Li F. Addressing extreme propensity scores via the overlap weights. Am J Epidemiol. 2019; 188(1):250–7.
Deeks JJ, Dinnes J, D'Amico R, Sowden AJ, Sakarovitch C, Song F, Petticrew M, Altman D, et al.Evaluating non-randomised intervention studies. Health Technol Assess (Winchester, England). 2003; 7(27):1–173.
Xiao Y, Moodie EE, Abrahamowicz M. Comparison of approaches to weight truncation for marginal structural cox models. Epidemiol Methods. 2013; 2(1):1–20.
Kish L. Weighting for unequal pi. J Off Stat. 1992; 8(2):183.
Crump RK, Hotz VJ, Imbens GW, Mitnik OA. Dealing with limited overlap in estimation of average treatment effects. Biometrika. 2009; 96(1):187–99.
Austin PC, Stuart EA. Moving towards best practice when using inverse probability of treatment weighting (iptw) using the propensity score to estimate causal treatment effects in observational studies. Stat Med. 2015; 34(28):3661–79.
Shiba K, Kawahara T. Using propensity scores for causal inference: pitfalls and tips. J Epidemiol. 2021; 31:457–63.
Robins JM, Hernán MA. Estimation of the causal effects of time-varying exposures. Longitudinal Data Anal. 2009; 553:599.
Zeng S, Li F, Wang R, Li F. Propensity score weighting for covariate adjustment in randomized clinical trials. Stat Med. 2021; 40(4):842–58.
Desai RJ, Franklin JM. Alternative approaches for confounding adjustment in observational studies using weighting based on the propensity score: a primer for practitioners. BMJ. 2019;367:l5657.
Robins JM, Finkelstein DM. Correcting for noncompliance and dependent censoring in an aids clinical trial with inverse probability of censoring weighted (ipcw) log-rank tests. Biometrics. 2000; 56(3):779–88.
Latimer NR, Abrams K, Lambert P, Crowther M, Wailoo A, Morden J, Akehurst R, Campbell M. Adjusting for treatment switching in randomised controlled trials–a simulation study and a simplified two-stage method. Stat Methods Med Res. 2017; 26(2):724–51.
Phillippo DM, Dias S, Ades A, Belger M, Brnabic A, Schacht A, Saure D, Kadziola Z, Welton NJ. Multilevel network meta-regression for population-adjusted treatment comparisons. J R Stat Soc Ser A (Stat Soc). 2020; 183(3):1189–210.
Remiro-Azócar A. Target estimands for population-adjusted indirect comparisons. In press, Stat Med. 2022.
No financial support was provided for this research.
Medical Affairs Statistics, Bayer plc, 400 South Oak Way, Reading, UK
Antonio Remiro-Azócar
Department of Statistical Science, University College London, 1-19 Torrington Place, London, UK
ARA conceived the research idea, developed the methodology, performed the analyses, prepared the figures, and wrote and reviewed the manuscript. The author read and approved the final manuscript.
Correspondence to Antonio Remiro-Azócar.
ARA is employed by Bayer plc. The author declares that he has no competing interests.
The original online version of this article was revised: following publication of the original article [1], the author reported an error in an equation and in the text. The equation should read
$$ y_{i} = \beta_{0} + \boldsymbol{x}_{i}\boldsymbol{\beta}_{1} + \left(\beta_{t} + \boldsymbol{x}_{i}\boldsymbol{\beta}_{2}\right)\mathbb{1}(t_{i}=1) + \epsilon_{i}, $$
and the corresponding text should read: "Correct specification of the S=2 covariate distribution." The original article has been updated.
Additional file 1.
Supplementary Material.
Remiro-Azócar, A. Two-stage matching-adjusted indirect comparison. BMC Med Res Methodol 22, 217 (2022). https://doi.org/10.1186/s12874-022-01692-9
Indirect treatment comparison
Covariate adjustment
Covariate balance
Inverse probability of treatment weighting
Evidence synthesis | CommonCrawl |
Prediction of microRNA-disease associations based on distance correlation set
Haochen Zhao2, 5,
Linai Kuang1, 2, 5,
Lei Wang1, 2, 3, 5,
Pengyao Ping2, 5,
Zhanwei Xuan2, 5,
Tingrui Pei2, 5 and
Zhelun Wu4
BMC Bioinformatics 2018; 19:141
Accepted: 3 April 2018
Recently, numerous laboratory studies have indicated that many microRNAs (miRNAs) are involved in and associated with human diseases and can serve as potential biomarkers and drug targets. Therefore, developing effective computational models for the prediction of novel associations between diseases and miRNAs could be beneficial for understanding disease mechanisms at the miRNA level and miRNA-disease interactions at the disease level. Thus far, only a few miRNA-disease association pairs are known, and models that analyze miRNA-disease associations based on lncRNAs are limited.
In this study, a new computational method based on a distance correlation set is developed to predict miRNA-disease associations (DCSMDA) by integrating known lncRNA-disease associations, known miRNA-lncRNA associations, disease semantic similarity, and various lncRNA and disease similarity measures. The novelty of DCSMDA lies in the construction of a miRNA-lncRNA-disease network, which allows DCSMDA to predict potential miRNA-disease associations without requiring any known miRNA-disease associations. Although the implementation of DCSMDA does not require known disease-miRNA associations, the area under the ROC curve is 0.8155 in the leave-one-out cross-validation. Furthermore, DCSMDA was implemented in case studies of prostatic neoplasms, lung neoplasms and leukaemia, and of the top 10 predicted associations, 10, 9 and 9 associations, respectively, were separately verified in other independent studies and biological experimental studies. In addition, 10 of the 10 (100%) associations predicted by DCSMDA in a further case study were supported by recent bioinformatical studies.
According to the simulation results, DCSMDA can be a great addition to the biomedical research field.
MiRNA-disease association predictions
Distance correlation set
Disease-lncRNA-miRNA network
Similarity measure
For a long time, RNA was considered a DNA-to-protein gene sequence transporter [1]. The sequencing of the human genome indicates that only approximately 2% of the sequences in human RNA are used to encode proteins [2]. Furthermore, numerous studies performing biological experiments have indicated that noncoding RNA (ncRNA) plays an important role in numerous critical biological processes, such as chromosome dosage compensation, epigenetic regulation and cell growth [3–5]. MicroRNAs (miRNAs) are endogenous single-stranded ncRNA molecules approximately 22 nt in length that regulate the expression of target genes by base pairing with the 3′-untranslated regions (UTRs) of the target genes [6, 7]. Recently, several studies have reported that more than one-third of genes are regulated by miRNAs [8], and more than 1000 miRNAs have been identified using various experimental methods and computational models [9, 10]. In addition, accumulating evidence indicates that many microRNAs (miRNAs) are involved in and associated with human diseases, such as myocardial disease, Alzheimer's disease, cardiovascular disease and heart disease [11–14]. Therefore, identifying disease-miRNA associations could not only improve our knowledge of the underlying disease mechanism at the miRNA level but also facilitate disease biomarker detection and drug discovery for disease diagnosis, treatment, prognosis and prevention. However, compared with the rapidly increasing number of newly discovered miRNAs, only a few miRNA-disease associations are known [15, 16]. Developing efficient, successful computational approaches that predict potential miRNA-disease associations is challenging and urgently needed.
Recently, several heterogeneous biological datasets, such as HMDD and miR2Disease, have been constructed [17–19], and several computational methods have been developed to predict potential miRNA-disease associations based on these datasets [20–22]. For example, Jiang et al. developed a scoring system to assess the likelihood that a microRNA is involved in a specific disease phenotype based on the assumption that functionally related microRNAs tend to be associated with phenotypically similar diseases [23]. K. Han et al. developed a prediction method called DismiPred that combines functional similarity and common association information to predict potential miRNA-disease associations based on the central hypothesis, offered in several previous studies, that miRNAs with similar functions are often involved in similar diseases [24]. Furthermore, Xuan et al. proposed a method called HDMP to predict potential disease-miRNA associations based on weighted k most similar neighbours [25] and developed a method for predicting potential disease-associated microRNAs based on random walk (MIDP) [26]. Chen et al. proposed a prediction method called RWRMDA by implementing a random walk on the miRNA functional similarity network and further proposed a model called RLSMDA based on semi-supervised learning by integrating a disease-disease semantic similarity network, a miRNA-miRNA functional similarity network, and known human miRNA-disease associations for the prediction of potential disease-miRNA associations [27]. In 2016, based on the assumption that functionally similar miRNAs tend to be involved in similar diseases, Chen et al. developed a prediction model called WBSMDA by integrating known miRNA-disease associations, miRNA functional similarity networks, disease semantic similarity networks, and Gaussian interaction profile kernel similarity networks to uncover potential disease-miRNA associations [28].
In the abovementioned computational models, known miRNA-disease associations are required. However, a number of lncRNA-disease associations have been recorded in several biological datasets, such as MNDR and LncRNADisease [29, 30], and several studies have shown that lncRNA-miRNA associations are involved in and associated with human diseases [31–33]. Thus, in this article, a new model based on the Distance Correlation Set for MiRNA-Disease Association inference (DCSMDA) was developed to predict potential miRNA-disease associations by integrating known lncRNA-disease and lncRNA-miRNA associations, the semantic similarity and functional similarity of the disease pairs, the functional similarity of the miRNA pairs, and the Gaussian interaction profile kernel similarity for the lncRNAs, miRNAs and diseases. Compared with existing state-of-the-art models, the advantage of DCSMDA is its integration of the similarity of the disease pairs, lncRNA pairs and miRNA pairs and its introduction of the distance correlation set; thus, DCSMDA does not require known miRNA-disease associations. Moreover, leave-one-out cross-validation (LOOCV) was implemented to evaluate the performance of DCSMDA based on known miRNA-disease associations downloaded from the HMDD database, and DCSMDA achieved a reliable area under the ROC curve (AUC) of 0.8155. Moreover, case studies of lung neoplasms, prostatic neoplasms and leukaemia were implemented to further evaluate the prediction performance of DCSMDA, and 9, 10 and 9 of the top 10 predicted associations for these three important human complex diseases have been confirmed by recent biological experiments. In addition, a case study identifying the top 10 miRNA-disease associations showed that 10 of the 10 (100%) associations predicted by DCSMDA were supported by recent bioinformatical studies and the latest HMDD dataset, effectively demonstrating that DCSMDA has a good prediction performance in inferring potential disease-miRNA associations.
To evaluate the prediction performance of DCSMDA, our method was first compared with other state-of-the-art methods in the framework of the LOOCV, and we then analyzed the stability of DCSMDA using three lncRNA-disease datasets. Next, we analyzed the effect of the pre-determined threshold parameter b. Finally, several additional experiments were performed to validate the feasibility of our method.
Performance comparison with other methods
Since our method is unsupervised (i.e., known miRNA-disease associations are not used in training), and few of the proposed prediction models for the large-scale forecasting of associations between miRNAs and diseases are based simultaneously on known miRNA-lncRNA associations and known lncRNA-disease associations, we validated the prediction performance of our novel model by comparing it with three state-of-the-art computational prediction models: WBSMDA [28], RLSMDA [27] and HGLDA [31]. WBSMDA and RLSMDA are semi-supervised methods that do not require any negative samples, and HGLDA is an unsupervised method developed to predict potential lncRNA-disease associations by integrating known miRNA-disease associations and lncRNA-miRNA interactions.
To compare the performance of DCSMDA with that of WBSMDA and RLSMDA, we adopted the DS5 dataset and the framework of the LOOCV. While the LOOCV was implemented for these three methods, each known miRNA-disease association was left out in turn as the test sample, and we evaluated how well this test association ranked relative to the candidate samples. Here, the candidate samples comprised all potential miRNA-disease associations without any known association evidence. Testing samples with a prediction rank higher than the given threshold were considered successfully predicted; DCSMDA, RLSMDA and WBSMDA were all checked in this way in the LOOCV.
To compare the performance of DCSMDA with that of HGLDA, we adopted the DS3 dataset and the framework of the LOOCV. While the LOOCV was implemented for HGLDA, each known lncRNA-disease association was removed individually as a testing sample, and we further evaluated how well this test lncRNA-disease association ranked relative to the candidate sample. Here, the candidate samples comprised all potential lncRNA-disease associations without any known association evidence.
Thus, we could further obtain the corresponding true positive rates (TPR, sensitivity) and false positive rates (FPR, 1-specificity) by setting different thresholds. Here, sensitivity refers to the percentage of test samples that were predicted with ranks higher than the given threshold, and the specificity was computed as the percentage of negative samples with ranks lower than the threshold. The receiver-operating characteristic (ROC) curves were generated by plotting the TPR versus the FPR at different thresholds. Then, the AUCs were further calculated to evaluate the prediction performance of DCSMDA.
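As a concrete illustration of this evaluation, the minimal sketch below turns LOOCV prediction scores into an ROC curve and AUC by sweeping thresholds, exactly as described above. The arrays test_scores (held-out known associations) and candidate_scores (unverified pairs) are synthetic placeholders, not the paper's actual DCSMDA outputs.

import numpy as np

def roc_auc(test_scores, candidate_scores):
    # Sweep thresholds over all observed scores, from high to low.
    thresholds = np.sort(np.concatenate([test_scores, candidate_scores]))[::-1]
    tpr = [np.mean(test_scores >= t) for t in thresholds]       # sensitivity
    fpr = [np.mean(candidate_scores >= t) for t in thresholds]  # 1 - specificity
    return np.trapz(tpr, fpr)  # area under the ROC curve

# Synthetic scores: held-out known associations vs. unverified candidate pairs.
rng = np.random.default_rng(0)
print(f"AUC = {roc_auc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 5000)):.3f}")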
An AUC value of 1 represents a perfect prediction, while an AUC value of 0.5 indicates a purely random performance. The performance comparison in terms of the LOOCV results is shown in Fig. 1. In the LOOCV, DCSMDA (with b set to 6), RLSMDA, WBSMDA and HGLDA achieved AUCs of 0.8155, 0.7826, 0.7582 and 0.7621, respectively. DCSMDA predicted potential miRNA-disease associations without requiring known miRNA-disease associations; to the best of our knowledge, no other method can do this without relying on known miRNA-disease associations. More importantly, considering that known disease-lncRNA associations remain very limited, the performance of DCSMDA can be further improved as additional known disease-lncRNA associations are obtained in the future.
Performance comparisons between DCSMDA, RLSMDA and HGLDA in terms of ROC curve and AUC based on LOOCV
The stability analysis of DCSMDA
Because the current lncRNA-disease databases remain in their infancy and most existing methods are always evaluated using a specific dataset, the stability of the different datasets is ignored. To enhance the credibility of the prediction results, DCSMDA was further implemented using three different known lncRNA-disease association datasets, including DS1, DS2, and DS3, and the known lncRNA-miRNA association dataset DS4.
The comparison results of the ROC are shown in Fig. 2, and the corresponding AUCs are 0.8155, 0.8089 and 0.7642 when DCSMDA (b was set to 6) was evaluated in the framework of the LOOCV using the three different lncRNA-disease association datasets. DCSMDA achieved a reliable and effective prediction performance.
Comparison of the prediction performance of DCSMDA across different lncRNA-disease datasets
Effects of the pre-given threshold parameter b
In DCSMDA, the pre-determined threshold b plays a critical role, and the value of b influences the performance of predicting potential miRNA-disease associations. In this section, we implemented a series of comparison experiments to evaluate the effects of b on the prediction performance of DCSMDA: the LOOCV was implemented while b was assigned different values. Considering the time complexity, and that the value of SPM(i, j) never exceeds 6 (so the results no longer change once b ≥ 6), we set b to values no greater than 6 in our experiments.
As shown in Fig. 3, DCSMDA showed an increasing trend in its prediction performance as the value of the pre-determined threshold parameter b increased and achieved the best prediction performance when b was set to 6. When b was set to 6, DCSMDA achieved an AUC of 0.8089 using DS3 and DS4. In the analysis, we found that the main reason was that the number of known miRNA-lncRNA associations and lncRNA-disease associations was small; thus, when b is set to a larger value, more nodes could be linked to each other in the miRNA-lncRNA-disease interactive network, improving the prediction performance of DCSMDA. Therefore, we finally set b = 6 in our experiments.
Effects of the pre-given threshold parameter b on the prediction performance of DCSMDA when b was assigned different values
Currently, cancer is the leading cause of death in humans worldwide [34–36], and the incidence of cancer is high in both developed and developing countries. Therefore, to estimate the effective predictive performance of DCSMDA, case studies of two important cancers and leukaemia were implemented. The prediction results were verified using recently published experimental studies (see Table 1).
DCSMDA was applied to case studies of three important cancers. In total, 10, 9 and 8 of the top 10 predicted pairs for these diseases were confirmed based on recent experimental studies. (The full table was flattened during extraction; it listed the top-ranked miRNAs for each disease, among them hsa-mir-15a, hsa-mir-15b, hsa-mir-16, hsa-mir-195, hsa-mir-125a and hsa-mir-106b, with supporting evidence cited by PMID and PMCID, under disease headings such as Lung Neoplasms.)
Prostate cancer (prostatic neoplasms), which is the second leading cause of cancer-related death in males, is among the most common malignant cancers and the most commonly diagnosed cancer in men worldwide. In 2012, prostate cancer occurred in 1.1 million men and caused 307,000 deaths. Accumulating evidence shows that microRNAs are strongly associated with prostate cancer. Therefore, DCSMDA was implemented to predict potential prostate cancer-related miRNAs. Consequently, ten of the top ten predicted prostate cancer-related miRNAs were validated by recent biological experimental studies (see Table 1). For example, Junfeng Jiang et al. reconstructed five prostate cancer co-expressed modules using functional gene sets defined by Gene Ontology (GO) annotation (biological process, GO_BP) and found that hsa-mir-15a (ranked 1st) regulated these five candidate modules [37]. Medina-Villaamil V et al. analyzed circulating miRNAs in whole blood as non-invasive markers in patients with localized prostate cancer and healthy individuals and found that hsa-mir-15b (ranked 2nd) showed a statistically significant differential expression between the different risk groups and healthy controls [38]. Furthermore, Chao Cai et al. confirmed the tumour suppressive role of hsa-mir-195 (ranked 4th) using prostate cancer cell invasion, migration and apoptosis assays in vitro and tumour xenograft growth, angiogenesis and invasion assays in vivo by performing both gain-of-function and loss-of-function experiments [39].
Lung cancer (lung neoplasms) has the poorest prognosis among cancers and is the largest threat to people's health and life. The incidence and mortality of lung cancer are rapidly increasing in China, and approximately 1.4 million deaths are due to lung cancer annually. Recent studies show that miRNAs play critical roles in the progression of lung cancer. Therefore, we used lung cancer as a case study and implemented DCSMDA; nine predicted lung cancer-associated miRNAs of the top ten prediction list were verified based on experimental reports. For example, Bozok Çetintaş V et al. analyzed the effects of selected miRNAs on the development of cisplatin resistance and found that hsa-mir-15a (ranked 1st) was among the most significantly downregulated miRNAs conferring resistance to cisplatin in Calu1 epidermoid lung carcinoma cells [40]. Hsa-mir-195, which ranked 2nd, was further confirmed to suppress tumour growth and was associated with better survival outcomes in several malignancies, including lung cancer [41]. Additionally, according to the biological experiments reported in several studies, hsa-mir-424 (ranked 3rd) plays an important role in lung cancer [42].
Leukaemia refers to a group of diseases that usually begin in the bone marrow and result in high numbers of abnormal white blood cells. The exact cause of leukaemia is unknown; a combination of genetic and environmental factors is believed to play a role. In 2015, leukaemia was present in 2.3 million people and caused 353,500 deaths. Several studies suggest that miRNAs are effective prognostic biomarkers in leukaemia. For example, independent experimental observations showed relatively lower expression levels of mir-424 (ranked 1st) in TRAIL-resistant and semi-resistant acute myeloid leukaemia (AML) cell lines and newly diagnosed patient samples, and the overexpression of mir-424, by targeting the 3′ UTR of PLAG1, enhanced TRAIL sensitivity in AML cells [43]. Hsa-mir-16 (ranked 3rd) shows expression inversely correlated with Bcl2 expression in leukaemia; such microRNAs negatively regulate B cell lymphoma 2 (Bcl2) at the posttranscriptional level, and Bcl2 repression by these microRNAs induces apoptosis in a leukaemic cell line model [44]. The lncRNA H19 is considered an independent prognostic marker in patients with tumours, and its expression is significantly upregulated in bone marrow samples from patients with AML-M2. The results of that study suggest that lncRNA H19 regulates the expression of inhibitor of DNA binding 2 (ID2) by competitively binding to hsa-mir-19b (ranked 8th) and hsa-mir-19a (ranked 9th), which may play a role in AML cell proliferation [45].
In addition, DCSMDA simultaneously predicted all potential associations between the diseases and miRNAs in G_3. Notably, potential associations with a high predicted value can be publicly released and would benefit from biological experimental validation. To further illustrate the effective performance of DCSMDA, the predicted results were sorted from best to worst, and the top 10 results were selected for analysis (see Table 2). Consequently, 100% of these results were confirmed by recent biological experiments and the HMDD dataset; thus, DCSMDA can be used as an efficient computational tool in biomedical research studies.
The top 10 predicted miRNA-disease associations by DCSMDA. (The table was flattened during extraction; each row paired a predicted miRNA with a disease, e.g., Carcinoma, Hepatocellular, and the confirming evidence source, e.g., HMDD.)
Accumulating evidence shows that miRNAs play a very important role in several key biological functions and signalling pathways. A large-scale systematic analysis of miRNA-disease data that combines relevant biological data is therefore highly important and an attractive topic in the field of computational biology. However, only a few prediction models have been proposed for the large-scale forecasting of associations between miRNAs and diseases based on lncRNA information. To utilize the wealth of disease-lncRNA, lncRNA-miRNA and disease-miRNA association data recorded in four datasets and recently published experimental studies, in this article we proposed a novel prediction model called DCSMDA to infer potential associations between diseases and miRNAs. We first constructed a miRNA-lncRNA-disease interactive network and further integrated a distance correlation set, disease semantic similarity, functional similarity and Gaussian interaction profile kernel similarity into DCSMDA. The important difference between DCSMDA and previous computational models is that DCSMDA does not rely on any known miRNA-disease associations and predicts disease-miRNA associations based only on known disease-lncRNA associations and known lncRNA-miRNA associations. To evaluate the prediction performance of DCSMDA, the LOOCV validation framework was implemented using the HMDD database. Furthermore, case studies were implemented for three important diseases and for the top 10 predicted miRNA-disease associations based on recently published experimental studies and databases. The simulation results showed that DCSMDA achieved a reliable and effective prediction performance. Hence, DCSMDA could be used as an effective and important biological tool that benefits the early diagnosis and treatment of diseases and improves human health in the future.
However, although DCSMDA is a powerful method for predicting novel relationships between diseases and miRNAs, our method has several limitations. First, the value of the threshold parameter b plays an important role in DCSMDA, and the selection of a suitable value for b is a critical problem that should be addressed in future studies. Second, although DCSMDA does not rely on any known experimentally verified miRNA-disease relationships, its performance was not fully satisfactory compared with that of several existing methods, such as RLSMDA and WBSMDA [27, 28]. Introducing more reliable measures for calculating the disease similarity, miRNA similarity and lncRNA similarity, and developing a more reliable similarity integration method, could improve the performance of DCSMDA. Finally, DCSMDA cannot be applied to diseases or miRNAs that are absent from the disease-lncRNA or lncRNA-miRNA databases, since such poorly investigated entities have no known disease-lncRNA or lncRNA-miRNA associations. The performance of DCSMDA will be further improved once more known associations are obtained.
In this article, we made the following main contributions: (1) we constructed a miRNA-lncRNA-disease interactive network based on the common assumptions that similar diseases tend to show similar interaction and non-interaction patterns with lncRNAs, and that similar miRNAs tend to show similar interaction and non-interaction patterns with lncRNAs; (2) we introduced the concept of a distance correlation set; (3) we integrated the disease semantic similarity, the functional similarity (including the disease functional similarity and the miRNA functional similarity) and the Gaussian interaction profile kernel similarity (including the disease, miRNA and lncRNA Gaussian interaction profile kernel similarities); (4) we introduced an optimized matrix by integrating the Gaussian interaction profile kernel similarities of the miRNA pairs and disease pairs; (5) negative samples are not required in DCSMDA; and (6) DCSMDA can be applied to human diseases without relying on any known miRNA-disease associations.
Known disease-lncRNA associations
Because the number of lncRNA-disease associations is limited and many heterogeneous biological datasets have been constructed, we collected 8842 known disease-lncRNA associations from the MNDR dataset (http://www.bioinformatics.ac.cn/mndr/index.html) and 2934 known disease-lncRNA associations from the LncRNADisease dataset (http://www.cuilab.cn/lncrnadisease). Since the disease names in the LncRNADisease database differ from those in the MNDR dataset, we mapped the diseases in these two disease-lncRNA association datasets to their MeSH descriptors. After eliminating diseases without any MeSH descriptors, merging the diseases with the same MeSH descriptors and removing the lncRNAs that were not present in the lncRNA-miRNA dataset (DS4) used in this paper, 583 known lncRNA-disease associations (DS1) were obtained from the LncRNADisease dataset (see Additional file 1), and 702 known lncRNA-disease associations (DS2) were obtained from the MNDR dataset (see Additional file 2). Furthermore, after integrating the DS1 and DS2 datasets and removing the duplicate associations, we obtained the DS3 dataset, which included 1073 disease-lncRNA associations (see Additional file 3).
Known lncRNA-miRNA associations
To construct the lncRNA-miRNA network, the lncRNA-miRNA association dataset DS4 was obtained from the starBase v2.0 database (http://starbase.sysu.edu.cn/) on February 2, 2017, which provides the most comprehensive experimentally confirmed lncRNA-miRNA interactions based on large-scale CLIP-Seq data. After the data pre-processing (including the elimination of duplicate values, erroneous data, and disorganized data), removing the lncRNAs that did not exist in the DS3 dataset and merging the miRNA copies that produce the same mature miRNA, we finally obtained 1883 lncRNA-miRNA associations (DS4) (see Additional file 4).
Known disease-miRNA associations
To validate the performance of DCSMDA, the known human miRNA-disease associations were downloaded from the latest version of the HMDD database, which is considered the gold-standard dataset. In this dataset, after eliminating the duplicate associations and the miRNA-disease associations involving diseases or lncRNAs not contained in DS3 or DS4, we finally obtained 3252 high-quality miRNA-disease associations (DS5) (see Additional file 5).
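The dataset construction described in this section amounts to simple set operations; the following minimal sketch illustrates the union and intersection logic used to merge and filter the associations. All tuples are illustrative placeholders, not entries from the actual databases.

# Merge the two lncRNA-disease lists, drop duplicates via set union, and
# keep only lncRNAs that also occur in the lncRNA-miRNA dataset.
ds1 = {("H19", "Prostatic Neoplasms"), ("MALAT1", "Lung Neoplasms")}   # LncRNADisease (toy)
ds2 = {("H19", "Prostatic Neoplasms"), ("HOTAIR", "Leukaemia")}        # MNDR (toy)
ds4 = {("hsa-mir-15a", "H19"), ("hsa-mir-195", "HOTAIR")}              # starBase (toy)

ds3 = ds1 | ds2                                         # union removes duplicates
lncs_in_ds4 = {lnc for _, lnc in ds4}
ds3 = {(lnc, dis) for lnc, dis in ds3 if lnc in lncs_in_ds4}
print(sorted(ds3))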
Construction of the disease-lncRNA-miRNA interaction network
To clearly demonstrate the process of constructing the disease-lncRNA-miRNA interaction network, we use the disease-lncRNA dataset DS3 and the lncRNA-miRNA dataset DS4 as examples. We defined L to represent all the different lncRNA terms in DS3 and DS4 and then constructed the disease-lncRNA-miRNA interactive network based on DS3 and DS4 according to the following 3 steps:
Step 1 (Construction of the disease-lncRNA network): Let D and L be the number of different diseases and lncRNAs obtained from DS3, respectively. S_D = {d_1, d_2, ..., d_D} represents the set of all D different diseases in DS3, and S_L = {l_1, l_2, ..., l_L} represents the set of all L different lncRNAs in DS3. For any given d_i ∈ S_D and l_j ∈ S_L, we can construct the D × L dimensional matrix KAM1 as follows:
$$ KAM1(i,j) = \begin{cases} 1 & \text{if } d_i \text{ is related to } l_j \text{ in } DS_3 \\ 0 & \text{otherwise} \end{cases} $$
Step 2 (Construction of the lncRNA-miRNA network): Let M be the number of different miRNAs obtained from DS4. S_M = {m_1, m_2, ..., m_M} represents the set of all M different miRNAs in DS4. For any given m_i ∈ S_M and l_j ∈ S_L, we can construct the M × L dimensional matrix KAM2 as follows:
$$ KAM2(i,j) = \begin{cases} 1 & \text{if } m_i \text{ is related to } l_j \text{ in } DS_4 \\ 0 & \text{otherwise} \end{cases} $$
Step 3 (Construction of the disease-lncRNA-miRNA interactive network): Based on the disease-lncRNA network and the lncRNA-miRNA network, we can obtain the undirected graph G_3 = (V_3, E_3), where V_3 = S_D ∪ S_L ∪ S_M = {d_1, d_2, ..., d_D, l_{D+1}, l_{D+2}, ..., l_{D+L}, m_{D+L+1}, m_{D+L+2}, ..., m_{D+L+M}} is the set of vertices and E_3 is the edge set of G_3, with d_i ∈ S_D, l_j ∈ S_L and m_k ∈ S_M. Here, an edge exists between d_i and l_j in E_3 if KAM1(d_i, l_j) = 1, and an edge exists between l_j and m_k in E_3 if KAM2(m_k, l_j) = 1. Then, for any given a, b ∈ V_3, we can define the Strong Correlation (SC) between a and b as follows:
$$ SC(a,b) = \begin{cases} 1 & \text{if there is an edge between } a \text{ and } b \\ 0 & \text{otherwise} \end{cases} $$
Notably, although we did not use any known disease-miRNA associations, diseases and miRNAs can still be indirectly linked through the edges between disease nodes and lncRNA nodes and the edges between miRNA nodes and lncRNA nodes in G_3.
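To make Steps 1-3 concrete, here is a minimal sketch that assembles toy KAM1/KAM2 incidence matrices into the symmetric adjacency matrix of G_3; the sizes and entries are illustrative assumptions, not values from DS3 or DS4.

import numpy as np

D, L, M = 3, 4, 2                                       # toy counts
kam1 = np.zeros((D, L)); kam1[0, 1] = kam1[2, 3] = 1    # disease-lncRNA incidence
kam2 = np.zeros((M, L)); kam2[1, 1] = kam2[0, 3] = 1    # miRNA-lncRNA incidence

# SC(a, b) is the symmetric adjacency of G3 over [diseases | lncRNAs | miRNAs].
n = D + L + M
am = np.zeros((n, n))
am[:D, D:D + L] = kam1
am[D:D + L, :D] = kam1.T
am[D + L:, D:D + L] = kam2
am[D:D + L, D + L:] = kam2.T

# The disease-miRNA block of AM is empty: as remarked above, diseases and
# miRNAs are linked only through paths that pass through lncRNA nodes.
assert am[:D, D + L:].sum() == 0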
Disease semantic similarity
We downloaded the MeSH descriptors of the diseases from the National Library of Medicine (http://www.nlm.nih.gov/), which introduced the concept of Categories and Subcategories and provided a strict system for disease classification. The topology of each disease was visualized as a Directed Acyclic Graph (DAG) in which the nodes represented the disease MeSH descriptors, and all MeSH descriptors in the DAG were linked from more general terms (parent nodes) to more specific terms (child nodes) by a direct edge (see Fig. 4). Let DAG(A) = (A, T(A), E(A)), where A represents disease A, T(A) represents the node set, including node A and its ancestor nodes, and E(A) represents the corresponding edge set. Then, we defined the contribution of disease term d in DAG(A) to the semantic value of disease A as follows:
$$ \begin{cases} D_A(d) = 1 & \text{if } d = A \\ D_A(d) = \max\left\{0.5 \ast D_A(d^{\ast}) \mid d^{\ast} \in \text{children of } d\right\} & \text{if } d \neq A \end{cases} $$
The disease DAGs of Prostatic Neoplasms and Gastrointestinal Neoplasms
For example, the semantic value of the disease 'Gastrointestinal Neoplasms' shown in Fig. 4 is calculated by summing the weighted contributions of 'Neoplasms' (0.125), 'Neoplasms by Site' (0.25), 'Digestive System Diseases' (0.25), 'Digestive System Neoplasms' (0.5) and 'Gastrointestinal Diseases' (0.5), together with the contribution (1) of 'Gastrointestinal Neoplasms' itself.
Then, the semantic value of disease A can be obtained by summing the contributions from all disease terms in DAG(A), and the semantic similarity between two diseases d_i and d_j can be calculated as follows:
$$ SSD(d_i, d_j) = \frac{\sum_{d \in T(d_i) \cap T(d_j)} \left( D_{d_i}(d) + D_{d_j}(d) \right)}{\sum_{d \in T(d_i)} D_{d_i}(d) + \sum_{d \in T(d_j)} D_{d_j}(d)} $$
where SSD is the disease semantic similarity matrix.
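A small sketch of the recursion for D_A(d) and the SSD formula above, assuming a toy MeSH-style DAG in which each disease term maps to its parent terms; the DAG below is illustrative, not real MeSH data.

def semantic_values(dag, a):
    """Contribution D_A(d) of every ancestor d of disease a (decay factor 0.5)."""
    contrib = {a: 1.0}
    frontier = [a]
    while frontier:
        node = frontier.pop()
        for parent in dag.get(node, []):
            value = 0.5 * contrib[node]
            if value > contrib.get(parent, 0.0):   # max over children of the parent
                contrib[parent] = value
                frontier.append(parent)
    return contrib

def ssd(dag, di, dj):
    ci, cj = semantic_values(dag, di), semantic_values(dag, dj)
    shared = set(ci) & set(cj)                     # terms in T(d_i) ∩ T(d_j)
    return sum(ci[d] + cj[d] for d in shared) / (sum(ci.values()) + sum(cj.values()))

dag = {"GI Neoplasms": ["Digestive Neoplasms", "GI Diseases"],
       "Digestive Neoplasms": ["Neoplasms by Site", "Digestive Diseases"],
       "GI Diseases": ["Digestive Diseases"],
       "Neoplasms by Site": ["Neoplasms"]}
print(ssd(dag, "GI Neoplasms", "GI Diseases"))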
MiRNA Gaussian interaction profile kernel similarity
Based on the assumption that similar miRNAs tend to show similar interaction and non-interaction patterns with lncRNAs, in this section we introduce the Gaussian interaction profile kernel used to calculate the network topological similarity between miRNAs, using the vector MLP(m_i) to denote the ith row of the adjacency matrix KAM2. Then, the Gaussian interaction profile kernel similarity for all investigated miRNAs can be calculated as follows:
$$ MGS\left({m}_i,{m}_j\right)=\exp \left(-\frac{M\ast {\left\Vert MLP\left({m}_i\right)- MLP\left({m}_j\right)\right\Vert}^2}{\sum \limits_{i=1}^M{\left\Vert MLP\left({m}_i\right)\right\Vert}^2}\right) $$
where parameter M is the number of miRNAs in DS4.
Disease Gaussian interaction profile kernel similarity
Based on the assumption that similar diseases tend to show similar interaction and non-interaction patterns with lncRNAs, the Gaussian interaction profile kernel similarity for all investigated diseases can be calculated as follows:
$$ DGS\left({d}_i,{d}_j\right)=\exp \left(-\frac{D\ast {\left\Vert DLP\left({d}_i\right)- DLP\left({d}_j\right)\right\Vert}^2}{\sum \limits_{i=1}^D{\left\Vert DLP\left({d}_i\right)\right\Vert}^2}\right) $$
where parameter D is the number of diseases in DS3, and DLP(d_i) represents the ith row of the matrix KAM1. Then, based on previous work [46], we can mitigate predictive accuracy problems via the following logistic function transformation:
$$ FDGS\left({d}_i,{d}_j\right)=\frac{1}{1+{e}^{-15\ast DGS\left({d}_i,{d}_j\right)+\log (9999)}} $$
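The Gaussian interaction profile kernel and the logistic transformation above can be computed directly from the interaction profiles; the sketch below is a minimal implementation on a toy incidence matrix (all inputs are illustrative assumptions).

import numpy as np

def gaussian_profile_kernel(profiles):
    """Kernel over the rows of `profiles` (e.g., rows of KAM1 or KAM2)."""
    n = profiles.shape[0]
    gamma = n / np.sum(profiles ** 2)              # n divided by the summed squared norms
    sq_dists = np.sum((profiles[:, None, :] - profiles[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

def logistic_transform(k):
    # The transformation applied to the disease kernel DGS above.
    return 1.0 / (1.0 + np.exp(-15.0 * k + np.log(9999.0)))

kam1 = np.array([[1., 0., 1.], [1., 0., 0.], [0., 1., 1.]])  # toy incidence matrix
dgs = gaussian_profile_kernel(kam1)       # disease kernel from the rows of KAM1
fdgs = logistic_transform(dgs)
lgs1 = gaussian_profile_kernel(kam1.T)    # lncRNA kernel from the columns (LGS1)
print(np.round(fdgs, 3))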
lncRNA Gaussian interaction profile kernel similarity
Based on the assumption that similar lncRNAs tend to show similar interaction and non-interaction patterns with miRNAs and similar lncRNAs tend to show similar interaction and non-interaction patterns with diseases, the Gaussian interaction profile kernel similarity matrix for all investigated lncRNAs in DS3 can be computed in a similar way as that for disease, as follows:
$$ LGS1\left({l}_i,{l}_j\right)=\exp \left(-\frac{L\ast {\left\Vert LDP\left({l}_i\right)- LDP\left({l}_j\right)\right\Vert}^2}{\sum \limits_{i=1}^L{\left\Vert LDP\left({l}_i\right)\right\Vert}^2}\right) $$
where parameter L is the number of lncRNAs in DS3, and LDP(l_i) represents the ith column of the matrix KAM1.
Obviously, the Gaussian interaction profile kernel similarity for all investigated lncRNAs in DS4 can be computed as follows:
$$ LGS2(l_i, l_j) = \exp\left( -\frac{L \ast \left\Vert LMP(l_i) - LMP(l_j) \right\Vert^2}{\sum_{i=1}^{L} \left\Vert LMP(l_i) \right\Vert^2} \right) $$
where LMP(l_i) represents the ith column of the matrix KAM2.
Disease functional similarity based on the lncRNAs
To calculate the functional similarity of the diseases, we first constructed the undirected graph G_1 = (V_1, E_1) based on KAM1, where V_1 = S_D ∪ S_L = {d_1, d_2, …, d_D, l_{D+1}, l_{D+2}, …, l_{D+L}} is the set of vertices and E_1 is the set of edges; for any two nodes a, b ∈ V_1, an edge exists between a and b in E_1 if KAM1(a, b) = 1. Therefore, we can calculate the similarity between two disease nodes by comparing and integrating the similarities of the lncRNA nodes associated with these two disease nodes, based on the assumption that similar diseases tend to show similar interaction and non-interaction patterns with lncRNAs. The procedure used to calculate the disease functional similarity is shown in Fig. 5.
The Flow chart of the disease functional similarity calculation model
Because different lncRNA terms in DS3 may relate to several diseases, assigning the same contribution value to all lncRNAs is not suitable; therefore, we defined the contribution value of each lncRNA as follows:
$$ C(l_i) = \frac{\text{number of } l_i\text{-related edges in } E_1}{\text{number of all edges in } E_1} $$
Based on the definition of C(l_i), we can define the contribution value of each lncRNA to the functional similarity of each disease pair as follows:
$$ CD_{ij}(l_k) = \begin{cases} 1 & \text{if lncRNA } l_k \text{ is related to both } d_i \text{ and } d_j \\ C(l_k) & \text{if lncRNA } l_k \text{ is related to only } d_i \text{ or } d_j \end{cases} $$
Finally, we can define the functional similarity between diseases d_i and d_j by integrating the lncRNAs related to d_i, d_j or both as follows:
$$ FSD\left({d}_i,{d}_j\right)=\frac{\sum \limits_{l_k\in \left(D\left({d}_i\right)\cup D\left({d}_j\right)\right)}C{D}_{ij}\left({l}_k\right)}{\mid D\left({d}_i\right)\mid +\mid D\left({d}_j\right)\mid -\mid D\left({d}_i\right)\cap D\left({d}_j\right)\mid } $$
where D(d_i) and D(d_j) represent all lncRNAs related to d_i and d_j in E_1, respectively.
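A compact sketch of C(l_k), CD_ij and FSD on a toy KAM1 follows; the same computation with KAM2 in place of KAM1 yields the miRNA functional similarity FSM described in the next subsection. The matrix entries are illustrative.

import numpy as np

def fsd(kam1, i, j):
    """Functional similarity between diseases i and j from their lncRNA profiles."""
    c = kam1.sum(axis=0) / kam1.sum()            # C(l_k): l_k's share of all edges
    di, dj = kam1[i].astype(bool), kam1[j].astype(bool)
    both, only_one = di & dj, di ^ dj            # shared vs. exclusive lncRNAs
    numer = both.sum() + c[only_one].sum()       # CD_ij summed over D(d_i) ∪ D(d_j)
    denom = di.sum() + dj.sum() - both.sum()     # |D(d_i)| + |D(d_j)| - |intersection|
    return numer / denom if denom else 0.0

kam1 = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 1]], dtype=float)
print(fsd(kam1, 0, 1))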
MiRNA functional similarity based on lncRNAs
Based on the assumption that similar miRNAs tend to show similar interaction and non-interaction patterns with lncRNAs, we can also calculate the miRNA functional similarity in the lncRNA-miRNA interactive network. Similar to the procedure used to calculate the disease functional similarity, we first constructed the undirected graph G_2 = (V_2, E_2), where V_2 = S_M ∪ S_L = {m_1, m_2, …, m_M, l_{M+1}, l_{M+2}, …, l_{M+L}} is the set of vertices and E_2 is the set of edges; for any two nodes a, b ∈ V_2, an edge exists between a and b in E_2 if KAM2(a, b) = 1. Then, we defined the contribution of each lncRNA to the functional similarity of each miRNA pair as follows:
$$ CM_{ij}(l_k) = \begin{cases} 1 & \text{if lncRNA } l_k \text{ is related to both } m_i \text{ and } m_j \\ C(l_k) & \text{if lncRNA } l_k \text{ is related to only } m_i \text{ or } m_j \end{cases} $$
Additionally, we can define the functional similarity between m_i and m_j as follows:
$$ FSM(m_i, m_j) = \frac{\sum_{l_k \in (D(m_i) \cup D(m_j))} CM_{ij}(l_k)}{|D(m_i)| + |D(m_j)| - |D(m_i) \cap D(m_j)|} $$
where D(m_i) and D(m_j) represent all lncRNAs related to m_i and m_j in E_2, respectively.
Integrated similarity
The processes used to calculate the integrated similarities of the diseases, lncRNAs and miRNAs are illustrated in Fig. 6. Combining the disease semantic similarity, the disease Gaussian interaction profile kernel similarity and the disease functional similarity mentioned above, we can construct the disease integrated similarity matrix FDD as follows:
$$ FDD=\frac{SSD+ FDGS+ FSD}{3} $$
Flow chart of the calculation of the disease, lncRNA and miRNA integrated similarities
Additionally, based on the miRNA Gaussian interaction profile kernel similarity and the miRNA functional similarity, we can construct the miRNA integrated similarity matrix FMM as follows:
$$ FMM=\frac{MGS+ FSM}{2} $$
Furthermore, based on the Gaussian interaction profile kernel similarity matrices LGS1 and LGS2, we can construct the lncRNA integrated similarity matrix FLL as follows:
$$ FLL=\frac{LGS1+ LGS2}{2} $$
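These integrations are plain element-wise averages; the tiny sketch below instantiates the three formulas above on toy matrices, which are illustrative stand-ins for SSD, FDGS, FSD, MGS and FSM.

import numpy as np

ssd_m  = np.array([[1.0, 0.2], [0.2, 1.0]])   # disease semantic similarity (toy)
fdgs_m = np.array([[1.0, 0.4], [0.4, 1.0]])   # transformed disease kernel (toy)
fsd_m  = np.array([[1.0, 0.3], [0.3, 1.0]])   # disease functional similarity (toy)
mgs_m  = np.array([[1.0, 0.6], [0.6, 1.0]])   # miRNA kernel similarity (toy)
fsm_m  = np.array([[1.0, 0.5], [0.5, 1.0]])   # miRNA functional similarity (toy)

fdd = (ssd_m + fdgs_m + fsd_m) / 3.0          # disease integrated similarity
fmm = (mgs_m + fsm_m) / 2.0                   # miRNA integrated similarity
print(fdd, fmm, sep="\n")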
Prediction of disease-miRNA associations based on a distance correlation set
In this section, we developed a novel computational method, i.e., DCSMDA, to predict potential disease-miRNA associations by introducing a distance correlation set based on the following assumptions: similar diseases tend to show similar interaction and non-interaction patterns with lncRNAs, and similar lncRNAs tend to show similar interaction and non-interaction patterns with miRNAs. As illustrated in Fig. 7, the DCSMDA procedure consists of the following 5 major steps:
The procedures of DCSMDA
Step 1 (Construction of the adjacency matrix based on G_3): First, we construct a (D + L + M) × (D + L + M) Adjacency Matrix (AM) based on the undirected graph G_3 and SC; then, for any two nodes v_i, v_j ∈ V_3, we can define AM(i, j) as follows:
$$ AM(i,j) = \begin{cases} SC(d_i, d_j) & \text{if } i \in [1, D] \text{ and } j \in [1, D] \\ SC(d_i, l_j) & \text{if } i \in [1, D] \text{ and } j \in [D, D+L] \\ SC(d_i, m_j) & \text{if } i \in [1, D] \text{ and } j \in [D+L, D+L+M] \\ SC(l_i, d_j) & \text{if } i \in [D, D+L] \text{ and } j \in [1, D] \\ SC(l_i, l_j) & \text{if } i \in [D, D+L] \text{ and } j \in [D, D+L] \\ SC(l_i, m_j) & \text{if } i \in [D, D+L] \text{ and } j \in [D+L, D+L+M] \\ SC(m_i, d_j) & \text{if } i \in [D+L, D+L+M] \text{ and } j \in [1, D] \\ SC(m_i, l_j) & \text{if } i \in [D+L, D+L+M] \text{ and } j \in [D, D+L] \\ SC(m_i, m_j) & \text{if } i \in [D+L, D+L+M] \text{ and } j \in [D+L, D+L+M] \end{cases} $$
where i ∈ [1, D + L + M] and j ∈ [1, D + L + M]; to allow calculation of the shortest distance matrix in Step 2, we define AM(i, j) = 1 if i = j.
Step 2 (Construction of the shortest distance matrix based on the adjacency matrix AM): First, we set the parameter b, a pre-determined positive integer, to control the bandwidth of the distance correlation set. We can then obtain the b matrix powers AM^1, AM^2, ..., AM^b of the matrix defined in formula (19), and the Shortest Path Matrix is calculated as follows:
$$ SPM(i,j) = \begin{cases} 1 & \text{if } AM(i,j) = 1 \\ k & \text{otherwise} \end{cases} $$
where i ∈ [1, D + M + L], j ∈ [1, D + M + L], k ∈ [2, b], and k satisfies AM^k(i, j) ≠ 0 while AM^1(i, j) = AM^2(i, j) = … = AM^{k-1}(i, j) = 0.
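As a sketch of Step 2, SPM can be read off the boolean powers of AM; the implementation below assumes AM has 1s on its diagonal (as defined in Step 1) and leaves pairs that are unreachable within b hops at 0, a representational choice not spelled out in the text.

import numpy as np

def shortest_path_matrix(am, b):
    n = am.shape[0]
    spm = np.zeros((n, n))
    reach = np.eye(n)
    for k in range(1, b + 1):
        reach = (reach @ am > 0).astype(float)   # AM^k > 0: reachable within k hops
        spm[(reach > 0) & (spm == 0)] = k        # record the first k that connects (i, j)
    return spm

am = np.array([[1., 1., 0., 0.],
               [1., 1., 1., 0.],
               [0., 1., 1., 1.],
               [0., 0., 1., 1.]])
print(shortest_path_matrix(am, b=3))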
Step 3 (Calculation of distance correlation sets and distance coefficient of each node pair in G 3 ):
For each node v_i ∈ V_3, we can obtain the distance correlation set DCS(i) from the shortest distance matrix as follows:
$$ DCS(i) = \left\{ v_j \mid b \ge SPM(i,j) > 0 \right\} $$
where DCS(i) contains the node itself and all nodes at a shortest distance of at most b.
For instance, in the disease-miRNA-lncRNA interaction network illustrated in Fig. 7, DCS(seed node) contains all candidate nodes when b is set to 2.
Then, we can calculate the distance coefficient (DC) of the node pair (v_i, v_j) as follows:
$$ P(i,j) = \begin{cases} SPM(i,j)^{b+1} & \text{if } i \in DCS(j) \text{ or } j \in DCS(i) \\ 0 & \text{otherwise} \end{cases} $$
Furthermore, we can construct a Distance Correlation Matrix (DCM) based on the disease integrated similarity, the lncRNA integrated similarity, and the miRNA integrated similarity as follows:
$$ DCM(i,j) = \begin{cases} P(i,j) \ast \exp(FDD(i,j)) & \text{if } i \in [1, D] \text{ and } j \in [1, D] \\ P(i,j) \ast \exp(FLL(i,j)) & \text{if } i \in [D, D+L] \text{ and } j \in [D, D+L] \\ P(i,j) \ast \exp(FMM(i,j)) & \text{if } i \in [D+L, D+L+M] \text{ and } j \in [D+L, D+L+M] \\ P(i,j) \ast \frac{SPM(i,j)}{b} & \text{otherwise} \end{cases} $$
where i∈[1, D + L + M] and j∈[1, D + L + M].
Step 4 (Estimation of the association degree between a pair of nodes): Based on formula (23), we can estimate the association degree between v_i and v_j as follows:
$$ PM\left(i,j\right)=\frac{\sum_{k=1}^{D+L+M} DCM\left(i,k\right)+\sum_{k=1}^{D+L+M} DCM\left(k,j\right)}{D+L+M} $$
Thus, we can obtain the prediction matrix PM, where the entry PM(i, j) in row i and column j represents the predicted association between nodes v_i and v_j.
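Step 4 amounts to one row sum and one column sum per node pair; a minimal NumPy sketch, continuing the illustrative code above:

def prediction_matrix(DCM):
    # PM(i, j) = (sum_k DCM(i, k) + sum_k DCM(k, j)) / (D + L + M)
    n = DCM.shape[0]
    rows = DCM.sum(axis=1, keepdims=True)  # shape (n, 1): row sums
    cols = DCM.sum(axis=0, keepdims=True)  # shape (1, n): column sums
    return (rows + cols) / n  # broadcasts to the full (n, n) matrix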
Step 5 (Calculation of the final prediction result matrix between the miRNAs and diseases): Let \( PM=\left[\begin{array}{ccc} C_{11} & C_{12} & C_{13}\\ C_{21} & C_{22} & C_{23}\\ C_{31} & C_{32} & C_{33}\end{array}\right] \), where C_{11} is a D×D matrix, C_{12} is a D×L matrix, C_{13} is a D×M matrix, C_{21} is an L×D matrix, C_{22} is an L×L matrix, C_{23} is an L×M matrix, C_{31} is an M×D matrix, C_{32} is an M×L matrix, and C_{33} is an M×M matrix. The block C_{13} is our predicted result: it provides the association probability between each disease and miRNA. A previous study [27] demonstrated that the Gaussian interaction profile kernel similarity is a highly efficient tool for optimizing prediction results, and we therefore used the miRNA and disease Gaussian interaction profile kernel similarities to optimize the result of DCSMDA as follows:
$$ FAD = FDD \ast C_{13} \ast FMM $$
where the matrix FAD contains the final predicted association scores for the miRNA-disease pairs.
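Assuming PM, FDD, and FMM are NumPy arrays with the disease/lncRNA/miRNA block ordering described above, Step 5 reduces to one slice and two matrix products; this is only an illustrative sketch continuing the code above:

# D, L, M: numbers of diseases, lncRNAs, and miRNAs
C13 = PM[:D, D + L:]   # the D x M disease-miRNA block of the partitioned PM
FAD = FDD @ C13 @ FMM  # the final prediction matrix, FAD = FDD * C13 * FMM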
AUC:
Area under the ROC curve
DCSMDA:
Distance Correlation Set-based method to predict MiRNA-Disease Associations
FPR:
False positive rate
miRNA:
MicroRNA
ncRNA:
Non-coding RNA
ROC:
Receiver operating characteristic
TPR:
True positive rate
LOOCV:
Leave-one-out cross-validation
The authors thank the anonymous referees for suggestions that helped improve the paper substantially.
The project is partly sponsored by the Construct Program of the Key Discipline in Hunan Province, the National Natural Science Foundation of China (No. 61640210, No. 61672447), the CERNET Next Generation Internet Technology Innovation Project (No. NGII20160305), the Science & Education Joint Project of Hunan Natural Science Foundation (No. 2017JJ5036), and the Upgrading Project of Industry-University-Research of Xiangtan University (No. 11KZ|KZ03051).
All data generated or analyzed during this study are included in this published article [Additional file 1, Additional file 2, Additional file 3, Additional file 4 and Additional file 5].
HCZ conceived the study. HCZ, LAK and LW developed the method. PYP and ZWX implemented the algorithms. HCZ and TRP analyzed the data. LW supervised the study. HCZ and LW wrote the manuscript. ZLW, PYP and LW reviewed and improved the manuscript; ZLW provided supplementary data. All authors read and approved the final manuscript.
Additional file 1: The known lncRNA-disease associations for constructing the DS1. We list 583 known lncRNA-disease associations which were collected from the LncRNADisease dataset to construct the DS1. (XLS 58 kb)
Additional file 2: The known lncRNA-disease associations for constructing the DS2. We list 702 known lncRNA-disease associations which were collected from the MNDR dataset to construct the DS2. (XLS 63 kb)
Additional file 3: The integrated lncRNA-disease associations for constructing the DS3. We list 1073 lncRNA-disease associations which were collected by integrating the datasets of DS1 and DS2. (XLS 83 kb)
Additional file 4: The known lncRNA-miRNA associations for constructing the DS4. We list 1883 known lncRNA-miRNA associations which were collected from the starBase v2.0 database to construct the DS4. (XLS 123 kb)
Additional file 5: The known miRNA-disease associations for constructing the DS5. We list 3252 high-quality miRNA-disease associations which were collected from the HMDD database to validate the performance of our method. (XLS 191 kb)
College of Computer Engineering & Applied Mathematics, Changsha University, Changsha, 410001, Hunan, People's Republic of China
Key Laboratory of Intelligent Computing & Information Processing (Xiangtan University), Ministry of Education, China, Xiangtan, 411105, Hunan, People's Republic of China
Department of Computer Science, Lakehead University, Thunder Bay, ON, P7B5E1, Canada
Department of Computer Science, Princeton University, Princeton, New Jersey, USA
College of Information Engineering, Xiangtan University, Xiangtan, Hunan, People's Republic of China
Wang Z, Gerstein M, Snyder M. RNA-Seq: a revolutionary tool for transcriptomics. Nat Rev Genet. 2009;10(1):57–63.
Crick FHC, Barnett L, Brenner S, Watts-Tobin RJ. General nature of the genetic code for proteins. Nature. 1961;192(4809):1227–32.
Mattick JS, Makunin IV. Non-coding RNA. Hum Mol Genet. 2006;15(suppl_1):R17.
Esteller M. Non-coding RNAs in human disease. Nat Rev Genet. 2011;12(12):861–74.
Mattick JS, Rinn JL. Discovery and annotation of long noncoding RNAs. Nat Struct Mol Biol. 2015;22(1):5.
Ambros V. The functions of animal microRNAs. Nature. 2004;431(7006):350.
Cheng AM, Byrom MW, Shelton J, Ford LP. Antisense inhibition of human miRNAs and indications for an involvement of miRNA in cell growth and apoptosis. Nucleic Acids Res. 2005;33(4):1290–7.
Taguchi Y. Inference of target gene regulation via miRNAs during cell senescence by using the MiRaGE server. Aging Dis. 2012;3(4):301.
Peng H, Lan C, Zheng Y, Hutvagner G, Tao D, Li J. Cross disease analysis of co-functional microRNA pairs on a reconstructed network of disease-gene-microRNA tripartite. BMC Bioinformatics. 2017;18(1):193.
Weber MJ. New human and mouse microRNA genes found by homology search. FEBS J. 2005;272(1):59.
Thum T, Gross C, Fiedler J, Fischer T, Kissler S, Bussen M, et al. MicroRNA-21 contributes to myocardial disease by stimulating MAP kinase signalling in fibroblasts. Nature. 2008;456(7224):980–4.
Cogswell JP, Ward J, Taylor IA, Waters M, Shi Y, Cannon B, et al. Identification of miRNA changes in Alzheimer's disease brain and CSF yields putative biomarkers and insights into disease pathways. J Alzheimers Dis. 2008;14(1):27–41.
Corsten MF, Dennert R, Jochems S, Kuznetsova T, Devaux Y, Hofstra L, et al. Circulating MicroRNA-208b and MicroRNA-499 reflect myocardial damage in cardiovascular disease. Circ Cardiovasc Genet. 2010;3(6):499.
Ikeda S, Kong SW, Lu J, Bisping E, Zhang H, Allen PD, et al. Altered microRNA expression in human heart disease. Physiol Genomics. 2007;31(3):367–73.
Lu M, Zhang Q, Deng M, Miao J, Guo Y, Gao W, et al. An analysis of human microRNA and disease associations. PLoS One. 2008;3(10):e3420.
Chen X, Liu MX, Yan GY. RWRMDA: predicting novel human microRNA-disease associations. Mol BioSyst. 2012;8(10):2792.
Li Y, Qiu C, Tu J, Geng B, Yang J, Jiang T, et al. HMDD v2.0: a database for experimentally supported human microRNA and disease associations. Nucleic Acids Res. 2014;42(Database issue):1070–4.
Wang D, Wang J, Lu M, Song F, Cui Q. Inferring the human microRNA functional similarity and functional network based on microRNA-associated diseases. Bioinformatics. 2010;26(13):1644–50.
Jiang Q, Wang Y, Hao Y, Juan L, Teng M, Zhang X, et al. miR2Disease: a manually curated database for microRNA deregulation in human disease. Nucleic Acids Res. 2009;37(1):D98–104.
Zou Q, Li J, Hong Q, Lin Z, Wu Y, Shi H, et al. Prediction of microRNA-disease associations based on social network analysis methods. Biomed Res Int. 2015;2015(10):810514.
You ZH, Wang LP, Chen X, et al. PRMDA: personalized recommendation-based MiRNA-disease association prediction. Oncotarget. 2017;8(49):85568–83.
Shi H, Xu J, Zhang G, Xu L, Li C, Wang L, et al. Walking the interactome to identify human miRNA-disease associations through the functional link between miRNA targets and disease genes. BMC Syst Biol. 2013;7(1):1–12.
Jiang Q, Hao Y, Wang G, Juan L, Zhang T, Teng M, et al. Prioritization of disease microRNAs through a human phenome-microRNAome network. BMC Syst Biol. 2010;4(S1):S2.
Han K, Xuan P, Ding J, Zhao ZJ, Hui L, Zhong YL. Prediction of disease-related microRNAs by incorporating functional similarity and common association information. Genet Mol Res. 2014;13(1):2009–19.
Xuan P, Han K, Guo M, Guo Y, Li J, Ding J, et al. Prediction of microRNAs associated with human diseases based on weighted k most similar neighbors. PLoS One. 2013;8(9):e70204.
Xuan P, Han K, Guo Y, Li J, Li X, Zhong Y, et al. Prediction of potential disease-associated microRNAs based on random walk. Bioinformatics. 2015;31(11):1805–15.
Chen X, Yan GY. Semi-supervised learning for potential human microRNA-disease associations inference. Sci Rep. 2014;4:5501.
Chen X, Yan CC, Zhang X, You ZH, Deng L, Liu Y, et al. WBSMDA: within and between score for miRNA-disease association prediction. Sci Rep. 2016;6:21106.
Wang Y, Chen L, Chen B, Li X, Kang J, Fan K, et al. Mammalian ncRNA-disease repository: a global view of ncRNA-mediated disease network. Cell Death Dis. 2013;4(8):e765.
Chen G, Wang Z, Wang D, Qiu C, Liu M, Chen X, et al. LncRNADisease: a database for long-non-coding RNA-associated diseases. Nucleic Acids Res. 2013;41(Database issue):983–6.
Chen X. Predicting lncRNA-disease associations and constructing lncRNA functional similarity network based on the information of miRNA. Sci Rep. 2015;5:13186.
Huang WT, Guo XQ, Dai JP, Chen RS. MicroRNA and lncRNA in neurodegenerative diseases. Prog Biochem Biophys. 2010;37(8):826–33.
Guo L, Peng Y, Meng Y, et al. Expression profiles analysis reveals an integrated miRNA-lncRNA signature to predict survival in ovarian cancer patients with wild-type BRCA1/2. Oncotarget. 2017;8(40):68483.
Spiess PE, Dhillon J, Baumgarten AS, Johnstone PA, Giuliano AR. Pathophysiological basis of human papillomavirus in penile cancer: key to prevention and delivery of more effective therapies. CA Cancer J Clin. 2016;6:481–95.
Ruprecht B, Zaal EA, Zecha J, Wu W, Berkers CR, Kuster B, Lemeer S. Lapatinib resistance in breast cancer cells is accompanied by phosphorylation-mediated reprogramming of glycolysis. Cancer Res. 2017;77(8):1842–53.
Barton MK. Local consolidative therapy may be beneficial in patients with oligometastatic non-small cell lung cancer. CA Cancer J Clin. 2017;2:89–90.
Jiang J, Jia P, Zhao Z, Shen B. Key regulators in prostate cancer identified by co-expression module analysis. BMC Genomics. 2014;15(1):1015.
Medina-Villaamil V, Martínez-Breijo S, Portela-Pereira P, Quindós-Varela M, Santamarina-Caínzos I, Antón-Aparicio LM, et al. Circulating microRNAs in blood of patients with prostate cancer. Actas Urol Esp. 2014;38(10):633–9.
Cai C, Chen QB, Han ZD, Zhang YQ, He HC, Chen JH, et al. MiR-195 inhibits tumor progression by targeting RPS6KB1 in human prostate cancer. Clin Cancer Res. 2015;21(21):4922.
Bozok ÇV, Tetik VA, Düzgün Z, Tezcanlı KB, Açıkgöz E, Aktuğ H, et al. MiR-15a enhances the anticancer effects of cisplatin in the resistant non-small cell lung cancer cells. Tumor Biol. 2016;37(2):1739–51.
Liu B, Qu J, Xu F, Guo Y, Wang Y, Yu H, et al. MiR-195 suppresses non-small cell lung cancer by targeting CHEK1. Oncotarget. 2015;6(11):9445–56.
Li H, Lan H, Zhang M, An N, Yu R, He Y, et al. Effects of miR-424 on proliferation and migration abilities in non-small cell lung cancer A549 cells and its molecular mechanism. Zhongguo Fei Ai Za Zhi. 2016;19:571–6.
Sun YP, Lu F, Han XY, et al. MiR-424 and miR-27a increase TRAIL sensitivity of acute myeloid leukemia by targeting PLAG1. Oncotarget. 2016;7(18):25276–90.
Cimmino A, Calin GA, Fabbri M, Iorio MV, Ferracin M, Shimizu M, et al. MiR-15 and miR-16 induce apoptosis by targeting BCL2. Proc Natl Acad Sci U S A. 2005;102(39):13944.
Zhao TF, Jia HZ, Zhang ZZ, Zhao XS, Zou YF, Zhang W, et al. LncRNA H19 regulates ID2 expression through competitive binding to hsa-miR-19a/b in acute myelocytic leukemia. Mol Med Rep. 2017;16(3):3687.
Vanunu O, Magger O, Ruppin E, Shlomi T, Sharan R. Associating genes and protein complexes with disease via network propagation. PLoS Comput Biol. 2010;6(1):e1000641.
Julian Stier
Computer Science • Kung Fu
pygarn: Graph Assembly Representations
May 20, 2022 in CS
From my work on generative models for graphs, in which distributions of graphs are learned for Neural Architecture Search, I started exploring graphs from an auto-regressive and a probabilistic perspective. Auto-regressive in this context means that the non-canonical representation of a graph is based on a sequence in which the order matters and each step builds upon the previous one. The probabilistic perspective refers to a new idea for representing graphs not deterministically but with a representation that already supports some kind of fuzziness.
Problems with Graph Isomorphism
A graph is commonly represented visually, with an adjacency matrix, or with an adjacency list. The adjacency matrix and adjacency list are widely used as data structures and provide well-analysed runtimes for graph operations such as adding a vertex or removing an edge.
Figure 1. A visual representation of an exemplary graph of eight vertices and eight edges.
For the minimal graph example you can use python and networkx and the following code to instantiate the graph:
import numpy as np
import networkx as nx

G = nx.Graph()
G.add_nodes_from(np.arange(8))
G.add_edges_from([(0,1), (1,2), (1,3), (3,4), (4,5), (3,6), (6,7), (7,1)])
nx.draw(G)
We can obtain an adjacency list representation with:
list(nx.adjlist.generate_adjlist(G))
and obtain ['0 1', '1 2 3 7', '2', '3 4 6', '4 5', '5', '6 7', '7'] or we can get an adjacency matrix representation with:
nx.adjacency_matrix(G).todense()
matrix([[0, 1, 0, 0, 0, 0, 0, 0],
        [1, 0, 1, 1, 0, 0, 0, 1],
        [0, 1, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 1, 0, 1, 0],
        [0, 0, 0, 1, 0, 1, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 1],
        [0, 1, 0, 0, 0, 0, 1, 0]], dtype=int64)
Now let's look at a graph which has two more vertices added to the tail of the small chain coming out of the circle of four vertices:
G.add_nodes_from([8, 9])
G.add_edges_from([(5,8), (8,9)])
such that we now obtain the following two representations:
['0 1', '1 2 3 7', '2', '3 4 6', '4 5', '5 8', '6 7', '7', '8 9', '9']
matrix([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [1, 0, 1, 1, 0, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 1, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0, 0, 0, 1, 0],
        [0, 0, 0, 1, 0, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 1, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0, 0, 1, 0]], dtype=int64)
Note that the adjacency list representation grew by two entries for the two new vertices (and two new edges), while the adjacency matrix grew from $8\times 8$ to $10\times 10$ in its dimensions.
In deep learning, much revolves around automatically learning good feature representations. The issue with graphs is quite apparent in this example. For the adjacency list representation, consider the following:
G2 = nx.relabel_nodes(G, {i: 9-i for i in G.nodes})
list(nx.adjlist.generate_adjlist(G2))
for which we obtain ['9 8', '8 7 6 2', '7', '6 5 3', '5 4', '4 1', '3 2', '2', '1 0', '0']. The graph G2 is obviously isomorphic to G, as we just relabelled it, and the list representation did not change in structure except for the relabelled values. But here's a crux: in the adjacency list, the label values carry a very important part of the representation. Changing a single value can inherently change the structure. If we replace '5 8' in G with '5 9', we obtain an isomorphic graph, but if we change '3 4 6' to '3 4 5', we break up the circle of four vertices and at the same time create a triangle with other vertices. This is very sensitive compared to images, which are composed of pixel values in a grid: changing a value slightly only affects a very local region of the image and only changes the visual appearance by a small shift in color (depending on the color space used). If we use the adjacency matrix, the whole dimensions expand. We could view the smaller graph of eight vertices as being embedded in the larger graph, but then we would need to consider infinitely large matrices, which gets us into totally different mathematical fields and issues.
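To make the sensitivity argument concrete, both edits can be checked with networkx. A small sketch reusing the 10-vertex G from above; H1 and H2 are my own illustrative names:

H1 = G.copy()
H1.remove_edge(5, 8); H1.add_edge(5, 9)  # replace '5 8' with '5 9'
print(nx.is_isomorphic(G, H1))           # True: this merely swaps the labels 8 and 9

H2 = G.copy()
H2.remove_edge(3, 6); H2.add_edge(3, 5)  # replace '3 4 6' with '3 4 5'
print(nx.is_isomorphic(G, H2))           # False: the 4-cycle breaks, a triangle appears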
Deep Learning and Graphs
Now what's the actual set of problems we'd like to tackle? With generative models for graphs, one goal is to draw random graphs from an unknown family of graphs or a partially described distribution of graphs: $g \sim P(G)$. This notation can be seen in analogy to, e.g., $n \sim \mathcal{N}(\mu=0;\sigma=1.0)$, drawing a number from a standard gaussian distribution. How is such an $n$ obtained in practice? An example is the Box-Muller transform, which turns two uniformly sampled numbers into realizations of a normally-distributed random variable; the uniformly sampled numbers in turn come from the machine's pseudorandom number generator (PRNG).
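As a concrete illustration of that last point, here is a minimal NumPy sketch of the Box-Muller transform (function name and parameters are mine):

import numpy as np

def box_muller(size, seed=None):
    # Map two uniform samples to realizations of a standard normal variable
    # (ignoring the measure-zero case u1 = 0).
    rng = np.random.default_rng(seed)
    u1 = rng.uniform(size=size)
    u2 = rng.uniform(size=size)
    return np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)

samples = box_muller(10_000)
print(samples.mean(), samples.std())  # close to 0 and 1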
There are three major types of generative models for graphs. First, the probabilistic models from network science such as Erdős-Rényi, Barabasi-Albert, Watts-Strogatz or Kleinberg. These have specific rules and only few probabilistic ingredients, such as draws from a binomial distribution $x_i \sim B(1,p)$ in which $x_i$ decides whether an edge is present or not. Fascinatingly, certain patterns emerge from these models and they define distinguishable families of graphs, although a graph can belong to multiple families with some probability; from a set-theoretical perspective these models define fuzzy sets. Second, statistical models which approximate a given graph with a resulting graph of similar properties, e.g., when comparing its degree distribution, diameter or spectrum. An example is the KronFit algorithm. Third, statistical models which learn a distribution over graphs from a set of exemplary graphs $g\in \{g_1,\dots,g_d\} = D \sim P(G)^d$; these are of high interest since the advancement of geometric deep learning and various message passing techniques. The idea is often that we have a natural process for which we can observe $D = D_{train}\cup D_{test}$ and we want to learn a model $f$ on $D_{train}$ which is able to sample graphs similar to those coming from this natural process. Such models are then evaluated by comparing a generated set of graphs $\hat{D}$ with an unseen test set $D_{test}$; due to the complexity and size of graphs, this metric is often based on property distributions over all graphs or, most recently, on comparisons in graph manifolds.
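For the first family, networkx ships ready-made samplers, so drawing $g \sim P(G)$ from these classic models is a one-liner each; the parameters below are arbitrary examples:

import networkx as nx

n = 100
g_er = nx.gnp_random_graph(n, p=0.05)          # Erdős-Rényi: each edge drawn as B(1, p)
g_ba = nx.barabasi_albert_graph(n, m=2)        # Barabasi-Albert: preferential attachment
g_ws = nx.watts_strogatz_graph(n, k=4, p=0.1)  # Watts-Strogatz: ring rewiring
print(g_er.number_of_edges(), g_ba.number_of_edges(), g_ws.number_of_edges())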
Now, some of the recent deep learning models for generating graphs are based on an auto-regressive representation of a graph, something which was already cast in the GraphRNN paper (see You2018). I later called these construction sequences in the context of DeepGG (see Stier2021), in which we analysed an extension of DGMG (see Li2018) and its inherent structural bias towards scale-freeness.
This finally brings me to my research on what I currently call assembly representations for graphs, which might not only be interesting for sampling a single graph from an unknown or partially described distribution but also in the context of temporal or evolutionary formulations of graphs. The starting point is the idea of generalizing the concept of letting a graph emerge from an initial state through successive application of operations. If we think back to the adjacency list representation, we see a successive application of adding a vertex and all edges for that vertex. The adjacency matrix representation can be viewed as $n$ successive vectors of size $n$, each of which defines whether an edge is present or not. In both cases the order or the vertex labelling matters for the overall representation. The adjacency list representation could even be compacted to take the order of the vertices into account and thereby omit the particular source vertex (however, you then need empty edge lists for unconnected vertices).
Now, given any sequence of operations, we can assemble a graph by following the sequence and applying each operation to the current state of the graph. As each step depends on the previous one, we obviously have an auto-regressive representation; but as we just saw, we can even say that every representation introduced above is in some way auto-regressive.
Why would that be of interest? The operations used for assembling an object which we observe from a natural process cast a bias on how the object fundamentally emerges, and while a deep learning model might learn representations for chemicals or molecules that way, it might inherently have advantages or disadvantages (without valuing the bias). Possibly, if we know the predicates for how a chemical compound is assembled in an operational way, we might have a better representation to learn about this process and might be able to improve discriminative or generative models for these compounds.
Graph Operations
Below, I collected some notation to think about such assembly representations. One interesting part of it is to not represent the assembled objects deterministically but to let the representation itself reflect fuzzy sets. This allows for operations that are much more high-level than very specific operations and do not need too much parameterization in the sequence. Imagine the previously used construction sequences with [ N N E 0 1 N E 0 2 ], which is a deterministic way of representing the graph $\circ~–~\circ~–~\circ$, while a twine such as [ V V E V E ] represents an isomorphic (in fact exactly the same) graph; V and N denote adding a vertex and E adding an edge. To keep track of which kind of example representation we are talking about, we work for now with the symbols N and E for the deterministic (parameterized) versions and with V and E for the more probabilistic versions. Only when it gets larger does a twine start to get fuzzy (depending on the operations); e.g., [ V V E V E E V E ] now represents two possible graphs (see the sketch after the diagrams below):
o--o--o---o ,
o--o--o ,
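To make the deterministic notation concrete, here is a minimal sketch of an interpreter for parameterized construction sequences. It is only an illustration, not the actual pygarn API:

import networkx as nx

def assemble(sequence):
    # Apply a deterministic twine to an initially empty graph:
    # 'N' adds a new vertex, ('E', u, v) adds an edge between existing vertices.
    G = nx.Graph()
    for op in sequence:
        if op == "N":
            G.add_node(G.number_of_nodes())
        else:
            _, u, v = op
            G.add_edge(u, v)
    return G

# The sequence [ N N E 0 1 N E 0 2 ] from the text yields the path o--o--o:
G = assemble(["N", "N", ("E", 0, 1), "N", ("E", 0, 2)])
print(sorted(G.edges()))  # [(0, 1), (0, 2)]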
With this in mind, we now have a fuzzy representation for sets of graphs, and one that can be really short. How short? For each graph we would like to represent, we could have a dedicated representation of its own. Then we would encode all the information in our underlying "alphabet", the set of operations used for the assembly. Of course, this is not of much use, as a language which has a symbol for each word is too complex. Presumably, there is a "good" tradeoff between operation set size and representative quality. If we look at construction sequences, we only have two operations, and we intuitively know that each graph can be represented in this notation. Can we find other operation sets that achieve this property of universality? More generally, for a possibly infinite target set of objects (graphs), can we find a finite set of operations such that the union of all finite assembly sequences is a superset of this target set or even equivalent to it? It would further be desirable to restrict the number of operations. Two operations, adding a vertex and adding an edge, seem to be clearly a minimum, but could we further optimize the operation set such that the sequence lengths become minimal in some sense? Or could we even achieve that certain minimal sequence lengths have only restricted overlaps between each other?
For the above two graphs we could have the deterministic sequences $\{$[ N N E 0 1 N E 0 2 E 1 2 N E 2 3 ], [ N N E 0 1 N E 0 2 E 1 2 N E 1 3 ]$\}$, but we could also represent them with $\{$[ N N N N E 3 2 E 2 1 E 2 0 E 1 0 ], [ N N N N E 2 0 E 2 1 E 1 0 E 3 1 ]$\}$.
$g$ a single graph
$V_g$ finite set of vertices of a graph $g$ with $|g| = |V_g|$ being the number of its vertices
$E_g \subset V_g\times V_g$ finite set of edges for $g$
$G$ set of graphs; $G^{\geq v}$ denotes $\{g ~|~ g\in G \land |g| \geq v\} \subseteq G$
$\mathcal{G}$ the infinite set of all graphs
$o: \mathcal{G}\rightarrow\mathcal{G}$ an operation (mapping) transforming a graph into another graph
$O$ a (finite) set of operations
$[o_1, o_2, o_3]$ "twine": a finite sequence of graph operations
$g[o]$ "wad": the set of graphs resulting from all mappings when applying $o$ on an initial graph $g$; if $g$ is the empty graph we write $\circ [o]$.
$g [o_1, o_2, o_3]$ the successive application of the three operators of a twine on an initial graph $g$: $o_3(o_2(o_1(g)))$
$\mathcal{S}^{O^s}$ $ = \{[a_1, a_2, \dots, a_s] ~|~ a_i \in O, \forall i\in\{1,\dots,s\}\}$ the set of all possible sequences of length $s$ over successive applications of operations from $O$
$\mathcal{S}^{O}$ $= \bigcup_{j\in\mathbb{N}} \mathcal{S}^{O^j}$ the infinite set of all possible any-length sequences of operations $O$
$G[\mathcal{S}^O]$ $= \{ g~t ~|~ t\in\mathcal{S}^O, g\in G\}$ the infinite "wad" of a finite operation set and a set of source graphs. For a single graph we can write $g[\mathcal{S}^O]$
$\mathcal{S}^O(g)$ $= \{ t\in\mathcal{S}^O ~|~ g\in\circ~t \}$ the set of all any-length sequences of operations $O$ which assemble $g$
Thoughts & Questions:
can we divide mappings into ones whose wad size stays stable, grows logarithmically or linearly with respect to the number of vertices or edges, or grows super-linearly?
is there a finite $O$ which is small (!) such that $\circ[\mathcal{S}^O] = \mathcal{G}$? (= generating set)
is there a finite $O$ which is small (!) such that $\circ[\mathcal{S}^O] = G$? (= generating set but not universal)
min twines: given a generating set $O$ and a graph $g$, let the minimal twines be $T^O(g) = \{ t\in\mathcal{S}^O(g) ~|~ \forall t_2\in\mathcal{S}^O(g): |t| \leq |t_2|\}$ - can $T^O(g)$ be finite and small? can $T^O(g)$ be empty? The idea of minimal twines is to have the set of minimal-length assembly descriptions for a graph under the considered operation set.
sharp twines: $ST^O(g) = \{t\in T^O(g) ~|~ |\circ~t| \leq |\circ~t_2| ~~ \forall t_2\in T^O(g) \}$. The idea of sharp (minimal) twines is to have a set of minimal-length assembly descriptions which are also small sets of graphs, to reduce fuzziness or to measure how distinctively one reaches a particular graph.
DeepGG: A deep graph generator
@inproceedings{stier2021deepgg,
  title={DeepGG: A deep graph generator},
  author={Stier, Julian and Granitzer, Michael},
  booktitle={International Symposium on Intelligent Data Analysis},
  year={2021},
  organization={Springer}
}
GraphRNN: Generating realistic graphs with deep auto-regressive models
@inproceedings{you2018graphrnn,
  title={GraphRNN: Generating realistic graphs with deep auto-regressive models},
  author={You, Jiaxuan and Ying, Rex and Ren, Xiang and Hamilton, William and Leskovec, Jure},
  booktitle={International conference on machine learning},
  pages={5708--5717},
  year={2018},
  organization={PMLR}
}
Learning deep generative models of graphs
@article{li2018learning,
  title={Learning deep generative models of graphs},
  author={Li, Yujia and Vinyals, Oriol and Dyer, Chris and Pascanu, Razvan and Battaglia, Peter},
  journal={arXiv preprint arXiv:1803.03324},
  year={2018}
}
My research interests are in the areas of Comparative Development, Long-Run Growth, Evolutionary Economics, Technological Change and Innovation, Economic Growth and Development, Macroeconomics and Monetary Economics.
Profiles: Scholar, SSRN, Ideas, SocArxiv, ORCID, Semantic Scholar
Published or Forthcoming
The Origins of the Division of Labor in Pre-Industrial Times, joint with Emilio Depetris-Chauvin. Journal of Economic Growth, 2020, Vol. 25(3), 297-340. (WP) (Data)
This research explores the historical roots of the division of labor in pre-modern societies. Exploiting a variety of identification strategies and a novel ethnic level dataset combining geocoded ethnographic, linguistic and genetic data, it shows that higher levels of intra-ethnic diversity were conducive to economic specialization in the pre-modern era. The findings are robust to a host of geographical, institutional, cultural and historical confounders, and suggest that variation in intra-ethnic diversity is a key predictor of the division of labor in pre-modern times.
Linguistic Traits and Human Capital Formation, joint with Oded Galor and Assaf Sarid. AEA Papers and Proceedings, 2020. 110: 309-13. (WP)
This research establishes the influence of linguistic traits on human behavior. Exploiting variations in the languages spoken by children of migrants with identical ancestral countries of origin, the analysis indicates that the presence of periphrastic future tense, and its association with long-term orientation has a significant positive impact on educational attainment, whereas the presence of sex-based grammatical gender, and its association with gender bias, has a significant adverse impact on female educational attainment.
Distance to the Pre-industrial Technological Frontier and Economic Development, Journal of Economic Growth, 2018, Vol. 23(2), 175-221. (WP) (Data) (HMI Project) (Ungated Paper) (Ungated Replication Files) (slides),
This research explores the effects of distance to the pre-industrial technological frontiers on comparative economic development in the course of human history. It establishes theoretically and empirically that distance to the frontier had a persistent non-monotonic effect on a country's pre-industrial economic development. In particular, advancing a novel measure of the travel time to the technological frontiers, the analysis establishes a robust persistent U-shaped relation between distance to the frontier and pre-industrial economic development across countries. Moreover, it demonstrates that countries, which throughout the last two millennia were relatively more distant from these frontiers, have higher contemporary levels of innovation and entrepreneurial activity, suggesting that distance from the frontier may have fostered the emergence of a culture conducive to innovation, knowledge creation, and entrepreneurship.
Culture, Diffusion, and Economic Development: The Problem of Observational Equivalence, joint with Ani Harutyunyan. Economics Letters, 2017, Vol. 158(C):94-100.
This research explores the direct and barrier effects of culture on economic development. It shows both theoretically and empirically that whenever the technological frontier is at the top or bottom of the world distribution of a cultural value, there exists an observational equivalence between absolute cultural distances and cultural distances relative to the frontier, preventing the identification of its direct and barrier effects. Since the technological frontier usually has the "right" cultural values for development, it tends to be in the extremes of the distribution of cultural traits, generating observational equivalence and confounding the analysis. These results highlight the difficulty of disentangling the direct and barrier effects of culture. The empirical analysis finds suggestive evidence for direct effects of individualism and conformity with hierarchy, and barrier effects of hedonism.
The Agricultural Origins of Time Preference, joint with Oded Galor. The American Economic Review, 2016, 106(10):3064–3103. (Working Paper) (CSI Project)(Slides)
This research explores the origins of the distribution of time preference across regions. Exploiting a natural experiment associated with the expansion of suitable crops for cultivation in the course of the Columbian Exchange, the research establishes that pre-industrial agro-climatic characteristics that were conducive to higher return to agricultural investment, triggered selection, adaptation and learning processes that have had a persistent positive effect on the prevalence of long-term orientation in the contemporary era. Furthermore, the research establishes that these agro-climatic characteristics have had a culturally-embodied impact on economic behavior such as technological adoption, education, saving, and smoking.
"Can a Nation's Soil Explain Its Economic Fortunes?" The Atlantic
"The Origins of Patience" Zeeconomics
Optimal consumption under uncertainty, liquidity constraints, and bounded rationality, Journal of Economic Dynamics and Control, 2014, Vol. 39: 237-254 (Working paper) (code)
I study how boundedly rational agents can learn a "good" solution to an infinite horizon optimal consumption problem under uncertainty and liquidity constraints. Using an empirically plausible theory of learning I propose a class of adaptive learning algorithms that agents might use to choose a consumption rule. I show that the algorithm always has a globally asymptotically stable consumption rule, which is optimal. Additionally, I present extensions of the model to finite horizon settings, where agents have finite lives and life-cycle income patterns. This provides a simple and parsimonious model of consumption for large agent based models.
Adaptive Consumption Behavior, joint with Peter Howitt. Journal of Economic Dynamics and Control, 2014, Vol. 39: 37-61 (NBER working paper) (appendix)
In this paper we propose and study a theory of adaptive consumption behavior under income uncertainty and liquidity constraints. We assume that consumption is governed by a linear function of wealth, whose coefficients are revised each period by a procedure that places few informational or computational demands on the consumer. We show that under a variety of settings the procedure converges quickly to a set of coefficients with low welfare cost relative to a fully optimal nonlinear consumption function.
Isolation and Development, joint with Oded Galor and Quamrul Ashraf. Journal of the European Economic Association, April/May 2010, Vol. 8, No. 2-3: 401-412. (Data)
This paper exploits cross-country variation in the degree of geographical isolation, prior to the advent of sea-faring and airborne transportation technologies, to examine its impact on the course of economic development across the globe. The empirical investigation establishes that prehistoric geographical isolation has generated a persistent beneficial effect on the process of development and contributed to the contemporary variation in the standard of living across countries.
Optimización y Dinámica, joint with Sergio Monsalve, in Sergio Monsalve (ed.) "Matemáticas básicas para economistas. (Con notas históricas y contextos económicos), No. 3., Editorial Universidad Nacional de Colombia, Bogotá, 2010
Two-sided Matching Models, joint with Marilda Sotomayor, in R.A. Meyers (ed.) "Encyclopedia of Complexity and Systems Science", Springer-Verlag, New York 2008. Reprinted in Computational Complexity, 2012
Comportamiento Asintótico y Selección de Equilibrios en Juegos Evolutivos, Editorial Universidad Externado de Colombia, Bogotá, 2006
Equilibrios Múltiples y Tasa Natural de Interés, Academia Colombiana de Ciencias Económicas, Bogotá, 2005
Recent Working Papers
Millet, Rice, and Isolation: Origins and Persistence of the World's Most Enduring Mega-State, joint with James Kung, Louis Putterman, and Shuang Shi
We propose and empirically test a theory for the endogenous formation and persistence of large states, using China as an example. We suggest that the relative timing of the emergence of agricultural societies and their distance to each other set off a race between autochthonous state-building projects and the expansion of neighboring (proto-)states. Using a novel dataset on the Chinese state's historical presence, the timing of agricultural adoption, social complexity, climate, and geography across $1\times1$ degree grid cells in East Asia, we provide empirical support for this hypothesis. Specifically, we find that on average, cells that adopted agriculture earlier or were close to the earliest archaic state in East Asia (Erlitou) remained longer under Sinitic control. In contrast, earlier adoption of agriculture decreased the persistent control of the Chinese state in cells farther than 2.8 weeks of travel from Erlitou.
Expanding the Measurement of Culture with a Sample of Two Billion Humans, joint with Nick Obradovich, Ignacio Martín, Ignacio Ortuño-Ortín, Edmond Awad, Manuél Cebrián, Rubén Cuevas, Klaus Desmet, Iyad Rahwan and Ángel Cuevas.
Culture has played a pivotal role in human evolution. Yet, the ability of social scientists to study culture is limited by the currently available measurement instruments. Scholars of culture must regularly choose between scalable but sparse survey-based methods or restricted but rich ethnographic methods. Here, we demonstrate that massive online social networks can advance the study of human culture by providing quantitative, scalable, and high-resolution measurement of behaviorally revealed cultural values and preferences. We employ publicly available data across nearly 60,000 topic dimensions drawn from two billion Facebook users across 225 countries and territories. We first validate that cultural distances calculated from this measurement instrument correspond to traditional survey-based and objective measures of cross-national cultural differences. We then demonstrate that this expanded measure enables rich insight into the cultural landscape globally at previously impossible resolution. We analyze the importance of national borders in shaping culture, explore unique cultural markers that identify subnational population groups, and compare subnational divisiveness to gender divisiveness across countries. The global collection of massive data on human behavior provides a high-dimensional complement to traditional cultural metrics. Further, the granularity of the measure presents enormous promise to advance scholars' understanding of additional fundamental questions in the social sciences. The measure enables detailed investigation into the geopolitical stability of countries, social cleavages within both small and large-scale human groups, the integration of migrant populations, and the disaffection of certain population groups from the political process, among myriad other potential future applications.
Borderline Disorder: (De facto) Historical Ethnic Borders and Contemporary Conflict in Africa, joint with Emilio Depetris-Chauvin
We explore the effect of historical ethnic borders on contemporary non-civil conflict in Africa. Exploiting variations across artificial regions (i.e., grids of $50\times50$km) within an ethnicity's historical homeland, we document that both the intensive and extensive margins of contemporary conflict are concentrated close to historical ethnic borders. Following a theory-based instrumental variable approach, which generates a plausibly exogenous ethno-spatial partition of Africa, we find that grid cells with historical ethnic borders have 27 percentage points higher probability of conflict and 7.9 percentage points higher probability of being the initial location of a conflict. We uncover several key underlying mechanisms: competition for agricultural land, population pressure, cultural similarity and weak property rights.
Geographical Roots of the Coevolution of Cultural and Linguistic Traits, joint with Oded Galor and Assaf Sarid
This research explores the geographical origins of the coevolution of cultural and linguistic traits in the course of human history, relating the geographical roots of long-term orientation to the structure of the future tense, the agricultural determinants of gender bias to the presence of sex-based grammatical gender, and the ecological origins of hierarchical orientation to the existence of politeness distinctions. The study advances the hypothesis and establishes empirically that: (i) geographical characteristics that were conducive to higher natural return to agricultural investment contributed to the existing cross-language variations in the structure of the future tense, (ii) the agricultural determinants of gender gap in agricultural productivity fostered the existence of sex-based grammatical gender, and (iii) the ecological origins of hierarchical societies triggered the emergence of politeness distinctions.
"There's a nifty economic link between farming and grammar." Wall Street Journal
Geographical Origins of Language Structures, joint with Oded Galor and Assaf Sarid
This research explores the geographical origins of the coevolution of cultural and linguistic traits in the course of human history, relating the geographical roots of long-term orientation to the structure of the future tense, the agricultural determinants of gender bias to the presence of sex-based grammatical gender, and the ecological origins of hierarchical orientation to the existence of politeness distinctions. The study advances the hypothesis and establishes empirically that: (i) variations in geographical characteristics that were conducive to higher natural return to agricultural investment contributed to the existing cross-language variations in the structure of the future tense, (ii) the agricultural determinants of gender gap in agricultural productivity fostered the existence of sex-based grammatical gender, and (iii) the ecological origins of hierarchical societies triggered the emergence of politeness distinctions.
Geographical Origins and Economic Consequences of Language Structures, joint with Oded Galor and Assaf Sarid
This research explores the economic causes and consequences of language structures. It advances the hypothesis and establishes empirically that variations in pre-industrial geographical characteristics that were conducive to higher returns to agricultural investment, gender gaps in agricultural productivity, and the emergence of hierarchical societies, are at the root of existing cross-language variations in the structure of the future tense and the presence of grammatical gender and politeness distinctions. Moreover, the research suggests that while language structures have largely reflected past human experience and ancestral cultural traits, they have independently affected human behavior and economic outcomes.
The Origins and Long-Run Consequences of the Division of Labor, joint with Emilio Depetris-Chauvin
This research explores the historical roots and persistent effects of the division of labor in pre-modern societies. Exploiting a novel ethnic-level dataset, which combines geocoded ethnographic, linguistic and genetic data, it advances the hypothesis and establishes empirically that population diversity had a positive effect on the division of labor, which translated into persistent differences in economic development. Specifically, it establishes that pre-modern economic specialization was conducive to pre-modern statehood, urbanization and social hierarchy. Moreover, it demonstrates that higher levels of pre-modern economic specialization are associated with greater skill-biased occupational heterogeneity, economic complexity and economic development in the contemporary era.
Population Diversity, Division of Labor and the Emergence of Trade and State, joint with Emilio Depetris-Chauvin
This research explores the emergence and prevalence of economic specialization and trade in pre-modern societies. It advances the hypothesis, and establishes empirically that population diversity had a positive causal effect on economic specialization and trade. Based on a novel ethnic level dataset combining geocoded ethnographic and genetic data, this research exploits the exogenous variation in population diversity generated by the "Out-of-Africa" migration of anatomically modern humans to causally establish the positive effect of population diversity on economic specialization and the emergence of trade-related institutions, which, in turn, facilitated the historical formation of states. Additionally, it provides suggestive evidence that regions historically inhabited by pre-modern societies with high levels of economic specialization have a larger occupational heterogeneity and are more developed today.
Land Productivity and Economic Development: Caloric Suitability vs. Agricultural Suitability, joint with Oded Galor
This paper establishes that the Caloric Suitability Index (CSI) dominates the commonly used measure of agricultural suitability in the examination of the effect of land productivity on comparative economic development. The analysis demonstrates that the agricultural suitability index does not capture the large variation in the potential caloric yield across equally suitable land, reflecting the fact that land suitable for agriculture is not necessarily suitable for the most caloric-intensive crops. Hence, in light of the instrumental role played by caloric yield in sustaining and supporting population growth, and given the importance of pre-industrial population density for the subsequent course of economic development, the Caloric Suitability Index dominates the conventional measure in capturing the effect of land productivity on pre-colonial population density and the subsequent course of economic development.
Culture, Diffusion, and Economic Development, joint with Ani Harutyunyan
This research explores the effects of culture on technological diffusion and economic development. It shows that culture's direct effects on development and barrier effects to technological diffusion are, in general, observationally equivalent. In particular, using a large set of measures of cultural values, it establishes empirically that pairwise differences in contemporary development are associated with pairwise cultural differences relative to the technological frontier, only in cases where observational equivalence holds. Additionally, it establishes that differences in cultural traits that are correlated with genetic and linguistic distances are statistically and economically significantly correlated with differences in economic development. These results highlight the difficulty of disentangling the direct and barrier effects of culture, while lending credence to the idea that common ancestry generates persistence and plays a central role in economic development.
The Voyage of Homo-œconomicus: some economic measures of distance
Leviathan or The Matter, Form and Power of Politeness, joint with Oded Galor and Assaf Sarid
The Neolithic and Life Expectancy: A Double-Edged Sword, joint with Oded Galor and Raphaël Franck
State formation, joint with Emilio Depetris-Chauvin
Culture, Genetics, and Development, joint with Emilio Depetris-Chauvin and Ani Harutyunyan
The Economic Origins and Consequences of Hierarchy and Power Distance, joint with Oded Galor
The Neolithic Revolution and Comparative Development, joint with Oded Galor
Geography, Trade-specific Human Capital and Comparative Development, joint with Stelios Michalopoulos
Frontiers of Development, joint with Stelios Michalopoulos and Martin Fiszbein
Reinventing the Wheel: Duplication of Effort, Competition, and Scale Effects in a Schumpeterian Growth Model
The Sugar and Spice Theory of Economic Growth
Durable Goods and the Dynamics of Innovation and Adoption of Technology
Vested Interests, Media Capture and Resistance to Technology Adoption, joint with Ruben Durante
A Tale of a Thousand Cities, joint with Ruben Durante and David Weil | CommonCrawl |
npj 2D Materials and Applications
Article | Open | Published: 08 May 2019
Pressure dependence of direct optical transitions in ReS2 and ReSe2
Robert Oliva (ORCID: orcid.org/0000-0002-9378-4048), Magdalena Laurien, Filip Dybala, Jan Kopaczek, Ying Qin, Sefaattin Tongay, Oleg Rubel (ORCID: orcid.org/0000-0001-5104-5602) & Robert Kudrawiec
npj 2D Materials and Applications, volume 3, Article number: 20 (2019)
Electronic properties and materials
Spintronics
Superconducting properties and materials
Two-dimensional materials
The ReX2 system (X = S, Se) exhibits unique properties that differ from other transition metal dichalcogenides. Remarkably, its reduced crystal symmetry results in a complex electronic band structure that endows this material with in-plane anisotropic properties. In addition, multilayered ReX2 presents a strong 2D character even in its bulk form. To fully understand the interlayer interaction in this system, it is necessary to obtain an accurate picture of the electronic band structure. Here, we present an experimental and theoretical study of the electronic band structure of ReS2 and ReSe2 at high hydrostatic pressures. The experiments are performed by photoreflectance spectroscopy and are analyzed in terms of ab initio calculations within the density functional theory. Experimental pressure coefficients for the two most dominant excitonic transitions are obtained and compared with those predicted by the calculations. We assign the transitions to the Z k-point of the Brillouin zone and other k-points located away from high-symmetry points. The origin of the pressure coefficients of the measured direct transitions is discussed in terms of orbital analysis of the electronic structure and van der Waals interlayer interaction. The anisotropic optical properties are studied at high pressure by means of polarization-resolved photoreflectance measurements.
The ReX2 crystals (X = S, Se) are semiconductors from the family of two-dimensional layered transition metal dichalcogenides (TMDCs) that exhibit special properties. Rhenium-based TMDCs have received increasing interest during the last few years owing to their large in-plane anisotropic properties. These properties result from their particular band structure and reduced crystal symmetry, as well as a strong 2D character that has been attributed to weak van der Waals interlayer bonding even in their bulk form.1,2 Besides the large fundamental interest, ReX2 has also shown to be a highly interesting technological material for many potential applications, including photodetectors,3,4,5,6,7,8 solar cells,9 photonics,10 flexible electronics,11 and field-effect transistors.12,13,14,15,16 Remarkably, the small interlayer coupling of ReX2 opens an exciting field of new possibilities, as it may allow the design of bulk devices that retain 2D functionalities only present in single-layered materials.17 To fully exploit the applications of ReX2 for developing novel optoelectronic devices, it is crucial to further characterize its fundamental properties.
Optical modulation spectroscopy is a very powerful method to study the optical properties of semiconductors. Owing to its differential-like character, interband-related features are highly enhanced and background signal is suppressed, thus allowing accurate measurement of direct optical transitions.18 So far, different modulation spectroscopies have shown to be very useful for studying the optical transitions of ReX2: piezoreflectance,19 electrolyte electroreflectance,20 thermoreflectance,21 and polarization-dependent measurements22,23,24 revealed two and three excitonic transitions for ReS2 and ReSe2, respectively. These works provided evidence that these excitons, which exhibit a strongly polarized dipole character, were confined within single layers.
However, the extent to which ReX2 behaves as stacked decoupled layers has recently been a topic of intense debate.2,22,23,25,26,27 On the one hand, direct photoreflectance (PR) measurements on the electronic dispersion found that ReX2 indeed exhibits a significant degree of electronic coupling.22 This result is also supported by angle-resolved photoemission experiments (ARPES), which showed that there exists a significant electronic dispersion along the van der Waals gap.25,26 Also, recent calculations show that the fundamental bandgap shrinks by 32.7% in ReX2 from monolayer to bulk, and the interlayer binding energy is similar to other TMDCs such as MoS2.28
On the other hand, optical, vibrational, and structural measurements indicate that ReX2 exhibits a strong 2D character. For instance, photoluminescence experiments revealed that the emission energy of ReS2 is almost independent to the number of layers (∆E ≈ −50 meV from one monolayer to bulk) in contrast with other G6-TMDCs (e.g., ∆E ≈ −600 meV for MoS2).2 For the case of ReSe2, it was shown that it retains a direct bandgap regardless of its crystal thickness, with excitons strongly confined within single layers for bulk crystals, indicating a weak interlayer interaction.23 Moreover, the Raman spectrum of monolayer ReS2 is almost identical to that of bulk, which evidences an ultraweak interlayer coupling.27 Also, low-frequency Raman measurements showed that interlayer force constant in ReX2 is significantly smaller than other G6-TMDCs (by a factor of ≈40%).29 One of the most direct ways to probe interlayer interaction is to modulate the interlayer distance from high-pressure (HP) measurements. In this regard, HP X-ray diffraction measurements show that the bulk modulus of ReX2 (23–31 GPa)30,31 is significantly lower than group 6 TMDCs (57–72 GPa).32,33,34,35 HP Raman measurements on ReS2 showed a twofold decreased pressure coefficient of the out-of-plane A1g phonon mode with respect to other TMDCs,2 reinforcing the decoupled behavior in bulk ReS2.2 Also, the large pressure metallization of ReS2 (70 GPa) in comparison with MoS2 (19 GPa) has been attributed to the larger interlayer coupling in MoS2.36 In spite of the fundamental properties of this crystal system being relatively well-known at ambient pressure, HP optical measurements are highly desirable to evaluate the degree of electronic interlayer coupling in ReX2.
HP optical measurements are widely employed to obtain detailed structural and band structure information of semiconductors.37 Moreover, HP optical measurements provide a highly useful benchmark to test first-principles calculations (such as those based on density functional theory) on challenging systems such as TMDCs. For the case of ReX2, which exhibits weak interlayer forces at ambient pressure, HP optical measurements would shed new light into the role of orbital composition and van der Waals bonding on the excitonic energies and their pressure dependence. To date, the amount of HP optical studies on ReX2 is scarce. The pressure dependence of the bandgap has only been experimentally investigated for ReS2 by means of photoluminescence and absorption.2,36 These works found that the bandgap of ReS2 does not increase with pressure and an almost-direct-to-indirect bandgap transition takes place around 27 kbar. At higher pressures, calculations suggest that ReS2 exhibits a metallization and superconducting state.38
Despite the previous investigations, there are still many questions that remain to be addressed with regard to the HP optical properties of ReX2. First, an experimental assignment of the different excitonic transitions around the bandgap is desirable. So far, piezoreflectance measurements on the ReSe2−xSx alloy suggested that the nature of the direct band edges is similar for each compositional end member,39 but electronic dispersion calculations together with ARPES measurements suggested that the first direct electronic transitions take place either at the Z high-symmetry point of the Brillouin zone (BZ) or away from the zone center, far from any particular high-symmetry direction.25,26,28,40,41 Second, while the orbital composition of the states of ReS2 has been described for different numbers of layers,28 the interplay of orbital composition on the pressure dependence on the electronic band structure has not yet been investigated. Finally, the anisotropic properties of ReX2 at high pressure remain to be explored.
To address these questions, we conduct PR measurements at high-hydrostatic pressure on thin ReS2 and ReSe2 exfoliated flakes. Polarization-dependent measurements performed at different pressures are used to energetically resolve the different excitonic transitions that exhibit very similar energies. Our results show that the two main direct transitions for ReS2 and ReSe2 exhibit a negative-pressure coefficient, in contrast to other TMDCs, such as MoS2, MoSe2, WS2, or WSe2.42 Such findings provide valuable information to assess the degree of electronic interlayer coupling and the role of orbital composition on the energies of the band edge states. We discuss the experimental results in light of ab initio band structure calculations. These calculations are performed using different functionals and considering different hydrostatic pressures. We find good agreement between the experimental and calculated pressure coefficients for the two main transitions. The experimentally observed transitions are assigned by inspecting the calculated electronic dispersion curves along a large grid of k-points in the whole 3D-BZ. Finally, we discuss the negative sign of the measured pressure coefficients in terms of orbital contributions to the states of the valence and conduction band of each transition and van der Waals interaction.
We conducted PR measurements in order to determine the pressure dependence of the first two direct optical transitions in ReX2. To ensure the reproducibility of the experimental results, samples obtained from different sources and grown under different conditions were used for the experiments. The PR spectra obtained for ReS2 and ReSe2 at different pressure values are shown in Fig. 1. Two main features can be observed for all samples, which correspond to the direct excitonic transitions A and B. These excitonic transitions have been previously reported at ambient pressure from modulated spectroscopies for ReS219,21 and ReSe2.43
Photoreflectance spectra obtained at different pressures for ReS2 (sample I and II) and ReSe2 (sample III and IV). Straight lines around the fitted transition energies are shown as a guide to the eye for transitions A and B. Both features decrease in energy with increasing pressure for all studied samples. Fittings are shown as dotted gray curves
The stronger transition (i.e., B for ReS2 and A for ReSe2) is clearly visible at all pressures. The weaker transition merges with it at high (low) pressure for ReS2 (ReSe2) as a consequence of the different pressure coefficients of the A and B transitions. It is worth noting that weaker, energetically close transitions have been previously reported from low-temperature and polarization measurements for ReS220 and ReSe2.23,24 We are able to resolve these transitions from polarization-dependent measurements at different pressures (shown in the Supplementary Information, S.I.). Our polarization measurements allow us to conclude that the relative amplitude and angular dependence of each transition are preserved over the studied pressure range (i.e., up to 20 kbar). Hence, the sample orientation and structural stability are maintained throughout the studied pressure range.
The energy of each transition was obtained from the PR spectra by fitting the Aspnes formula,44 given by
$$\frac{\Delta R}{R}\left( E \right) = \mathrm{Re}\left[ \sum_{j = 1}^{n} C_{j}\, e^{i\theta_{j}} \left( E - E_{j} + i\Gamma_{j} \right)^{-m} \right],$$
where n is the number of transitions, Cj and θj are the amplitude and phase of the j-th resonance, and Ej and Γj are its transition energy and broadening parameter, respectively. For excitonic transitions we take m = 2. Two transitions are enough to successfully reproduce all the spectra shown in Fig. 1 (dotted curves). Note that the differences in line shape between different samples of the same compound are accounted for by different phase values of the resonance, set by different built-in electric fields and differences in chopper settings. However, the fitted energy values of the transitions are not affected by these differences. For the fitting procedure, all parameters were left free for the spectrum obtained at ambient pressure, while only the amplitude and the energy of each transition were left as free parameters for the spectra at higher pressures, since these are expected to change with pressure. The pressure dependence of the energy of each transition is plotted in Fig. 2 for both samples.
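As an illustration of this fitting procedure, the short Python sketch below fits two Aspnes resonances to a synthetic spectrum with scipy; the parameter values and noise level are hypothetical, chosen only to mimic the situation described above:

```python
# Sketch: fitting a PR spectrum with two Aspnes third-derivative resonances.
# All parameter values below are illustrative, not those of the study.
import numpy as np
from scipy.optimize import curve_fit

def aspnes(E, C1, th1, E1, G1, C2, th2, E2, G2, m=2):
    """Re[ sum_j Cj e^{i th_j} (E - Ej + i Gj)^{-m} ] for two resonances."""
    out = np.zeros_like(E, dtype=complex)
    for C, th, Ej, G in ((C1, th1, E1, G1), (C2, th2, E2, G2)):
        out += C * np.exp(1j * th) * (E - Ej + 1j * G) ** (-m)
    return out.real

E = np.linspace(1.35, 1.70, 400)  # photon energy (eV)
truth = aspnes(E, 1e-4, 0.5, 1.50, 0.020, 6e-5, 1.2, 1.56, 0.022)
dRR = truth + np.random.default_rng(1).normal(0, 2e-6, E.size)  # synthetic data

p0 = [1e-4, 0.0, 1.50, 0.02, 5e-5, 0.0, 1.56, 0.02]  # initial guesses
popt, _ = curve_fit(aspnes, E, dRR, p0=p0)
print("fitted transition energies (eV):", popt[2], popt[6])
```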
The energy of the fitted transitions in the photoreflectance experiments is plotted as a function of pressure for ReS2 (top) and ReSe2 (bottom). The fitted energies of transitions A and B are shown in red and blue, respectively. Linear fits have been performed for both transitions and the fitted values are included in the figure
As can be seen in Fig. 2, the energy of transition A (red symbols) decreases with increasing pressure at a different rate than the energy of transition B (blue symbols). Note that the fitted energies of transition B are somewhat scattered between different samples of the same material. This can be attributed to two factors: i) sample misorientations and ii) uncertainties in the fitting procedure that naturally arise for weak PR features energetically close (≈60 meV) to a strong PR transition with a relatively large broadening parameter (typically ≈20 meV). Despite the uncertainties in the fitted energies of transition B, the pressure coefficient was consistent between different samples of the same material. For ReS2, the fitted pressure coefficient of the B transition, −4.2 meV/kbar, is much larger in magnitude than that of the A transition, −2.3 meV/kbar. The latter value is in agreement with previous HP photoluminescence (PL) measurements, which yielded a pressure coefficient of −2.0 meV/kbar.36 In contrast, for ReSe2, the pressure coefficient of the A transition, −3.5 meV/kbar, is more pronounced than that of the B transition, −1.3 meV/kbar. The latter result is qualitatively in agreement with the reported absorption measurements, which show a redshift of the absorption edge with increasing pressure.2 The fact that the pressure coefficient of the A transition is larger in magnitude than that of the B transition for ReSe2 but smaller for ReS2 indicates that the origin of the transitions differs between the two materials, as discussed in the next section.
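The pressure coefficients quoted above follow from simple linear fits of the fitted transition energies versus pressure; a minimal sketch (with hypothetical data points constructed to give roughly the ReS2 transition-A slope) is:

```python
# Sketch: extracting a pressure coefficient (meV/kbar) by a linear fit of
# transition energy vs pressure. The data points below are hypothetical.
import numpy as np

pressure = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])            # kbar
E_A = np.array([1500.0, 1491.0, 1482.0, 1472.0, 1463.0, 1454.0])  # meV

slope, intercept = np.polyfit(pressure, E_A, 1)
print(f"pressure coefficient: {slope:.1f} meV/kbar")  # about -2.3
```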
First-principles calculations were carried out in order to assign the experimentally observed transitions and to provide further insight into the electronic and optical properties of ReX2. The electronic band structure and the optical matrix element were calculated for a k-path sampling the first BZ in all three dimensions. Figure 3 shows the electronic dispersion curves at 0 kbar (black curves) and 20 kbar (red curves), as obtained from density functional theory (DFT) calculations using the strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) functional.45 The normalized values of the optical matrix element as well as the bandgap value along the k-path are also shown. The first and second direct transitions for ReS2 (ReSe2) are at the Z (J1) and K1 (Z) points, respectively. Note that the matrix element maxima coincide with the bandgap minima, which indicates that these transitions are optically active. The calculated quasi-direct gaps for ReS2 and ReSe2 are 1.2 and 1.15 eV, respectively. These values are smaller than the measured optical gaps, around 1.5 and 1.31 eV, respectively. The discrepancies between calculations and experiments are accounted for by the systematic bandgap underestimation of the meta-GGA (SCAN) functional used, which accounts for neither quasiparticle nor excitonic effects.46 To better reproduce the experimental values, we performed calculations at a higher level of theory using the hybrid functional HSE06.47 At this level of theory, our calculations predict the bandgap energies more accurately, around 1.42 and 1.43 eV (including the excitonic binding energy) for ReS2 and ReSe2, respectively. More detail on the HSE06 calculations can be found in the S.I.
Electronic dispersion curves for ReS2 (left) and ReSe2 (right) as calculated using the SCAN functional at zero pressure (black curves) and 20 kbar (red curves). The corresponding matrix elements have been calculated for each k-point; the stronger transitions are located around the Z and K1 points for ReS2 and around the Z and J1 points for ReSe2. In the lower panels, the direct bandgap energy is plotted along the studied wave vectors
Owing to the complex band structure of ReX2, which exhibits direct and indirect bandgaps that are very close in energy and in position in k-space,25,28 some computational considerations should be taken into account. For instance, a number of theoretical works predict the valence band maximum (VBM) of bulk ReS2 to be either at Γ2,36,48 or at Z.49 This may result from choosing only a few high-symmetry paths for the calculation of the band structure, or from choosing functionals that fail to capture the details of the complex electronic structure of ReX2. Hence, in order to accurately describe the band structure of complex materials like ReX2, it is important to consider the whole 3D-BZ and to devote careful attention to the choice of the functional. Recent contributions show that the choice of the functional influences the number and location of the VBM and conduction band minimum (CBM) in ReX2.25,41,49 The meta-GGA SCAN functional employed here reproduces the experimental results well and at low computational cost.
The location of the fundamental direct gap of ReS2, predicted by our calculations to be at Z, is in agreement with recent direct measurements of the band dispersion using ARPES25,40,41 and with a recent theoretical study employing quasiparticle approximations.28 For the case of ReSe2, our high-density k-mesh calculations predict an indirect fundamental bandgap of 1.10 eV, with both the VBM and the CBM located away from high-symmetry points (named J2 and J3, respectively; coordinates shown in Table S1 of the S.I.). This is in agreement with recent studies, which also found an indirect bandgap with the VBM close to the J2 point.26,49 Several other studies that take only high-symmetry k-paths into account for evaluating the band structure predict either direct or indirect bandgaps for ReSe2 near the Z or Γ point.23,28,38,50,51 It has also been suggested that the indirect and direct bandgaps are close in energy, and the discussion about the nature of the fundamental bandgap of ReSe2 is still ongoing.49,50,51,52 With increasing pressure, an overall narrowing of the bandgap takes place along the whole BZ, as previously evidenced in theoretical studies.38,53 This trend can be seen in the lowest panel of Fig. 3 and results in an enhancement of the indirect nature of the fundamental gap of both ReS2 and ReSe2 at HP.
To assign the A and B transitions, we compare the experimental and calculated pressure dependence of the first two direct transitions. This is shown in Fig. 4, where the pressure dependence of the energy variation is plotted for both the calculated bandgaps (crosses) and the measured excitonic transitions (full symbols). The figure plots the variation of energy rather than absolute values, which allows us to directly compare the theoretical calculations with the experimental results while neglecting energy differences arising from the DFT bandgap underestimation and the excitonic binding energy. As can be seen in the figure, our calculations predict distinct pressure coefficients for each transition, as also observed experimentally. The calculated pressure coefficients of ReS2 (−1.0 and −3.4 meV/kbar) and ReSe2 (−4.2 and −0.7 meV/kbar) differ slightly from the measured values. We attribute the differences to the effect of structural distortion at high pressure on the position of the maximum of the matrix elements in reciprocal space. Taking this into account, the calculated and experimental pressure coefficients agree within the experimental and computational errors (which are lower than ±0.3 and ±1.2 meV/kbar, respectively). Most importantly, the qualitative trend is reproduced by our calculations, namely negative pressure coefficients of distinct magnitude for the two transitions. After comparing the pressure coefficients (see Fig. 4), transitions A (red symbols) and B (blue symbols) are unambiguously assigned to the Z (J1) and K1 (Z) k-points for ReS2 (ReSe2), respectively. The pressure coefficients calculated with the SCAN functional are reproduced by HSE06 calculations (differences in the pressure coefficients are below ±0.5 meV/kbar, as shown in Tables S3 and S4 of the S.I.), which further supports this assignment.
Increment of exciton energy versus pressure plotted for the transitions A (red color) and B (blue color) from measurements on ReS2 (top) and ReSe2 (bottom), as well as the calculated values of the transport bandgap using the SCAN functional (crosses). The straight lines are linear fits to the experimental values. Pressure coefficients of each transition are included
The present assignment of transition A to the J1 k-point for ReSe2 contrasts with previous assumptions that all the excitonic transitions take place around the Z point of the BZ.23 This result should be taken into account in future work on the compositional dependence of the bandgap of the ReSe2−xSx alloy, since J1 lies away from both Z and K1 (coordinates are shown in Table S1 of the S.I.). Previous absorption54 and piezoreflectance39 measurements along the entire composition range found evidence that the nature of the bandgaps is similar for the ReSe2−xSx compositional end members. However, while we found that both direct excitonic transitions are similar in energy (the transition energy at J1 is only ≈40 meV below that at Z), they originate from different k-points in the two compositional end members. Hence, the compositional dependence of the lowest direct transition (i.e., transition A) is expected to exhibit a crossover from J1 for ReSe2 to Z for ReS2.
Owing to its different crystallographic structure, ReX2 exhibits optoelectronic properties that are drastically different from those of group 6 TMDCs. Remarkably, the pressure coefficient of the first direct optical transition is negative, in contrast to other TMDCs. Figure 5 shows the pressure coefficient of the first direct optical transition for MX2 TMDCs (M = Mo, W, and Re and X = S and Se), as measured by HP PR spectroscopy elsewhere,42 together with the present experimental results for ReX2. While MoX2 and WX2 exhibit positive pressure coefficients, ReX2 exhibits negative ones. As a general trend, a closing of the bandgap with increasing pressure (i.e., a negative pressure coefficient) is expected for all TMDCs, since all TMDCs metallize at HP (metallization takes place around 350 kbar for ReX2). Still, while their indirect bandgaps decrease with pressure, all group 6 TMDCs exhibit a positive pressure coefficient of the direct gaps.53,55 This striking difference is accounted for by the particular crystallographic structure of ReX2 and the particular electronic configuration of Re: with respect to group 6 transition metals, rhenium compounds possess one more valence electron, and the valence and conduction band states carry a strong Re-d orbital character. To investigate the physical origin of the negative direct pressure coefficient, and its connection with the reduced van der Waals interaction in ReX2, we performed an orbital analysis of the states associated with the A and B transitions.
Histogram showing the pressure coefficient of the first direct optical transitions of MoX2 and WX2 published elsewhere42 and ReX2 (X = S and Se), as obtained from high-pressure photoreflectance measurements
The orbital composition of the states of the A and B direct transitions is shown in Table 1 for ReS2 and ReSe2. The CBM and VBM of ReS2 are dominated by Re-\(d_{z^2}\) orbitals, in agreement with recent calculations,25 while for ReSe2, the orbital contributions are more diverse. In the table, the Re-\(d_{z^2}\) and X-pz orbital contributions, which contribute significantly to the band edge states25 and have out-of-plane character,40,52 are highlighted, since they are expected to be highly sensitive to the interlayer interaction. The z-axis denotes the out-of-plane direction, so that z orbitals are located at least partially within the van der Waals gap. States with large contributions from Re-\(d_{z^2}\) and X-pz orbitals destabilize, and therefore rise in energy, with increasing pressure. Such destabilization has been attributed to Coulomb repulsion of antibonding p orbitals between interlayer chalcogen atoms in MoS2.56,57,58 Similarly, \(d_{z^2}\) orbitals are fairly delocalized and directed perpendicular to the layers. The role of the orbital contributions in the dependence of the bandgap on the interlayer distance is well studied for other TMDCs and is the state-of-the-art explanation for the direct-to-indirect bandgap crossover of MoS2 at its transition from monolayer to bulk.56,59 Qualitatively, increasing pressure has a similar effect on the electronic structure as increasing the number of layers, i.e., pressure results in a stronger interaction of electrons across the van der Waals gap and a reduction of the interlayer distance. In fact, the pressure and strain dependence of the bandgap of MoX2 has been explained in terms of orbital contributions to the bandgap states.60,61 Hence, larger contributions of the Re-\(d_{z^2}\) and X-pz orbitals to the VBM with respect to the CBM result in a narrowing of the bandgap with increasing pressure, which is the case for ReX2 as discussed in detail below.
Table 1 Calculated orbital composition of the important extrema (transitions A and B) of the electronic band structure of ReS2 and ReSe2
The orbital interplay in the bandgap reduction of ReX2 with increasing pressure/strain has been hinted at previously,11,36 but never evaluated through orbital analysis. For the case of ReS2, the highest contribution to the analyzed states arises from Re-\(d_{z^2}\). In Table 1, it can be seen that the pz orbital contribution is larger in the VBM than in the CBM. Hence, at higher pressures, the VBM experiences a stronger destabilization than the CBM, resulting in a narrowing of the bandgap, as observed experimentally. Furthermore, the transition at K1 (i.e., transition B) exhibits a significantly higher contribution from the S-pz states, which accounts for its more negative pressure coefficient with respect to the transition at Z (i.e., transition A), as observed experimentally (see Fig. 4). Similarly, for the case of ReSe2, the contributions of the Se-pz and Re-\(d_{z^2}\) orbitals to the VBM are large, which implies a large redshift of the transition at J1 (i.e., transition A) with increasing pressure, in agreement with the experimentally observed large negative pressure coefficient of transition A (see Fig. 4). In contrast, the transition at Z (i.e., transition B) shows only moderate contributions of Se-pz to the VBM. To account for the negative pressure coefficient of transition B, we suggest that the Re-dyz orbitals might play a significant role. In conclusion, the bandgap narrowing of the transitions A and B of ReX2 with increasing pressure is mainly accounted for by the larger contribution of X-pz orbitals to the VBM.
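The argument of the preceding two paragraphs can be condensed into a toy model (our simplification, not a calculation from this work): if each band edge shifts upward under pressure in proportion to its out-of-plane orbital weight, then a larger weight in the VBM than in the CBM yields a negative bandgap pressure coefficient.

```python
# Toy model (an assumption for illustration): each band edge rises with
# pressure in proportion to its out-of-plane (d_z2 + p_z) orbital weight.
k = 10.0                 # meV/kbar per unit out-of-plane weight (hypothetical)
w_vbm, w_cbm = 0.7, 0.4  # illustrative weights from a Table-1-like analysis
dEg_dP = k * w_cbm - k * w_vbm  # dEg/dP = dE_CBM/dP - dE_VBM/dP
print(f"dEg/dP = {dEg_dP:+.1f} meV/kbar")  # negative: the gap narrows
```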
So far, it has been shown that the negative pressure coefficients observed for the direct transitions in ReX2 can be qualitatively explained from orbital theory. However, the value of the pressure coefficient could also be influenced by the reduced van der Waals interactions present in ReX2 with respect to other TMDCs. To elucidate whether ReX2 exhibits a decreased van der Waals interaction with respect to MoS2, we compare the effect of the orbital interplay on the pressure coefficient in the two compounds. For MoS2, the negative pressure coefficient of the indirect bandgap has been predicted to lie in the range −3.79 to −7.9 meV/kbar.42 Such a strongly negative pressure coefficient is a consequence of a strong blueshift of the VBM at Γ, where the orbital contributions from \(d_{z^2}\) and pz orbitals are strong, i.e., 60% Mo-\(d_{z^2}\) + 30% S-pz.56 However, the contribution of the \(d_{z^2}\) and pz orbitals to the CBM (at K) is of the same order of magnitude, i.e., 86% Mo-\(d_{z^2}\) + 9% S-pxy + 5% S-pz. Since the direct pressure coefficients of ReS2 and ReSe2 (i.e., −4.2 meV/kbar and −3.5 meV/kbar) are similar to the indirect pressure coefficient of MoS2, we conclude that the decreased van der Waals interactions in ReX2 (as evidenced by HP XRD30,31 and low-frequency Raman measurements29) do not play a significant role in its pressure coefficient.
To summarize, we performed HP PR measurements on ReS2 and ReSe2 samples obtained from different sources and grown under different conditions. Our results reveal that the two main excitonic transitions of each material decrease in energy with increasing pressure. For ReS2, the obtained pressure coefficients of the A and B transitions are −2.3 and −4.2 meV/kbar, respectively, and for ReSe2, −3.5 and −1.3 meV/kbar, respectively. Polarization-resolved measurements allowed us to measure a third transition for ReS2, as well as to determine the crystal orientation and assess the structural stability of ReX2 up to 20 kbar.
The electronic band structure of ReS2 and ReSe2 was obtained from ab initio calculations within density functional theory, using the meta-GGA SCAN functional. We probed the whole BZ in order to explore all possible direct transitions around the bandgap. The calculations were performed at different pressure values, which allowed the comparison of the experimental and theoretical results and the assignment of each transition. For ReS2, the transitions A and B were assigned to the Z and K1 k-points of the BZ, whereas for ReSe2, the A and B transitions were assigned to the J1 and Z points, respectively (with both K1 and J1 located away from the high-symmetry k-points). The negative pressure coefficients measured in ReX2 were explained in terms of an orbital analysis, which allowed us to conclude that the destabilization of the pz orbital with increasing pressure is mostly responsible for the measured pressure coefficients. This work shows that ReX2 does not exhibit strong electronic interlayer decoupling, and hence the optoelectronic properties of few-layer ReX2 could differ drastically from those of the bulk.
Two samples of different origin were used for each material, ReS2 and ReSe2. One sample of each material was obtained commercially from HQgraphene and consisted of thin flakes mechanically exfoliated from synthetic bulk crystals (99.995% purity); these are labeled here as samples I and III for ReS2 and ReSe2, respectively. The other ReS2 (sample II) and ReSe2 (sample IV) samples were synthesized by the chemical vapor transport growth technique using Re (99.9999% purity) and S or Se (99.9999% purity) pieces. These precursors were mixed at atomic stoichiometric ratios and sealed into 0.5-in.-diameter, 9-in.-long quartz tubes at 10⁻⁶ Torr. Extra ReI3 was added as a transport agent to initiate the crystal growth and successfully transport the Re, S, and Se atomic species. Closely following the Re–S–Se binary-phase diagrams, the crystals were synthesized with a temperature drop of 50 °C over 5 weeks to complete the growth. The samples were cooled down to room temperature and the ampoules were opened in a chemical glove box. The use of two samples grown under different conditions for each material allows us to further validate the reproducibility of the experimental results presented here.
To perform the hydrostatic HP measurements, the samples were mounted inside a UNIPRESS piston cylinder cell. The chosen hydrostatic pressure-transmitting medium was Daphne 7474, which remained hydrostatic and transparent during the whole measurement, up to pressures of 18 kbar. The pressure was determined by measuring the resistivity of an InSb probe, which provides 0.1-kbar sensitivity. A sapphire window in the press allowed optical access to perform the PR measurements. For the PR measurements, a single-grating monochromator of 0.55-m focal length and a Si p-i-n diode were used to disperse and detect the light reflected from the samples. A chopped (270 Hz) 405-nm laser line served as the pump beam, together with a tungsten probe lamp (150 W). Phase-sensitive detection of the PR signal was performed with a lock-in amplifier. Further details on the experimental setup can be found elsewhere.62 All measurements were performed at ambient temperature and pressures up to ≈18 kbar. In this pressure range, no phase transition was observed and only the Td crystal structure was investigated.
Computational details
Ab initio calculations at the DFT level were carried out using the Vienna Ab initio Simulation Package (VASP),63,64 with the projector-augmented wave65 potentials as implemented by Kresse and Joubert.66 The SCAN45 semilocal exchange-correlation functional was employed. SCAN belongs to the meta-generalized-gradient-approximation (meta-GGA) functionals and has been shown to produce more accurate results than conventional GGA functionals at a comparable computational cost.45,46,67 In particular, SCAN is recommended for the electronic structure prediction of materials with heterogeneous bond types67 (e.g., covalent and van der Waals) as well as layered materials.46 It is therefore well suited for the band structure prediction of ReX2. In addition, the revised Vydrov–van Voorhis (rVV10) long-range van der Waals correction68,69,70 was used.
Structural information for ReS2 and ReSe2 was taken from Murray et al.71 and Alcock and Kjekshus,72 respectively. Structure relaxation was undertaken with a Monkhorst–Pack73 k-mesh of 5 × 5 × 5 with the above-mentioned basis set and functionals. Seven electrons were considered for the valence of Re (5d⁵6s²). The cutoff energy for the plane-wave expansion was set to 323.4 and 282.8 eV for ReS2 and ReSe2, respectively, which is 25% above the values recommended in the pseudopotential files. The relevant properties (pressure coefficients, bandgaps, and band character) were carefully checked for convergence with the kinetic energy cutoff, as can be seen in Figures S12–S17 of the S.I. Structures were relaxed until the total energy change and the band structure energy change dropped below 10⁻⁷ eV and the residual atomic forces were less than 0.02 eV/Å in absolute value. Crystallographic information files with the atomic structures at 0 and 20 kbar, as used in the calculations, can be accessed through the Cambridge Crystallographic Data Centre (CCDC deposition numbers 1862132–1862135).
For the calculations of the band structure and optical properties, the spin–orbit interaction was taken into account. The cutoff energy was set to normal accuracy, which is 258.7 eV for ReS2 and 226.2 eV for ReSe2. High-density Gamma-centered k-mesh calculations (34 × 34 × 34) were performed to investigate possible VBM and CBM locations away from the high-symmetry points.
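For reference, a Gamma-centered k-mesh of this kind can be generated in fractional coordinates with a few lines of Python (a generic sketch, independent of the actual VASP input used):

```python
# Sketch: fractional coordinates of an n1 x n2 x n3 Gamma-centered k-grid,
# as used to search for band extrema away from the high-symmetry points.
import numpy as np

def gamma_centered_mesh(n1, n2, n3):
    grids = [np.arange(n) / n for n in (n1, n2, n3)]
    kx, ky, kz = np.meshgrid(*grids, indexing="ij")
    return np.stack([kx, ky, kz], axis=-1).reshape(-1, 3)

kpts = gamma_centered_mesh(34, 34, 34)
print(kpts.shape)  # (39304, 3) k-points spanning the full BZ
```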
All data derived from the experiments and calculations of this study are available from the corresponding author upon reasonable request.
Jariwala, B. et al. Synthesis and characterization of ReS2 and ReSe2 layered chalcogenide single crystals. Chem. Mater. 28, 3352–3359 (2016).
Tongay, S. et al. Monolayer behaviour in bulk ReS2 due to electronic and vibrational decoupling. Nat. Commun. 5, 3252 (2014).
Zhang, E. et al. ReS2-based field-effect transistors and photodetectors. Adv. Funct. Mater. 25, 4076–4082 (2015).
Qin, J.-K. et al. Photoresponse enhancement in monolayer ReS2 phototransistor decorated with CdSe–CdS–ZnS quantum dots. ACS Appl. Mater. Interfaces 9, 39456–39463 (2017).
Yang, S. et al. High-performance few-layer Mo-doped ReSe2 nanosheet photodetectors. Sci. Rep. 4, 5442 (2014).
Liu, E. et al. High responsivity phototransistors based on few‐layer ReS2 for weak signal detection. Adv. Funct. Mater. 26, 1938–1944 (2016).
Zhang, E. et al. Tunable ambipolar polarization-sensitive photodetectors based on high-anisotropy ReSe2 nanosheets. ACS Nano 10, 8067–8077 (2016).
Najmzadeh, M., Ko, C., Wu, K., Tongay, S. & Wu, J. Multilayer ReS2 lateral p–n homojunction for photoemission and photodetection. Appl. Phys. Express 9, 055201 (2016).
Cho, A.-J., Namgung, S. D., Kim, H. & Kwon, J.-Y. Electric and photovoltaic characteristics of a multi-layer ReS2/ReSe2 heterostructure. APL Mater. 5, 076101 (2017).
Wu, K. et al. Domain architectures and grain boundaries in chemical vapor deposited highly anisotropic ReS2 monolayer films. Nano Lett. 16, 5888–5894 (2016).
Yang, S. et al. Tuning the optical, magnetic, and electrical properties of ReSe2 by nanoscale strain engineering. Nano Lett. 15, 1660–1666 (2015).
Liu, E. et al. Integrated digital inverters based on two-dimensional anisotropic ReS2 field-effect transistors. Nat. Commun. 6, 6991 (2015).
Corbet, C. M., Sonde, S. S., Tutuc, E. & Banerjee, S. K. Improved contact resistance in ReSe2 thin film field-effect transistors. Appl. Phys. Lett. 108, 162104 (2016).
Corbet, C. M. et al. Field effect transistors with current saturation and voltage gain in ultrathin ReS2. ACS Nano 9, 363–370 (2015).
Mohammed, O. B. et al. ReS2-based interlayer tunnel field effect transistor. J. Appl. Phys. 122, 245701 (2017).
Yang, S. et al. Layer-dependent electrical and optoelectronic responses of ReSe2 nanosheet transistors. Nanoscale 6, 7226–7231 (2014).
Hafeez, M., Gan, L., Bhatti, A. S. & Zhai, T. Rhenium dichalcogenides (ReX2, X = S or Se): an emerging class of TMDs family. Mater. Chem. Front. 1, 1917–1932 (2017).
Kudrawiec, R. & Misiewicz, J. Optical modulation spectroscopy. Semicond. Res. 95, 95–124 (2012).
Ho, C. H., Liao, P. C., Huang, Y. S. & Tiong, K. K. Temperature dependence of energies and broadening parameters of the band-edge excitons of ReS2 and ReSe2. Phys. Rev. B 55, 15608–15613 (1997).
Ho, C. H., Huang, Y. S., Chen, J. L., Dann, T. E. & Tiong, K. K. Electronic structure of ReS2 and ReSe2 from first-principles calculations, photoelectron spectroscopy, and electrolyte electroreflectance. Phys. Rev. B 60, 15766–15771 (1999).
Ho, C. H., Lee, H. W. & Wu, C. C. Polarization sensitive behaviour of the band-edge transitions in ReS2 and ReSe2 layered semiconductors. J. Phys. 16, 5937 (2004).
Aslan, O. B., Chenet, D. A., van der Zande, A. M., Hone, J. C. & Heinz, T. F. Linearly polarized excitons in single- and few-layer ReS2 crystals. ACS Photonics 3, 96–101 (2016).
Arora, A. et al. Highly anisotropic in-plane excitons in atomically thin and bulklike 1T′-ReSe2. Nano Lett. 17, 3202–3207 (2017).
Jian, Y.-C., Lin, D.-Y., Wu, J.-S. & Huang, Y.-S. Optical and electrical properties of Au- and Ag-doped ReSe2. Jpn. J. Appl. Phys. 52, 04CH06 (2013).
Biswas, D. et al. Narrow-band anisotropic electronic structure of ReS2. Phys. Rev. B 96, 085205 (2017).
Hart, L. S. et al. Electronic bandstructure and van der Waals coupling of ReSe2 revealed by high-resolution angle-resolved photoemission spectroscopy. Sci. Rep. 7, 5145 (2017).
Feng, Y. et al. Raman vibrational spectra of bulk to monolayer ReS2 with lower symmetry. Phys. Rev. B 92, 054110 (2015).
Echeverry, J. P. & Gerber, I. C. Theoretical investigations of the anisotropic optical properties of distorted 1T ReS2 and ReSe2 monolayers, bilayers, and in the bulk limit. Phys. Rev. B 97, 075123 (2018).
Lorchat, E., Froehlicher, G. & Berciaud, S. Splitting of interlayer shear modes and photon energy dependent anisotropic raman response in N-Layer ReSe2 and ReS2. ACS Nano 10, 2752–2760 (2016).
Hou, D. et al. High pressure X-ray diffraction study of ReS2. J. Phys. Chem. Solids 71, 1571–1575 (2010).
Kao, Y.-C. et al. Anomalous structural phase transition properties in ReSe2 and Au-doped ReSe2. J. Chem. Phys. 137, 024509 (2012).
Wang, X. et al. Pressure-induced iso-structural phase transition and metallization in WSe2. Sci. Rep. 7, 46694 (2017).
Nayak, A. P. et al. Pressure-induced semiconducting to metallic transition in multilayered molybdenum disulphide. Nat. Commun. 5, 3731 (2014).
Zhao, Z. et al. Pressure induced metallization with absence of structural transition in layered molybdenum diselenide. Nat. Commun. 6, 7312 (2015).
Bandaru, N. et al. Structural stability of WS2 under high pressure. Int. J. Mod. Phys. B 28, 1450168 (2014).
Yan, Y. et al. Associated lattice and electronic structural evolutions in compressed multilayer ReS2. J. Phys. Chem. Lett. 8, 3648–3655 (2017).
Suski, T. & Paul, W. High pressure in semiconductor physics I and II. Semiconductors and Semimetals. Vols. 54 and 55, (Academic Press, 1998).
Zhou, D. et al. Pressure-induced metallization and superconducting phase in ReS2. npj Quant. Mater. 2, 19 (2017).
Ho, C. H., Huang, Y. S., Liao, P. C. & Tiong, K. K. Piezoreflectance study of band-edge excitons of ReS2−xSex single crystals. Phys. Rev. B 58, 12575–12578 (1998).
Webb, J. L. et al. Electronic band structure of ReS2 by high-resolution angle-resolved photoemission spectroscopy. Phys. Rev. B 96, 115205 (2017).
Eickholt, P. et al. Location of the valence band maximum in the band structure of anisotropic 1T ReSe2. Phys. Rev. B 97, 165130 (2018).
Dybała, F. et al. Pressure coefficients for direct optical transitions in MoS2, MoSe2, WS2, and WSe2 crystals and semiconductor to metal transitions. Sci. Rep. 6, 26663 (2016).
Hu, S. Y. et al. Growth and characterization of tungsten and molybdenum-doped ReSe2 single crystals. J. Alloy. Compd. 383, 63–68 (2004).
Aspnes, D. E. Third-derivative modulation spectroscopy with low-field electroreflectance. Surf. Sci. 37, 418–442 (1973).
Sun, J., Ruzsinszky, A. & Perdew, J. P. Strongly constrained and appropriately normed semilocal density functional. Phys. Rev. Lett. 115, 036402 (2015).
Buda, I. G. et al. Characterization of thin film materials using SCAN meta-GGA, an accurate nonempirical density functional. Sci. Rep. 7, 44766 (2017).
Krukau, A. V., Vydrov, O. A., Izmaylov, A. F. & Scuseria, G. E. Influence of the exchange screening parameter on the performance of screened hybrid functionals. J. Chem. Phys. 125, 224106 (2006).
Gehlmann, M. et al. Direct observation of the band gap transition in atomically thin ReS2. Nano Lett. 17, 5187–5192 (2017).
Gunasekera, S. M., Wolverson, D., Hart, L. S. & Mucha-Kruczynski, M. Electronic band structure of rhenium dichalcogenides. J. Elec Mater. 47, 4314–4320 (2018).
Wolverson, D., Crampin, S., Kazemi, A. S., Ilie, A. & Bending, S. J. Raman spectra of monolayer, few-layer, and bulk ReSe2: an anisotropic layered semiconductor. ACS Nano 8, 11154–11164 (2014).
Zhao, H. et al. Interlayer interactions in anisotropic atomically thin rhenium diselenide. Nano Res. 8, 3651–3661 (2015).
Hart, L. S. et al. Electronic bandstructure and van der Waals coupling of ReSe2 revealed by high-resolution angle-resolved photoemission spectroscopy. Sci. Rep. 7, 5145 (2017).
Zhuang, Y. et al. Deviatoric stresses promoted metallization in rhenium disulfide. J. Phys. D 51, 165101 (2018).
Ho, C. H., Huang, Y. S., Liao, P. C. & Tiong, K. K. Crystal structure and band-edge transitions of ReS2-xSex layered compounds. J. Phys. Chem. Solids 60, 1797–1804 (1999).
Naumov, P. G. et al. Pressure-induced metallization in layered ReSe2. J. Phys. 30, 035401 (2018).
Samadi, M. et al. Group 6 transition metal dichalcogenide nanomaterials: synthesis, applications and future perspectives. Nanoscale Horiz. 3, 90–204 (2018).
Li, T. & Galli, G. Electronic properties of MoS2 nanoparticles. J. Phys. Chem. C 111, 16192–16196 (2007).
Sorkin, V., Pan, H., Shi, H., Quek, S. Y. & Zhang, Y. W. Nanoscale transition metal dichalcogenides: structures, properties, and applications. Crit. Rev. Solid State Mater. Sci. 39, 319–367 (2014).
Jin, W. et al. Direct measurement of the thickness-dependent electronic band structure of MoS2 using angle-resolved photoemission spectroscopy. Phys. Rev. Lett. 111, 106801 (2013).
Johari, P. & Shenoy, V. B. Tuning the electronic properties of semiconducting transition metal dichalcogenides by applying mechanical strains. ACS Nano 6, 5449–5456 (2012).
Fan, X., Chang, C.-H., Zheng, W. T., Kuo, J.-L. & Singh, D. J. The electronic properties of single-layer and multilayer MoS2 under high pressure. J. Phys. Chem. C 119, 10189–10196 (2015).
Kudrawiec, R. & Misiewicz, J. Photoreflectance spectroscopy of semiconductor structures at hydrostatic pressure: a comparison of GaInAs/GaAs and GaInNAs/GaAs single quantum wells. Appl. Surf. Sci. 253, 80–84 (2006).
Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).
Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996).
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994).
Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999).
Sun, J. et al. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional. Nat. Chem. 8, 831–836 (2016).
Vydrov, O. A. & Van Voorhis, T. Nonlocal van der Waals density functional: the simpler the better. J. Chem. Phys. 133, 244103 (2010).
Sabatini, R., Gorni, T. & de Gironcoli, S. Nonlocal van der Waals density functional made simple and efficient. Phys. Rev. B 87, 041108 (2013).
Peng, H., Yang, Z.-H., Perdew, J. P. & Sun, J. Versatile van der Waals density functional based on a meta-generalized gradient approximation. Phys. Rev. X 6, 041005 (2016).
Murray, H. H., Kelty, S. P., Chianelli, R. R. & Day, C. S. Structure of rhenium disulfide. Inorg. Chem. 33, 4418–4420 (1994).
Alcock, N. W. & Kjekshus, A. The crystal structure of ReSe2. Acta Chem. Scand. 19, 79–94 (1965).
Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13, 5188–5192 (1976).
This work was supported by the National Science Centre (NCN) Poland OPUS 11 no. 2016/21/B/ST3/00482. R.O. acknowledges the support by POLONEZ 3 no. 2016/23/P/ST3/04278. This project is carried out under POLONEZ program, which has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 665778. F.D. acknowledges the support from NCN under Fuga 3 grant no. 2014/12/S/ST3/00313. We thank Xavier Rocquefelte for discussions regarding the structure of transition metal dichalcogenides. M.L. and O.R. would like to acknowledge the funding provided by the Natural Sciences and Engineering Research Council of Canada under the Discovery Grant Program RGPIN-2015-04518. The computations were performed using Compute Canada (Calcul Quebec and Compute Ontario) resources, including the infrastructure funded by the Canada Foundation for Innovation. Finally, S.T. acknowledges funding provided by National Science Foundation DMR-1552220 and DMR-1838443.
These authors contributed equally: Robert Oliva, Magdalena Laurien
Department of Experimental Physics, Faculty of Fundamental Problems of Technology, Wroclaw University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370, Wrocław, Poland
Robert Oliva, Filip Dybala, Jan Kopaczek & Robert Kudrawiec
Department of Materials Science and Engineering, McMaster University, JHE 359, 1280 Main Street West, Hamilton, ON, L8S 4L8, Canada
Magdalena Laurien & Oleg Rubel
Department of Materials Science and Engineering, University of California, Berkeley, CA, 94720, USA
Ying Qin & Sefaattin Tongay
R.O. wrote the paper, took part in PR measurements, and analyzed PR data, M.L. carried out first-principles calculations and contributed to the drafting of the discussion, F.D. and J.K. performed the high-pressure PR experiments, Y.Q. and S.T. grew the samples, O.R. planned and supervised the calculations, and R.K. planned the research and coordinated it. All authors discussed the results and commented on the paper.
Correspondence to Robert Oliva.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
https://doi.org/10.1038/s41699-019-0102-x
Socioeconomic factors associated with diarrheal diseases among under-five children of the nomadic population in northeast Ethiopia
Wondwoson Woldu1, Bikes Destaw Bitew2 & Zemichael Gizaw2
Diarrheal disease remains the leading cause of morbidity and mortality among under-five children worldwide. Every day, more than 4000 children lose their lives due to diarrhea. In Ethiopia, diarrhea is the second leading killer of under-five children, next to pneumonia.
A cross-sectional study was conducted to assess the prevalence of under-five diarrhea and socioeconomic factors among the nomadic people in Hadaleala District. A total of 704 under-five children were included in this study, and subjects were recruited by the multistage cluster sampling technique. Data were collected by a pre-tested questionnaire. The multivariable logistic regression analysis was used to identify socioeconomic variables associated with childhood diarrhea.
The 2-week period prevalence of diarrhea among under-five children was 26.1% (95% CI 22.9, 29.3%). The highest prevalence (37.5%) of diarrhea occurred among children aged between 12.0 and 23.0 months. The occurrence of diarrheal disease was associated with the presence of two (AOR = 4.3, p < 0.001) or three (AOR = 22.4, p < 0.001) under-five children in a household, child age of 6.0–11.0 months (AOR = 4.8, p < 0.001), 12.0–23.0 months (AOR = 6.0, p < 0.001), and 24.0–35.0 months (AOR = 2.5, p < 0.05), illiterate mothers (AOR = 2.5, p < 0.05), and poor households (AOR = 1.6, p < 0.05).
Diarrhea prevalence was quite high among under-five children in Hadaleala District, and it was significantly concentrated among children aged between 12.0 and 23.0 months. The number of under-five children, age of children, mothers' education, and household economic status were significantly associated with childhood diarrhea. To minimize the magnitude of childhood diarrhea, implementing various prevention strategies such as health education, child care, personal hygiene, and household sanitation which can be integrated with the existing national health extension program are essential.
In 2010, 7.6 million under-five children died worldwide, nearly 21,000 under-five children every day. The highest rates of child mortality are seen in sub-Saharan Africa, where 1 in 8 children dies before age 5, more than 17 times the average for developed regions. Each year, 1.5 million deaths occur due to diarrheal disease [1]. Diarrheal disease remains the leading cause of morbidity and mortality in children under 5 years of age. Every day, more than 4000 children lose their life due to diarrhea [2]. The vast majority of these deaths are among children who live in low- and middle-income countries [3].
In Ethiopia, diarrheal disease is a major public health problem. The 2010 report of the Ministry of Finance and Economic Development (MOFED) indicated that 20% of childhood deaths in the country were due to diarrhea. The 2011 Demographic and Health Survey of Ethiopia (EDHS) findings also showed that 13% of the children had diarrhea in the 2 weeks preceding the survey at the national level [4, 5].
The Afar Region is one of the poorest, least developed, and most under-serviced regions of Ethiopia and has the highest child mortality rate. It is estimated that 6459 children under the age of five still die each year, a mortality rate of 123/1000 live births [6]. The communities that live in Hadaleala District are nomads who migrate mostly in search of pasture and water. The community has been suffering from shortages of water, hygiene, and sanitation facilities. The main sources of water for the community are rivers, streams, ponds, and wells that provide water for domestic use and for animals. During 2015, the safe water and sanitation coverage of the district was 35 and 12%, respectively [7]. The community depends on livestock as the major subsistence economic activity, based on traditional pastoralist systems tending camels, goats, cattle, sheep, and donkeys [7]. All these conditions are possible risk factors, mainly for childhood diarrheal disease and other water-, hygiene-, and sanitation-related communicable diseases.
Childhood diarrhea results from interactions of socioeconomic factors. The literature shows that the educational status of family members, the occupational status of mothers and fathers, family size, the number of under-five children, household economic status, the age of children, and other socioeconomic factors contribute to diarrheal disease. According to the literature, socioeconomic factors play a role in the occurrence of communicable diseases through their indirect links with quality of life, access to healthcare facilities, access to adequate water and environmental sanitation, the opportunity to use different hygienic methods, and awareness and behavior relating to disease prevention [5, 8–14]. Though the health burden of diarrheal diseases is widely recognized at the global level, there is limited information on their prevalence and the socioeconomic factors contributing to their occurrence among the nomadic population of Ethiopia. This study was therefore designed to assess the prevalence of under-five diarrheal disease and socioeconomic factors among nomadic people in Hadaleala District, Afar Region, northeast Ethiopia. The results of this study could help national, regional, and zonal level policy makers, health institutions at each level, and the community to design and implement strategies to prevent or minimize childhood diarrheal disease. Furthermore, they may also serve as baseline data for further studies and local consumption.
Study design and settings
A community-based cross-sectional study was conducted among the nomadic populations in Hadaleala District, Afar Region, northeast Ethiopia in May, 2015. Hadaleala District is one of the districts of Hariresu Zone, Afar National Regional State. It is located 341 km southwest of the regional capital, Semera, and 268 km north of Addis Ababa, the capital city of Ethiopia. It has an area of 1272 km² divided into 11 rural kebeles (the smallest administrative units in Ethiopia) with a total population of 42,845 as projected for the year 2015. It has 7516 households with an average household size of 5.7 persons per house. Under-five children account for 10.1% (4328) of the total population. As the population lives in a very scattered manner, the average population density is 14 persons/km². Furthermore, the economy of the district is based on livestock and crop production [7]. Due to the dispersed pasture and water resources, the nomadic communities in the district are mobile.
The sample size was determined using the single population proportion formula by considering the following assumptions: p = 31.0% (the 2-week period prevalence of diarrhea among under-five children in Arba Minch District) [9], 95% confidence interval, and a 5% margin of error (d),
$$ n=\frac{z_{\alpha/2}^{2}\,p\left(1-p\right)}{d^{2}}=\frac{(1.96)^{2}\times 0.31\left(1-0.31\right)}{0.05^{2}}=328 $$
Considering a design effect of 2 and a 5% non-response rate, the final sample size was 689 mother–child pairs.
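A minimal Python sketch of this calculation (reproducing the reported figures, with the intermediate value truncated to 328 as in the text) is:

```python
# Sketch of the sample-size calculation: single population proportion
# formula, then a design effect of 2 and a 5% non-response allowance.
import math

p, z, d = 0.31, 1.96, 0.05
n = int(z ** 2 * p * (1 - p) / d ** 2)  # 328 (value truncated as reported)
n_final = math.ceil(n * 2 * 1.05)       # 689 mother-child pairs
print(n, n_final)
```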
Sampling procedure
The multistage cluster sampling technique was used to select study participants from the nomadic population. The clusters were villages with defined geographical boundaries. Out of a total of 11 kebeles, 6 were selected by the simple random sampling technique. The 6 selected kebeles were clustered into 39 villages, and 17 villages were selected by the systematic random sampling technique. Finally, all households (704) with under-five children were included in the study. For households which had more than one under-five child, the youngest one was selected for the study.
Measurement of outcome variable
Childhood diarrheal disease, the primary outcome variable of this study, was defined as having three or more loose or watery stools in 24 h [15, 16]. The prevalence of childhood diarrheal disease within the 2-week period prior to data collection was calculated as the total number of diarrhea cases divided by 704 (the total number of under-five children participating in the study). Household economic status was calculated using the tropical livestock unit (TLU) [17]. During the survey, the number and species of livestock were assessed. The main categories of domestic livestock included in this study were large ruminants (cattle and camels), small ruminants (sheep and goats), non-ruminant grazing animals (asses, mules, and horses, collectively known as equines), and chickens. To determine the household economic status in relation to domestic livestock, the TLU conversion factors were used (Table 1). The TLU score was determined as (1.0 × number of camels) + (0.7 × number of cattle) + (0.1 × number of sheep) + (0.1 × number of goats) + (0.8 × number of horses) + (0.7 × number of mules) + (0.5 × number of asses) + (0.01 × number of chickens). A TLU score below 5 indicated that the household was poor, a score of 5 to 12.99 indicated medium economic status, and a score of 13 or above indicated a rich household [18].
Table 1 Species of domestic livestock and TLU conversion factors to determine households' economic status in Hadaleala District, Afar Region, northeast Ethiopia, May, 2015
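A small Python sketch of the TLU scoring and wealth classification described above (the example herd is hypothetical):

```python
# Sketch: TLU score and household wealth class from the conversion factors
# and cut-offs described above.
TLU = {"camel": 1.0, "cattle": 0.7, "sheep": 0.1, "goat": 0.1,
       "horse": 0.8, "mule": 0.7, "ass": 0.5, "chicken": 0.01}

def wealth_class(herd):
    """herd: dict mapping species to head count -> (TLU score, class)."""
    score = sum(TLU[species] * n for species, n in herd.items())
    if score < 5:
        return score, "poor"
    if score < 13:
        return score, "medium"
    return score, "rich"

print(wealth_class({"camel": 2, "goat": 15, "chicken": 10}))  # (3.6, 'poor')
```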
Data collection tools and procedures
A pre-tested structured questionnaire was used to collect data. The questionnaire was prepared in English, translated to the local language, and back-translated to English to maintain the consistency of the questions. The tool was pre-tested outside the study area in a community with similar characteristics prior to the actual data collection. To improve the quality of the data, eight diploma-graduate nurses and two environmental health officers who were fluent in both Amharic and Afarigna (local languages) and working in the district were involved in the data collection process. After the pretest and the training of data collectors, the data collectors visited all households in the selected clusters. When the data collectors found under-five children during the visits, they interviewed the mothers about the study variables. The youngest (at the time of the survey) child was included in the study when there was more than one under-five child in the household. Finally, the collected data were checked and corrected by the data collectors immediately after finalizing each questionnaire. Supervisors checked the completeness, quality, and consistency of the collected information daily, and the correctness of the information was verified by recollecting data from 5% of the households which provided the original data.
Data management and statistical analysis
Data were entered using the EPI-INFO version 3.5.3 statistical package and exported to SPSS version 20 for further analysis. Cross tabulation was used to describe socioeconomic characteristics and childhood diarrhea. Categorical data were presented as frequency counts or percentages and compared using the Pearson chi-square test. Continuous data were summarized as means or medians with standard deviations and interquartile ranges. The univariable logistic regression analysis was used to choose variables for the multivariable logistic regression analysis; variables with p values below 0.2 in the univariable analysis were then entered into the multivariable logistic regression to control for the possible effects of confounders.
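The two-stage strategy (univariable screening at p < 0.2, then a multivariable model) could be sketched as follows in Python; the synthetic DataFrame, variable names, and random data are placeholders, not the study data:

```python
# Sketch: univariable screening at p < 0.2, then a multivariable logistic
# model. The random DataFrame below stands in for the real survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 704
df = pd.DataFrame({
    "diarrhea": rng.integers(0, 2, n),
    "n_under5": rng.choice(["one", "two", "three"], n),
    "mother_edu": rng.choice(["none", "formal"], n),
    "wealth": rng.choice(["poor", "medium", "rich"], n),
})

kept = []
for var in ["n_under5", "mother_edu", "wealth"]:
    X = sm.add_constant(pd.get_dummies(df[var], drop_first=True, dtype=float))
    fit = sm.Logit(df["diarrhea"].astype(float), X).fit(disp=0)
    if (fit.pvalues.drop("const") < 0.2).any():  # screening criterion
        kept.append(var)

if kept:
    X = sm.add_constant(pd.get_dummies(df[kept], drop_first=True, dtype=float))
    final = sm.Logit(df["diarrhea"].astype(float), X).fit(disp=0)
    print(np.exp(final.params))  # exponentiated coefficients = adjusted ORs
```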
Socioeconomic characteristics of respondents
A total of 704 under-five children and their mothers participated in the study with a 100% response rate. Nearly one third, 229 (32.5%), of the children were aged above 35.0 months. The median age of the children was 24.0 months, and the interquartile range (IQR) was 11.0–38.0 months. The majority, 425 (60.4%), of the households had only one child, and more than half, 378 (53.7%), of the children were male. More than half, 362 (51.4%), of the mothers were aged between 25.0 and 34.0 years. The median age of the mothers was 29.0 years, and the IQR was 24.0–43.0 years. The great majority, 687 (97.6%), of the mothers were currently engaged. Six hundred thirty-three (89.9%) mothers were Afar by ethnicity. Six hundred twenty-four (88.6%) mothers had no formal education. The majority, 668 (94.9%), of the mothers were housewives by occupation. About 456 (64.8%) households were economically poor. More than half, 371 (52.7%), of the households had more than five family members (Table 2).
Table 2 Socioeconomic information of households (n = 704) in Hadaleala District, Afar Region, northeast Ethiopia, May, 2015
Prevalence of diarrheal disease among under-five children
A total of 184 children had diarrhea in the 2-week period prior to data collection. Therefore, the 2-week period prevalence of diarrhea among under-five children was found to be 26.1% (95% CI 22.9, 29.3%). Moreover, 81 children had diarrhea at the time of data collection, and therefore, the point prevalence was found to be 11.5% (95% CI 9.1, 13.8%). The highest prevalence of diarrhea, 69 (37.5%), occurred among children aged 12.0–23.0 months (Fig. 1). Sixty-one (8.7%) of the mothers reported that they had diarrhea in the 2 weeks preceding the survey. More than half, 102 (55.4%), of the children who had diarrhea obtained treatment from public health facilities (Table 3).
Diarrheal cases with respect to age among under-five children in Hadaleala District, Afar Region, northeast Ethiopia, May, 2015
Table 3 Occurrence of diarrheal disease among under-five children (n = 704) and their mothers and measures taken in Hadaleala District, Afar Region, northeast Ethiopia, May, 2015
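The reported interval is consistent with a standard Wald confidence interval for a proportion, as the following sketch shows (our check, using the counts reported above):

```python
# Sketch: 2-week period prevalence with a 95% Wald confidence interval,
# from the 184 cases observed among 704 children.
import math

cases, n = 184, 704
p = cases / n
half = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"prevalence {p:.1%} (95% CI {p - half:.1%}, {p + half:.1%})")
# -> prevalence 26.1% (95% CI 22.9%, 29.4%), matching the reported values
# up to rounding
```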
Factors associated with diarrheal disease among under-five children
Table 4 presents the results of the logistic regression analysis of socioeconomic variables, such as the number, age, and sex of the children, the educational and occupational status of the mothers, and the economic status of the households. The occurrence of diarrheal disease was associated with the number and age of under-five children in the households. The odds of diarrhea were 4.3 times higher among households with two children compared with households with only one child [AOR = 4.3, 95% CI = (2.9, 6.3)]. Similarly, the likelihood of diarrhea was 22.4 times higher among households with three children compared with households with one child [AOR = 22.4, 95% CI = (7.8, 64.5)]. Children aged between 6.0 and 11.0 months were 4.8 times more likely to have diarrhea than children aged under 6 months [AOR = 4.8, 95% CI = (2.1, 10.8)]. Similarly, the occurrence of diarrhea among children aged 12.0–23.0 and 24.0–35.0 months was 6.0 and 2.5 times more likely compared with children aged under 6 months [AOR = 6.0, 95% CI = (2.9, 12.2)] and [AOR = 2.5, 95% CI = (1.2, 5.4)], respectively.
Table 4 Socioeconomic factors associated with childhood diarrhea among under-five children in Hadaleala District, Afar Region, northeast Ethiopia, May, 2015
In addition, childhood diarrheal disease was statistically associated with the educational status of mothers and with household economic status. The likelihood of diarrhea was 2.5 times higher among children whose mothers had no formal education compared with their counterparts [AOR = 2.5, 95% CI = (1.2, 5.2)]. The occurrence of diarrhea was 1.6 times higher among children from economically poor families compared with children from medium-income families [AOR = 1.6, 95% CI = (1.0, 2.2)].
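For completeness, an AOR and its 95% CI follow from a fitted logistic coefficient b and its standard error SE as exp(b) and exp(b ± 1.96·SE); the numbers below are hypothetical values chosen to land near the wealth-status result above:

```python
# Sketch: adjusted odds ratio with 95% CI from a logistic regression
# coefficient and its standard error (hypothetical values).
import math

b, se = 0.47, 0.17
aor = math.exp(b)
lo, hi = math.exp(b - 1.96 * se), math.exp(b + 1.96 * se)
print(f"AOR = {aor:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")  # ~1.6 (1.1, 2.2)
```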
This study investigated the prevalence of diarrheal disease and socioeconomic factors among under-five children of a nomadic community. The 2-week period prevalence of diarrheal disease among under-five children in Hadaleala District was 26.1% (95% CI 22.9, 29.3%). The finding of this study is slightly lower than that of a study conducted in Arba Minch District, 31.0% [9]. On the other hand, the finding of the current study is slightly higher than the findings of various studies conducted in the eastern part of Ethiopia, 22.5% [10], in a rural area of southern Ethiopia, 19.6% [19], in northwest Ethiopia, 24.9% [20], and in Mecha District, 18.0% [5]. The high prevalence in the current study might be attributed to differences in the sociodemographic, environmental, and behavioral characteristics of households and to the nomadic nature of the population. As the communities living in the study area are nomads, they migrate from place to place in search of pasture and water. Having no permanent residence, they may not have access to basic healthcare and sanitation services. Their main sources of water are rivers, streams, and wells, which are prone to contamination. Because they practice open defecation, their living environment is polluted with human excreta, which is a main risk factor for diarrheal disease, especially for children who routinely play in the unhygienic environment. Moreover, the people suffer from illiteracy and poverty, which in turn deteriorate their quality of life. All these phenomena are direct risk factors for the occurrence of childhood diarrheal disease [7].
In this study, it was found that families who had two or more under-five children were more likely to have diarrhea than those who had only one child; as the number of children increased, the frequency of diarrhea increased significantly. This finding is supported by other similar studies. It can be explained by the fact that as the number of children in a household increases, children become more vulnerable to contamination because the quality of care and attention decreases as mothers become unable to care for all of them. Furthermore, children who get diarrheal disease may easily transmit it to others living in the same area [21–25].
The odds of diarrheal morbidity were higher among children aged 6.0–11.0 and 12.0–23.0 months, and lower again (though still elevated) at 24.0 months and above, compared with children aged 0.0–5.0 months. This finding is consistent with those of other similar studies. This may be because children older than 6 months start crawling or walking, which increases their exposure to infectious agents. Moreover, such children start complementary feeding, which may increase their exposure to different types of infections through contaminated food and water [5, 10, 22, 26–30].
This study indicated that maternal educational status was statistically associated with the occurrence of childhood diarrhea. Children whose mothers had attended formal education (primary and above) were less likely to develop diarrhea compared to children whose mothers had not attended any formal education. This may be due to the fact that education is likely to enhance household health and sanitation practices. Education can increase awareness about the transmission and prevention methods of diarrhea. It also encourages changes in behavior at the household level. Results of other studies agreed with this finding [8, 9, 31–34].
Household economic status was the other statistically associated variable. Children from economically poor families had higher odds of developing diarrhea than their counterparts. This may be because richer families have greater opportunity to use soap for hand washing and water filters (aqua-guards) at their houses to protect against microbial contamination of water, and they can construct toilets; lower-income families suffer more from the disease because they cannot afford these facilities [8, 11, 20, 28, 29, 33, 35, 36].
Limitations of the study
Even though childhood diarrhea was properly defined by using the WHO diarrhea assessment tool, its occurrence was determined from the reports of mothers without confirmation by physicians. Consequently, the study might be affected by social desirability bias. However, female data collectors who were part of the community were recruited owing to their strong relationships with mothers, so they could minimize this bias. The other limitation of the study was the scarcity of literature on nomads or similar populations; thus, the discussion was made on the basis of findings in the general population.
The prevalence of diarrhea among under-five children in Hadaleala District was quite high, and it was highest among children aged 12.0–23.0 months. Childhood diarrheal disease was statistically associated with the number and age of under-five children, the educational level of mothers, and the economic status of households. To reduce the burden of childhood diarrheal disease, designing and implementing prevention strategies, such as health education, child care, personal hygiene, and household sanitation, in integration with the existing national health extension program is recommended.
AOR:
Adjusted odds ratio
COR:
Crude odds ratio
EDHS:
Ethiopian Demographic and Health Survey
IQR:
Interquartile range
km2:
Square kilometer
MOFED:
Ministry of Finance and Economic Development
SPSS:
Statistical Package for the Social Sciences
TLU:
Tropical livestock unit
UNICEF. Levels and trends in child mortality, 2011 report. www.unicef.org/media/files/Child_Mortality_Report_2011_Final.pdf. Accessed 15 Sept 2016.
PATH. Diarrheal disease: solutions to defeat a global killer. https://www.path.org/publications/files/IMM_solutions_global_killer_pp1-14.pdf. Accessed 16 Apr 2016.
UNICEF. Water, sanitation and hygiene annual report 2013. www.unicef.org/…/WASH_Annual_Report_Final_7_2_Low_Res.pdf. Accessed 18 Apr 2016.
Nyantekyi LA, Legesse M, Belay M, Tadesse K, Manaye K, Macias C, Erko B. Intestinal parasitic infections among under-five children and maternal awareness about the infections in Shesha Kekele, Wondo Genet, Southern Ethiopia. Ethiop J Health Dev. 2010;24:3.
Dessalegn M, Kumie A, Tefera W. Predictors of under-five childhood diarrhea: Mecha District, West Gojam, Ethiopia. Ethiop J Health Dev. 2011;25(3):192–200.
EDHS. Early childhood mortality rates by socioeconomic characteristics. Ethiopia Demographic and Health Survey, Central Statistical Agency, Addis Ababa, Ethiopia, 2005. www.csa.gov.et/newcsaweb/images/…2005/…/DHS_survey_report_2005.pdf. Accessed 15 Mar 2016.
Ethiopia. Hadaleala District. Finance and economic development office annual report 2014, by Dawud Haji Alisadik and others. Hadaleala: Office of Finance and Economic Development, Afar Region, Ethiopia, 2014.
Rahman A. Assessing income-wise household environmental conditions and disease profile in urban areas: study of an Indian city. Geo J. 2006;65:211–27.
Shikur M, Marelign T, Dessalegn T. Morbidity and associated factors of diarrheal diseases among under five children in Arba-Minch District, Southern Ethiopia. Sci J Public Health. 2013;1(2):102–6.
Mengistie B, Berhane Y, Worku A. Prevalence of diarrhea and associated risk factors among children under-five years of age in eastern Ethiopia: a cross-sectional study. Open J Prev Med. 2013;3(7):446–53.
Teklemariam S, Getaneh T, Bekele F. Environmental determinants of diarrheal morbidity in under-five children, Keffa-Sheka zone, south west Ethiopia. Ethiop Med J. 2000;38:27–34.
Mediratta PR, Feleke A, Moulton HL, Yifru S, Sack BR. Risk factors and case management of acute diarrhoea in North Gondar zone, Ethiopia. J Health Popul Nutr. 2010;28:253–63.
Mekasha A, Tesfahun A. Determinants of diarrhoeal diseases: a community based study in urban south western Ethiopia. East Afr Med J. 2003;80:77–82.
Green S, Small J, Casman A. Determinants of national diarrhoeal disease burden. Environ Sci Technol. 2009;43(4):123–31.
UNICEF/WHO. Diarrhoea: why children are still dying and what can be done. The United Nations Children's Fund/World Health Organization, Geneva, 2009. www.unicef.org/…/Final_Diarrhoea_Report_October_2009_final.pdf. Accessed 18 May 2016.
Black RE, Morris SS, Bryce J. Where and why are 10 million children dying every year? Lancet. 2003;361(9376):2226–34.
Jahnke HE. Livestock production systems and livestock development in tropical Africa; livestock population in tropical Africa by species in numbers and in tropical livestock units (TLU), 1979, p. 10. pdf.usaid.gov/pdf_docs/PNAAN484.pdf. Accessed 10 May 2016.
Grandin BE, Bekure S, Nestel P. Livestock transactions, food consumption and household budgets. FAO Corporate, Documentary Repository. http://www.fao.org/wairdocs/ILRI/x5552E/x5552e0a.htm. Accessed 3 Sept 2016.
Tamiso A, Yitayal Y, Awoke A. Prevalence and determinants of childhood diarrhoea among graduated households, in rural area of Shebedino District, Southern Ethiopia. Sci J Public Health. 2013;2(3):243–51.
Gedefaw M, Takele M, Aychiluhem M, Tarekegn M. Current status and predictors of diarrhoeal diseases among under-five children in a rapidly growing urban setting: the case of city administration of Bahir Dar, northwest Ethiopia. Open J Epidemiol. 2015;5:89–97.
Godana W, Mengistie B. Determinants of acute diarrhoea among children under five years of age in Derashe District, Southern Ethiopia. Rural Remote Health. 2013;13(3):2329.
Mihrete TS, Alemie GA, Teferra AS. Determinants of childhood diarrhea among underfive children in Benishangul Gumuz Regional State, North West Ethiopia. BMC Pediatr. 2014;14:1.
El-Gilany AH, Hammad S. Epidemiology of diarrhoeal diseases among children under age 5 years in Dakahlia, Egypt. East Mediterr Health J. 2005;11:762–75.
Shah MS, Yousafzai M, Lakhani BN, Chotanp AR, Nowshad G. Prevalence and correlates of diarrhea. Indian J Pediatr. 2003;70:207–11.
Arif A, Naheed R. Socio-economic determinants of diarrhoea morbidity in Pakistan. Acad Res Int. 2012;2:398–432.
Victor R, Baines SK, Agho KE, Dibley MJ. Determinants of breastfeeding indicators among children less than 24 months of age in Tanzania: a secondary analysis of the 2010 Tanzania Demographic and Health Survey. BMJ Open. 2013;3:1.
Calistus W, Alessio P. Factors associated with diarrhea among children less than 5 years old in Thailand: a secondary analysis of Thailand multiple indicator cluster survey. J Health Res. 2009;23:17–22.
Woldemicael G. Diarrheal morbidity among children in Eritrea: environmental and socio-economic determinants. J Health Popul Nutr. 2001;19(2):83–90.
Boadi KO, Kuitunen M. Childhood diarrheal morbidity in the Accra Metropolitan Area, Ghana: socio-economic, environmental and behavioral risk determinants. Journal of Health & Population in Developing Countries. 2005. http://www.jhpdc.unc.edu. Accessed 3 July 2016.
Dewey KG, Adu-Afarwuah S. Systematic review of the efficacy and effectiveness of complementary feeding interventions in developing countries. Matern Child Nutr. 2008;4:24–85.
Anteneh A, Kumie A. Assessment of the impact of latrine utilization on diarrheal diseases in the rural community of Hulet Ejju Enessie Woreda, East Gojjam Zone, Amhara Region. Ethiop J Health Dev. 2010;24(2):114.
Yilgwan C, Yilgwan G, Abok I. Domestic water sourcing and the risk of diarrhea: a cross-sectional survey of a semi-urban community in Nigeria. J Med. 2005;5(1):34–7.
Gebru T, Taha M, Kassahun W. Risk factors of diarrheal disease in under-five children among health extension model and non-model families in Sheko District rural community, Southwest Ethiopia: comparative cross-sectional study. BMC Public Health. 2014;14:395.
Yilgwan CS, Okolo SN. Prevalence of diarrhea disease and risk factors in Jos University Teaching Hospital, Nigeria. Ann Afr Med. 2012;11(4):217–21.
Siziya S, Muula AS, Rudatsikira E. Diarrhoea and acute respiratory infections prevalence and risk factors among under-five children in Iraq in 2000. Indian J Pediatr. 2009;35:8.
Root GM. Sanitation, community environments and childhood diarrhoea in rural Zimbabwe. J Health Popul Nutr. 2001;19:73–82.
The authors are pleased to acknowledge the data collectors, field supervisors, study participants, Hadaleala District Health Office, and Afar Regional Health Bureau for their unreserved contributions to the success of this study. The authors would also like to extend their gratitude to the Hadaleala District administrators for their facilitation.
The authors of this study did not receive funds from any funding organization; however, the University of Gondar covered the questionnaire duplication and data collection fees.
Data will be made available upon request to the primary author.
All the authors actively participated in the conception of the research issue, development of the research proposal, data collection, analysis and interpretation, and writing of the research report. WW designed the study protocol and supervised data quality. ZG analyzed the data and wrote the manuscript. BDB revised the study protocol and the manuscript. All authors read and approved the final manuscript.
This manuscript does not contain any individual person's data.
Ethical clearance was obtained from the institutional review board of the University of Gondar and an official letter was submitted to the district administrators. There were no risks due to participation in this research project, and the collected data were used only for this research purpose. Verbal informed consent was obtained from the mothers. All information collected from each household was treated with complete confidentiality. During data collection, oral rehydration solution and Zinc tablets with clear instructions were given to children who had diarrhea, and advice was given to mothers to take their children to a nearby health institution for further management.
Hadaleala District Health Office, Hadaleala District, Afar Regional State, Ethiopia
Wondwoson Woldu
Department of Environmental and Occupational Health and Safety, University of Gondar, Gondar, Ethiopia
Bikes Destaw Bitew & Zemichael Gizaw
Bikes Destaw Bitew
Zemichael Gizaw
Correspondence to Zemichael Gizaw.
Woldu, W., Bitew, B.D. & Gizaw, Z. Socioeconomic factors associated with diarrheal diseases among under-five children of the nomadic population in northeast Ethiopia. Trop Med Health 44, 40 (2016). https://doi.org/10.1186/s41182-016-0040-7
Childhood diarrhea
Under-five children
Socioeconomic factors
Afar Region
Solving the problem needs inequality analysis and inequality algebra
The problem: 3rd Hard number system question for CAT
The denominator of a fraction is less than the square of the numerator by 1. If 2 is added to both, the fraction becomes more than $\frac{1}{3}$; and when 3 is subtracted from both, the fraction remains positive but smaller than $\frac{1}{10}$. Find the fraction.
Hint: For a quick solution of this 3rd hard number system question for CAT, explore inequality analysis and inequality algebra in depth.
Solution to the 3rd hard number system question for CAT: Inequality algebra with strategic focus on variable elimination
Let the fraction be $\displaystyle\frac{n}{d}$, where $n$ and $d$ are the positive integer numerator and denominator.
By the first statement,
$d=n^2-1$.................................(1)
By the second statement,
$\displaystyle\frac{n+2}{d+2} > \displaystyle\frac{1}{3}$......................(2)
And by the third statement,
$0 < \displaystyle\frac{n-3}{d-3} < \displaystyle\frac{1}{10}$..............(3)
How can we solve this awkward problem in the shortest possible way? Which of the three relations individually or combined with another would give us the first breakthrough?
The first relation merely fixes $d$ in terms of $n$; it gives no clue to their values except that,
$n < d$.
The second relation is not responsive, but the third provides the first breakthrough.
As the fraction $\displaystyle\frac{n-3}{d-3}$ is positive, its numerator and denominator must have the same sign. They cannot both be negative: with $d = n^2 - 1$, trying $n = 1, 2, 3$ gives $d = 0$ (fraction undefined), $d - 3 = 0$ (fraction undefined), and $n - 3 = 0$ (fraction zero, not positive), respectively. So both are positive:
$n > 3$
$d > 3$.
Both $n$ and $d$ are greater than 3.
This is the time to eliminate $d$ using the first equation; from the resulting inequalities involving only $n$, which is greater than 3, we should get more clarity on the problem.
As planned, substitute $n^2-1$ for $d$ in inequality (2),
$\displaystyle\frac{n+2}{n^2+1} > \displaystyle\frac{1}{3}$,
Or, $3n+6 > n^2+1$..........(4), by cross-multiplying, which is permissible here because both denominators are positive.
Substitute $n^2-1$ for $d$ in inequality (3),
$\displaystyle\frac{n-3}{n^2-4} < \displaystyle\frac{1}{10}$,
Or, $10n-30 < n^2-4$.......(5), again by cross-multiplying with positive denominators (as $n > 3$, $n^2 - 4 > 0$).
Clearly $n^2$ can be eliminated between inequalities (4) and (5). That would give us, for the first time, an inequality in $n$ alone, and knowing $n$ to be greater than 3, we should have a much firmer grip on its value.
But how to eliminate $n^2$? These two are inequalities.
Well, we can. Easily.
First convert inequality (5) into a greater-than inequality, reversing its direction by multiplying both sides by $-1$,
$30-10n > 4-n^2$.........(6)
Now inequalities (4) and (6) can be added together as both are of the same nature: if $a > b$ and $c > d$, then $a + c > b + d$.
Adding (4) and (6),
$36 -7n > 5$.
Rearranging and dividing through by 7,
$n < \displaystyle\frac{36}{7} - \displaystyle\frac{5}{7} = \displaystyle\frac{31}{7} \approx 4.43$,
Or, $n \leq 4$, as $n$ is an integer.
Knowing $n > 3$ we get the big breakthrough,
$n=4$, and,
$d=n^2-1=15$.
The fraction is, $\displaystyle\frac{4}{15}$.
Answer: $\displaystyle\frac{4}{15}$.
Verification
For equation 1: $15 = 4^2-1=16-1$,
For inequality 2: $\displaystyle\frac{4+2}{15+2}=\frac{6}{17} > \displaystyle\frac{1}{3}$
For inequality 3: $\displaystyle\frac{4-3}{15-3}=\frac{1}{12} < \displaystyle\frac{1}{10}$.
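As an independent cross-check on the whole chain of reasoning, the brute-force search below tests the three conditions directly; the upper search bound of 100 is an arbitrary assumption (inequality (4) in fact caps $n$ at 4).

```python
from fractions import Fraction

for n in range(1, 100):
    d = n * n - 1                          # statement 1: d = n^2 - 1
    if d <= 0 or d == 3:                   # skip n = 1, 2 (zero denominators)
        continue
    cond2 = Fraction(n + 2, d + 2) > Fraction(1, 3)
    cond3 = Fraction(0) < Fraction(n - 3, d - 3) < Fraction(1, 10)
    if cond2 and cond3:
        print(f"{n}/{d}")                  # prints only: 4/15
```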
Inequality rules used
Cross-multiplication of numerator and denominator between LHS and RHS
If $\displaystyle\frac{a}{b} < \displaystyle\frac{c}{d}$ with $b, d > 0$ (as is the case here), then,
$ad < bc$.
Reversing nature of inequality
Multiply both sides of an inequality by $-1$ to reverse the nature of the inequality.
If $a > b$, then,
$-a < -b$.
Addition of two inequalities
If $a > b$ and $c > d$,
$a+c > b+d$.
The sum of two smaller quantities obviously remains less than the sum of the two larger quantities.
So the inequality rules are nothing that we didn't know. All these are based on very basic arithmetic concepts only.
Though inequality algebra looks normal and obvious, we rarely need to use it; but when used suitably, a difficult problem can easily be broken open.
The main push, though, came from the necessity to eliminate $d$; that elimination played the core role in the systematic and directed problem solving.
Inequality algebra system
Derive the relation between alpha, beta and gamma in a transistor
Relation between the Beta and Gamma Functions. Multiply by $e^{-2s}$, then integrate with respect to $s$, $0 \le s \le A$, to get
$$B(a,b) \int_0^A e^{-2s}(2s)^{a+b-1}\,ds = \int_0^A \int_{-s}^{s} e^{-2s}(s+t)^{a-1}(s-t)^{b-1}\,dt\,ds.$$
Take the limit as $A \to \infty$ to get
$$\tfrac{1}{2}\,B(a,b)\,\Gamma(a+b) = \lim_{A \to \infty} \int_0^A \int_{-s}^{s} e^{-2s}(s+t)^{a-1}(s-t)^{b-1}\,dt\,ds.$$
$L^2 = L_0^2(1 + \alpha\,\Delta T)^2$, so $A = L_0^2(1 + 2\alpha\,\Delta T + \alpha^2\,\Delta T^2) \approx A_0(1 + 2\alpha\,\Delta T)$, the $\alpha^2\,\Delta T^2$ term being neglected as very small. Comparing with $A = A_0(1 + \beta\,\Delta T)$ gives $\beta = 2\alpha$, and hence $\alpha : \beta : \gamma = 1 : 2 : 3$.
Divide the numerator and denominator by $I_B$: $\alpha = \dfrac{I_C/I_B}{(I_C/I_B) + (I_B/I_B)} = \dfrac{\beta}{\beta + 1}$.
α and β are two important parameters of the transistor which define its current gains: $\alpha = I_C/I_E$ and $\beta = I_C/I_B$. Since $I_C \gg I_B$, β is large; its value here lies between 15 and 50. Since $I_C < I_E$, $\alpha < 1$; its value lies between 0.95 and 0.99. Relation between α and β: $I_E = I_B + I_C$; dividing through by $I_C$ gives $1/\alpha = 1/\beta + 1$, so $\beta = \alpha/(1 - \alpha)$.
Derivation of the relation between alpha, beta and gamma in a transistor
The relationship between alpha and beta refers to a bipolar transistor. Alpha, α, is the ratio of collector current to emitter current and is usually close to one. Beta, β, is the ratio of collector current to base current and is usually a large number (50–1000). Because the emitter current is the sum of the collector current and the base current, one can derive a relationship between alpha and beta: β = α / (1 − α). Transistor alpha (α) and beta (β) parameters represent the current gain, also known as the forward current transfer ratio, of a BJT transistor; these parameters and the associated formulas are used in semiconductor calculations. In the common-base configuration of an NPN transistor, the emitter current ($I_E$) is the input and the collector current ($I_C$) is the output.
Derive the relation between alpha, beta and gamma related to thermal expansion: they are in the ratio of 1:2:3, and hence each can be written in terms of another. In the transistor case, the common-emitter current gain in terms of the common-base current gain is $\beta_{dc} = \alpha_{dc}/(1 - \alpha_{dc})$. For a transistor, $\alpha_{dc}$ is close to, but always less than, 1 (about 0.92 to 0.98), and $\beta_{dc}$ ranges from 20 to 200 for most general-purpose transistors.
Basic Electronics: Relationship Between Alpha and Beta
According to the above relation, alpha is equal to half of beta and one third of gamma. From the expression it is clear that alpha describes the linear, beta the areal, and gamma the volume expansion of the substance. This, then, is the relation between alpha, beta and gamma. What is the gamma of a transistor?
Beta has a value between 20 and 200 for most general-purpose transistors. By combining the expressions for both alpha, α, and beta, β, the mathematical relationship between these parameters, and therefore the current gain of the transistor, can be given as $\beta = \dfrac{\alpha}{1 - \alpha}$, i.e. $\alpha = \dfrac{\beta}{1 + \beta}$.
Since a given transistor may be connected in any of three basic configurations, there is a definite relationship, as pointed out earlier, between alpha (α), beta (β), and gamma (γ). These relationships are listed again for your convenience. Take, for example, a transistor that is listed on a manufacturer's data sheet as having an alpha of 0.90.
The first equation you wrote is correct:
(1) $I_E = I_C + I_B$
But the other two equations are written by neglecting the reverse saturation currents $I_{CBO}$ and $I_{CEO}$. The original equations are:
(2) $I_C = \alpha I_E + I_{CBO}$
(3) $I_C = \beta I_B + I_{CEO} = \beta I_B + (\beta + 1) I_{CBO}$
With an alpha of 0.90, 90 electrons reach the collector for every 100 electrons flowing between the emitter–collector terminals.
This lecture gives the relationship between alpha, beta and gamma.
This video shows how to derive the relation between alpha, beta and gamma in thermal expansion. The relation between alpha, beta, and gamma is given in the form of a ratio, 1:2:3, and can be expressed as $\alpha = \frac{\beta}{2} = \frac{\gamma}{3}$. The underlying relations are $L = L_0(1 + \alpha\,\Delta T)$, where $\alpha$ is the coefficient of linear expansion, and $A = A_0(1 + \beta\,\Delta T)$, where $\beta$ is the coefficient of areal expansion.
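A quick numerical sanity check of $\beta \approx 2\alpha$ and $\gamma \approx 3\alpha$ is sketched below; the values of alpha and the temperature rise are made up for illustration.

```python
alpha, dT = 1.2e-5, 10.0            # assumed linear expansivity (1/K) and rise (K)

L0 = 1.0                            # unit length at the initial temperature
L = L0 * (1 + alpha * dT)           # expanded length
A, V = L ** 2, L ** 3               # face area and volume of a unit cube

beta_eff = (A - 1.0) / dT           # effective areal expansivity
gamma_eff = (V - 1.0) / dT          # effective volumetric expansivity
print(beta_eff / alpha, gamma_eff / alpha)   # ~2.0 and ~3.0
```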
This large size reduces the penetrative power of an alpha particle. Beta Particles: beta particles (β) are high-energy electrons or positrons, carrying a negative or positive charge respectively. Considerably smaller in size than alpha particles, beta particles have higher penetrative power. Gamma Rays: gamma rays (γ) are not particles with a mass; they are a kind of electromagnetic radiation considerably higher in energy than X-rays, and as a form of energy they have no size or mass. Here is the relationship between the Beta and Gamma functions: $B(\alpha, \beta) = \dfrac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha + \beta)}$.
Difference Between Alpha, Beta and Gamma Radiation – Summary:
Property | Alpha radiation | Beta radiation | Gamma radiation
Nature of particle | A helium nucleus | An electron/positron | A photon
Charge | +2 | −1 (electron) / +1 (positron) | 0
Mass | ~4 u | ~1/1836 u | 0
Speed | ~0.05c | up to 0.99c | c
Ion pairs per cm of air | ~1,000,000 | ~10,000 | ~10
Deflection by perpendicular magnetic fields | Some deflection | Large deflection | No deflection
Stopped by | A sheet of paper | A few mm of aluminium | Thick lead or concrete
Define alpha and beta for a transistor
The relationship between the beta and gamma functions can be expressed mathematically as $B(m,n) = \dfrac{\Gamma(m)\,\Gamma(n)}{\Gamma(m+n)}$, where $B(m,n)$ is the beta function with two variables $m$ and $n$, and $\Gamma(m)$ is the gamma function with variable $m$.
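A numerical check of this identity is sketched below, comparing the closed form against a direct midpoint-rule approximation of the defining integral; the sample arguments m = 2.5 and n = 1.5 are arbitrary.

```python
import math

m, n = 2.5, 1.5

closed_form = math.gamma(m) * math.gamma(n) / math.gamma(m + n)

# Midpoint-rule approximation of B(m, n): integral of t^(m-1) (1-t)^(n-1) on (0, 1)
N = 200_000
numeric = sum(((k + 0.5) / N) ** (m - 1) * (1 - (k + 0.5) / N) ** (n - 1)
              for k in range(N)) / N

print(closed_form, numeric)   # both ~0.19635
```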
The overall expressions for the relations between alpha, beta and gamma (α, β and γ) in a transistor are given below:
α = β / (β + 1)
β = α / (1 − α)
γ = β + 1
Note: We have already discussed α, β and γ, current gain, voltage gain, power gain, etc. for the PNP transistor, and they are the same for both PNP and NPN transistors.
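These three relations are straightforward to wire up as conversion helpers; a minimal sketch follows, with an arbitrary example value of α.

```python
def beta_from_alpha(alpha: float) -> float:
    return alpha / (1.0 - alpha)     # common-emitter gain from common-base gain

def alpha_from_beta(beta: float) -> float:
    return beta / (beta + 1.0)       # common-base gain from common-emitter gain

def gamma_from_beta(beta: float) -> float:
    return beta + 1.0                # common-collector gain: I_E / I_B

alpha = 0.98
beta = beta_from_alpha(alpha)
print(beta, gamma_from_beta(beta), alpha_from_beta(beta))   # ~49.0 ~50.0 ~0.98
```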
This video deals with the definition of alpha and beta and the relation between alpha and beta: $\beta = \alpha/(1-\alpha)$ and $\alpha = \beta/(1+\beta)$.
In the realm of calculus, many complex integrals can be reduced to expressions involving the Beta Function. The Beta Function is important in calculus due to its close connection to the Gamma Function, which is itself a generalization of the factorial function.
Alpha of a transistor is the current gain in the common-base configuration, defined as the ratio of the change in collector current to the change in emitter current, while beta is the current gain in the CE configuration, defined as the ratio of the change in collector current to the change in base current.
Derive the relationship between alpha, beta and gamma Explain it step by step - Physics - Thermal Properties Of Matte Https Www Nuclearscienceweek Org Wp Content Uploads 2014 09 Alphas Betas Gammas Oh My Pdf Great as a research activity for students either inside or outside of class or as a consolidation or revision task this a4 sheet helps students keep track of what they need to know about alpha beta and gamma radiation. Alpha beta and gamma radiation worksheet answers. Alpha beta and gamma ray. Teacher.
ALPHA-, BETA-, GAMMA-, AND DELTA-HEXACHLOROCYCLOHEXANE U.S. DEPARTMENT OF HEALTH AND HUMAN SERVICES Public Health Service Agency for Toxic Substances and Disease Registry August 2005 . HEXACHLOROCYCLOHEXANE . ii . DISCLAIMER . The use of company or product name(s) is for identification only and does not imply endorsement by the Agency for Toxic Substances and Disease Registry. Basic Electronics - Transistor Configurations. A Transistor has 3 terminals, the emitter, the base and the collector. Using these 3 terminals the transistor can be connected in a circuit with one terminal common to both input and output in a 3 different possible configurations. The three types of configurations are Common Base, Common Emitter.
Download PDF. Article; Open Access ; Published: 27 December 2019; The Relation between Alpha/Beta Oscillations and the Encoding of Sentence induced Contextual Information. René Terporten 1,2, Jan. Transistor Configurations. Any transistor has three terminals, the emitter, the base, and the collector. Using these 3 terminals the transistor can be connected in a circuit with one terminal common to both input and output in three different possible configurations. The three types of configurations are Common Base, Common Emitter and Common.
Alpha-, beta- and gamma-cyclodextrins are cyclic hexamers, heptamers, and octamers of glucose, respectively, and thus are hydrophilic; nevertheless, they have the ability to solubilize lipids through the formation of molecular inclusion complexes. The volume of lipophilic space involved in the solub The main difference between alpha beta and gamma particles is that alpha particles have the least penetration power while beta particles have a moderate penetration power and gamma particles have the highest penetration power. Of the three types of radiation alpha particles are the easiest to stop. Electrons and positrons which make up beta particles are antiparticles of each other. Gamma rays. Transistor 'alpha' and 'beta'. The proportion of the electrons able to cross the base and reach the collector is a measure of BJT efficiency. The heavy doping of emitter region and light doping of base region cause a number of electrons to be injected from emitter into the base than holes to be injected from the base into emitter
between alpha, beta, and gamma radiation and no isotopic identification can be performed directly. Construction Although the G-M tube has evolved over the years, the fundamental design has remained unchanged. The sizes and shapes of G-M tubes, and formulations of the counting gasses used are changed to optimize G-M tubes for specific applications. The G-M pancake is one such optimization. The. The difference between the dependent variable y and the estimated systematic influence of x on y is named the residual: To receive the optimal estimates for alpha and beta we need a choice-criterion; in the case of OLS this criterion is the sum of squared residuals: we calculate alpha and beta for the case in which the sum of all squared deviation Previous studies have shown clear differences between alpha and beta rhythms (Wang, 2010, Bressler and Richter, 2015, Haegens et al., 2014, Gregoriou et al., 2015); however, the present analysis revealed one peak spanning across alpha and beta frequency ranges (similar to Buffalo et al., 2011), which we therefore refer to as the alpha-beta band. Peak frequencies were essentially identical in. To evaluate the Beta function we usually use the Gamma function. To find their relationship, one has to do a rather complicated calculation involving change of variables (from rectangular into tricky polar) in a double integral. This is beyond the scope of this section, but I include the calculation for . 4 Arun Mahanta | Kaliabor College 4 the sake of completeness Thus BETA DISTRIBUTION. 10.5: Nuclear Reactions. By the end of this section, you will be able to: Early experiments revealed three types of nuclear rays or radiation: alpha (α) rays, beta (β) rays, and gamma (γ) rays. These three types of radiation are differentiated by their ability to penetrate matter. Alpha radiation is barely able to pass through a thin.
What is the relationship between alpha and beta in a transistor?
What are the alpha and beta parameters for a transistor? Obtain a relation between them.
Difference between alpha, beta and gamma rays in tabular form:
S.No | Alpha rays | Beta rays | Gamma rays
1 | These are nuclei of helium. | These are fast-moving electrons. | These are electromagnetic radiations.
2 | They carry a positive charge. | They carry a negative charge. | They carry no charge.
3 | In an electric field they are deflected towards the cathode (negative plate). | In an electric field they are deflected towards the anode (positive plate). | They are not deflected in an electric field.
Alpha, Beta, and Now… Gamma. The potential benefits from good financial planning decisions are often difficult to quantify.
Table 2-1. Transistor Configuration Comparison Chart. Gamma (γ), the current gain of the common-collector circuit, is related to the collector-to-base current gain, beta (β), of the common-emitter circuit by the formula γ = β + 1.
7: Alpha, Beta, and Gamma Diversity. Whittaker (1972) described three terms for measuring biodiversity over spatial scales: alpha, beta, and gamma diversity. Alpha diversity refers to the diversity within a particular area or ecosystem, and is usually expressed by the number of species (i.e., species richness) in that ecosystem
Neither the difference between alpha and beta, nor that between theta and gamma, was significant at the p < .05 level. These conclusions were supported by the analyses given below. Repeated-measures ANOVAs were conducted on the factors of Frequency and Lag for both first-target accuracy (T1) and second-target accuracy contingent on successful first-target detection (T2|T1). The p values reported are Greenhouse–Geisser corrected.
Transistor Alpha and Beta - Peter Vis
A series of scientific discoveries involving four scientists lie behind what we now know as alpha, beta, and gamma radiation:. It was Henri Becquerel who accidentally discovered radioactivity in 1896. Marie Curie coined the term radioactivity when she and Pierre Curie started working on the phenomenon that Becquerel had discovered
17.3: Types of Radioactivity: Alpha, Beta, and Gamma Decay. Compare qualitatively the ionizing and penetration power of alpha particles (α), beta particles (β), and gamma rays (γ). Express the changes in the atomic number and mass number of a radioactive nucleus when an alpha, beta, or gamma particle is emitted.
Both Beta and Gamma engines use displacer-piston arrangements, the Beta engine having both the displacer and the piston in an in-line cylinder system, whilst the Gamma engine uses separate cylinders. The Alpha engine is conceptually the simplest Stirling engine configuration, however suffers from the disadvantage that both the hot and cold pistons need to have seals to contain the working gas
…the alpha, beta and gamma frequency ranges. The findings indicate that synchronous oscillations in the alpha frequency band inhibit the perception of briefly presented stimuli, whereas synchrony in higher frequency ranges (>20 Hz) enhances visual perception. We conclude that alpha, beta and gamma oscillations indicate the attentional state of a subject and thus are able to predict perception.
Derive the relation between alpha, beta and gamma, that is, α = β/2 = γ/3; I need the derivation. (Physics – Thermal Properties of Matter)
Relationship between Alpha Beta and Gamma in BJT
The key difference between alpha, beta, gamma and delta coronaviruses is that alpha- and betacoronaviruses are mainly associated with infections in mammals, while gamma- and deltacoronaviruses primarily infect birds. Coronaviruses are enveloped viruses containing a positive-sense single-stranded RNA genome and have characteristic club-like or crown-like spikes on their surfaces. A gamma quantum has no charge. Alpha and beta particles show deflection when moving through magnetic and electric fields; alpha particles have a lower curvature, and gamma radiation shows no deflection. Q: What's the difference between alpha, beta and gamma radiation? A: Everything in nature would prefer to be in a relaxed, or stable, state. Unstable atoms undergo nuclear processes that cause them to become more stable; one such process involves emitting excess energy from the nucleus. ABSORPTION OF BETA AND GAMMA RAYS. Objective: to study the behavior of gamma and beta rays passing through matter; to measure the range of beta particles from a given source and hence determine the endpoint energy of decay; to determine the absorption coefficient in lead of the gamma radiation from a given source. The transistor current gain is normally specified in terms of h_FE, h_fe, or the Greek letter beta, β. When designing any transistor circuit, it is necessary to ensure there is sufficient gain for the circuit to operate correctly. Gain levels can be very high for many small-signal devices, with current gains up to 1000 not uncommon, but for power transistors the gains are very much lower.
Derive the relation between alpha, beta, gamma related to thermal expansion
(a) Draw the characteristics of the CE transistor configuration and explain the shapes of the curves qualitatively. [4] (b) Define the transistor current gains alpha, beta and gamma, and derive the relation between alpha and beta. [4]
The hump‐shaped patterns of elevational gamma and alpha diversity for herbaceous species were also significantly correlated, but the concordance between the alpha diversity of herbaceous species and local gamma diversity was stronger. Elevational patterns of alpha diversity were coarsely consistent across grain sizes, although the patterns became more pronounced at larger grain sizes
What is the relationship between alpha and gamma? Expert answer: linear expansion corresponds to α and cubical (volumetric) expansion to γ; thus, the relation between linear expansion and cubical expansion is γ = 3α.
Define α and β. Derive the relation between them.
In the common-base configuration, alpha is used in place of beta for gain. Alpha is the ratio of the collector current (output current) to the emitter current (input current), calculated as α = ΔI_C/ΔI_E. For example, if the input current (I_E) in a common-base circuit changes from 1 mA to 3 mA and the output current (I_C) changes from 1 mA to 2.8 mA, the current gain is α = 1.8 mA / 2 mA = 0.90. This is a current gain of less than 1.
Main Difference - Alpha vs Beta vs Gamma Particles. Radioactivity is a process of decay of chemical elements with time. This decay occurs through emission of different particles. The emission of particles is also called the emission of radiation.The radiation is emitted from the nucleus of an atom, converting protons or neutrons of the nucleus into different particles
An example: alpha, beta and gamma diversity across a mountain landscape. Let's take a mountain slope as our landscape; on this slope there will be many different patches of forests and grasslands. Alpha diversity is the species diversity present within each forest or grassland patch of the slope; beta diversity is represented by the species diversity between any two patches. Difference Between Alpha and Beta Hemolysis (www.differencebetween.com). Key difference: red blood cells are the most common type of blood cells in our blood. They are produced by the bone marrow and are important in carrying oxygen from the lungs to the heart and to the entire body. Red blood cells contain hemoglobin molecules; hemoglobin is an iron-containing protein. Together with the Arteriviridae, Mesoniviridae, and Roniviridae, the family Coronaviridae (subfamilies Coronavirinae and Torovirinae) makes up the order Nidovirales. Coronaviruses belong to the subfamily Coronavirinae, which has been divided into four genera: Alpha-, Beta-, Gamma-, and Deltacoronavirus (G. Tekes, H.-J. Thiel, in Advances in Virus Research, 2016). Language prediction is reflected by coupling between frontal gamma and posterior alpha oscillations: frequency bands (theta, alpha, beta, and gamma) have been associated with various cognitive functions, such as working memory and long-term memory, attention, as well as different aspects of language processing (for a review, see Bastiaansen et al., 2012). Overview of the 7 crystal systems: they are defined by the lengths and angles of the primitive translation vectors and exhibit different levels of symmetry. The 14 Bravais lattices: one classifies different lattices according to the shape of the parallelepiped spanned by its primitive translation vectors; however, this is not yet the best solution for a classification with respect to symmetry.
Gamma-tocopherol is the major form of vitamin E in many plant seeds and in the US diet, but has drawn little attention compared with alpha-tocopherol, the predominant form of vitamin E in tissues and the primary form in supplements. However, recent studies indicate that gamma-tocopherol may be important.
The radiation was found to be of three types, called alpha (α), beta (β) and gamma (γ) after the first three letters in the Greek alphabet. The radiation emitted transforms the element into a new element; the process is called a decay or a disintegration. The research leading to the identification of the radiation emitted from radioactive atoms is exciting and fundamental. Alpha, Beta and Gamma Rays: elements heavier than lead (Z = 82) are radioactive. Radioactive materials emit alpha particles (helium nuclei), beta particles (electrons), and gamma rays (electromagnetic radiation). How can we tell the difference between these? Penetrating power is different for the different radioactive particles emitted. Lesson 43: Alpha, Beta, & Gamma Decay. The late 1800s and early 1900s were a period of intense research into the new nuclear realm of physics. In 1896 Henri Becquerel found that a sample of uranium he was doing experiments with had a special property: after he was done with a series of experiments using the uranium, he put it into a drawer with a photographic plate. Alpha, Beta and Gamma Alumina as a Catalyst – A Review (Kiran Y. Paranjpe). Abstract: alpha (α), beta (β) and gamma (γ) are the different phases of alumina. α-alumina, also known as nano-alumina, is a white puffy powder; its specific surface area is low, and it is resistant to high temperature and inert, but it does not belong to the activated aluminas and has almost no catalytic activity.
What is the relation between α, β and γ in a transistor?
What is the difference between alpha, beta, gamma and neutron radiation? Gamma rays and beta particles make up most of the fallout radiation immediately after a nuclear explosion; gamma rays are the immediate hazard to life. There are four major types of radiation. Alpha particles: alpha particles cannot penetrate most matter; a piece of paper or the outer layers of skin is sufficient to stop them.
Transistor configurations - tpub
The numerical range of the gamma index is between 0 and 1. This measure may be written as a percentage and would thus range from 0 to 100. Figure 4.5 shows maximal connectivity; it is evident from the figure that, for a planar graph, the addition of each vertex to the system increases the maximum number of edges by three, and this proposition is true for any planar network. The three main types of radiation are alpha, beta and gamma. Radioactive decay rates are stated in terms of their half-lives: a radioactive half-life is the time it takes for half of the atoms to emit radiation, and the half-life of a given nuclear species is related to its radiation risk. The radioactivity of a sample can be measured by counting how many ionizing events occurred in a period of time, or as a rate (counts per unit time). Alpha, beta, or gamma: where does all the diversity go? (J. John Sepkoski, Jr.) Abstract: global taxonomic richness is affected by variation in three components: within-community, or alpha, diversity; between-community, or beta, diversity; and between-region, or gamma, diversity. A data set consisting of 505 faunal lists was distributed among 40 stratigraphic intervals and six environmental zones. Key difference: alpha radiation can be described as the producer of high-energy, fast-moving helium particles; beta radiation is the producer of fast-moving electrons and can penetrate further than alpha particles; gamma radiations are high-energy radiations in the form of electromagnetic waves, and these radiations do not give off any particle the way alpha and beta do.
Relative to forgotten items, remembered items showed a significant anticorrelation at a positive lag between ATL alpha/beta power and hippocampal slow gamma power (P_fdr = 0.037, d = 0.731; Fig. 4B), where an increase in hippocampal gamma power preceded a decrease in ATL alpha/beta power by 200 to 300 ms. No correlation was observed between ATL alpha/beta power and hippocampal fast gamma power. For the cubic, $\alpha + \beta + \gamma = -b/a$, $\alpha\beta + \beta\gamma + \gamma\alpha = c/a$, and $\alpha\beta\gamma = -d/a$. Let us try to understand how this is derived. In generic terms, $p(x) = ax^3 + bx^2 + cx + d$, where $a$ is not equal to zero, and α, β and γ are the zeros of the polynomial; its factors are then $(x-\alpha)$, $(x-\beta)$ and $(x-\gamma)$. We can compute the PDF and CDF values for failure time $T = 1000$ using the example Weibull distribution with $\gamma = 1.5$ and $\alpha = 5000$: the PDF value is 0.000123 and the CDF value is 0.08556. Functions for computing Weibull PDF values, CDF values, and for producing probability plots are found in both Dataplot code and R code. The relationship of alpha, beta and gamma with the latitudinal gradient of species richness was analyzed by Spearman correlations and linear regressions (Rodríguez et al., 2003) across ecoregions including the Balsas, Central American, Chiapas Depression, Jalisco, Sinaloan, Sonoran–Sinaloan transition subtropical, and Southern Pacific dry forests.
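The quoted Weibull values are easy to reproduce; the sketch below assumes the usual shape–scale parameterization, with shape γ and scale α.

```python
import math

gamma_, alpha_, t = 1.5, 5000.0, 1000.0

z = (t / alpha_) ** gamma_
pdf = (gamma_ / alpha_) * (t / alpha_) ** (gamma_ - 1.0) * math.exp(-z)
cdf = 1.0 - math.exp(-z)
print(round(pdf, 6), round(cdf, 5))   # 0.000123 0.08556
```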
Transistor Construction: a transistor is a 3-layer semiconductor device consisting of either two n-type and one p-type layers of material (an npn transistor) or two p-type and one n-type layers (a pnp transistor). The term "bipolar" reflects the fact that both holes and electrons participate in the injection process into the oppositely polarized material. A single pn junction has two different types of bias: forward and reverse. Difference between Alpha, Beta and Gamma rays: radioactivity is the act of emitting radiation spontaneously, done by an unstable atomic nucleus to attain a more stable configuration by giving up some energy. Radioactivity is a physical, not a biological, phenomenon. During radioactivity, three major types of radiation are emitted by the radioactive particles, namely alpha, beta and gamma. Directional coupling of slow and fast hippocampal gamma with neocortical alpha/beta oscillations in human episodic memory (Benjamin J. Griffiths, George Parish, Frederic Roux, Sebastian Michelmann, Mircea van der Plas, Luca D. Kolibius, Ramesh Chelvarajah, David T. Rollings, Vijay Sawlani, Hajo Hamer, Stephanie Gollwitzer, et al.).
transistors - Alpha/Beta Parameters of BJT - Electrical Engineering Stack Exchange
Partitioning uses a convex metric (the alpha–beta–gamma relationship shares the same units; Lande, 1996); hence the method can be used to quantify the relative contribution of alpha and beta diversity at each spatial scale to gamma diversity (Veech et al., 2002), i.e. the within- and between-community (alpha and beta) diversity relationships across scales (Écoscience, vol. 17(4), 2010).
Brainwave bands (Hertz = cycles per second):
Gamma: 100–38 Hz
Beta: 38–15 Hz
Alpha: 14–8 Hz
Theta: 7–4 Hz
Delta: 3–0.5 Hz
Although logical thinking is often attributed to the left hemisphere and intuitive and creative activities are seen to be located in the right hemisphere, the Mind Mirror EEG in most people is quite symmetrical; both hemispheres are very well connected. 9. The superposition principle indicates synergy between alpha, beta, gamma, theta, and delta oscillations during the performance of sensory-cognitive tasks; integrative brain function operates through the combined action of multiple oscillations. 10. Our results strongly support the recommendation to use several methods in multiple frequency ranges.
Bipolar Transistor Tutorial, The BJT Transisto
So the difference between alpha and gamma, i.e. the beta diversity, is zero - we have the same sales distribution and a total overlap in all locations. In case two we find a low alpha diversity in each location, but a high consolidated gamma diversity taking all locations together: In this case the difference between alpha and gamma diversity, i.e. the beta diversity, is high - we have. Alpha - oxidation Defined as the oxidation of fatty acid (methyl group at beta carbon) with the removal of one carbon unit adjacent to the α carbon from the carboxylic end in the form of CO2 Alpha oxidation occurs in those fatty acids that have a methyl group(CH3) at the beta-carbon, which blocks beta oxidation. Substrate:-Phytanic acid, which is present in milk or derived from phytol present. are recognized, alpha or local diversity (α), beta diver-sity or differentiation β) and gamma or regional diver-sity (γ). Beta diversity, the spatial turnover or change in the identities of species, is a measure of the difference in species composition either between two or more local assemblages or between local and regional assem-blages. For a given level of regional species richness, as. Alpha, beta and gamma radiation can be detected by using magnetic field. Alpha and beta particles have contrary charges-they undergo deflection in opposite direction, whereas gamma rays do not transfer any charge-they do not undergo deflection.The displacement laws of radioactivity governed by Rutherford, which are listed below:- (a) During a α decay, the daughter element was always two. Similarly, beta cutoff frequency (f β) is a particular frequency that occurs when the common emitter current gain (β) value drops to 0.707 of its low frequency value.The common emitter current gain is the ratio of the value of transistor's collector current to the value of transistor's base current in a transistor. Consider the relation between alpha and beta cutoff frequencies
In particular, the relationship between alpha and gamma power was reflected in the amplitude of the BOLD signal, while the relationship between beta and gamma bands was reflected in the latency of BOLD with respect to significant changes in gamma power. These results lay the basis for identifying contributions of different neural pathways to cortical processing using fMRI Most alpha windows are made from 1-mil (0.001) thick Mylar with a coating of conducting material on both sides. In some ionization survey meters, slide-type alpha and beta absorbers are used to permit measuring beta particles in the presence of alpha particles and gamma rays in the presence of beta articles NPN Transistor - BJT Transistor Construction, Working & Applications as Inverter, Switching & Amplifier. When a third doped element is added to a diode in such a way that two PN-junctions are formed, the resulting device is called a transistor. Transistors are smaller than vacuum tubes, and were invented by J. Barden and W.H. Brattain of Bell Laboratories, USA Proteobacteria is a major phylum of Gram-negative bacteria.They include a wide variety of pathogenic genera, such as Escherichia, Salmonella, Vibrio, Helicobacter, Yersinia, Legionellales, and many others. Others are free-living (nonparasitic) and include many of the bacteria responsible for nitrogen fixation.. Carl Woese established this grouping in 1987, calling it informally the purple. Wacker Chemie AG uses dedicated enzymes, that can produce alpha-, beta- or gamma-cyclodextrin specifically. This is very valuable especially for the food industry, as only alpha- and gamma-cyclodextrin can be consumed without a daily intake limit. Crystal structure of a rotaxane with an α-cyclodextrin macrocycle. Derivatives. Interest in cyclodextrins is enhanced because their host-guest.
In case two we find a low alpha diversity in each location, but a high consolidated gamma diversity taking all locations together: In this case the difference between alpha and gamma diversity, i.e. the beta diversity, is high - we have totally different sales distributions among the locations, selling only one, but a different type of drinks in each location - we got totally different. gamma-globin chains, respectively. genes is clinically benign. However, when co-inherited with a beta-thalassaemia mutation, alpha-globin gene duplication leads to a more severe phenotype in beta-thalassaemia patients because it aggravates the balance between alpha- and beta-globin chains. Gene structure and transcript variants The human alpha-globin gene cluster spans about 43 kb and is. 9. The superposition principle indicates synergy between alpha, beta, gamma, theta, and delta oscillations during the performance of sensory-cognitive tasks. Integrative brain function operates through the combined action of multiple oscillations. 10. Our results strongly support the recommendation to use several methods in multiple frequency. Naturally occurring vitamin E exists in eight chemical forms (alpha-, beta-, gamma-, and delta-tocopherol and alpha-, beta-, gamma-, and delta-tocotrienol) that have varying levels of biological activity . Alpha- (or α-) tocopherol is the only form that is recognized to meet human requirements. Serum concentrations of vitamin E (alpha-tocopherol) depend on the liver, which takes up the. 5 Difference Between Deliquescence, Hygroscopic And Efflorescence 7 Difference Between Isotopes And Isobars With Examples 12 Difference Between Starch And Glycoge 3. a. Gamma radiation is most likely to penetrate. Alpha is blocked by paper, and beta by thick plastic, which would imply that body tissue would stop alpha and beta. b. The alpha particles would have to be in direct contact with living tissue to damage the organ. Making Claims 4. a. Lead is most effective. It reduced the levels of all three.
Gratorama.
Heets Alternative.
NLInvesteert.
EBay Banknoten verkaufen.
Haustürgeschäfte Corona.
Ethereum QR code format.
Coinbase COIN.
Check your open network ports.
Cyberpunk 2077 crashes PC.
Manteltarifvertrag Banken 2020 pdf.
Stones to tumble.
FTMO Rechnung schreiben.
Excel sort table by column.
Sims 4 Furniture CC folder.
Publizistik Frankfurt.
Vwu Unfall.
Bathroom color trends 2021.
Gut und Günstig Eis am Stiel.
USA online casinos seed capital no deposit.
Paysafecard pincode gratis.
Was ist die Bundesnetzagentur.
Coffee printer höhle der löwen.
Ontology coin youtube.
Kryptowährung Absturz.
Environment texture space.
XYO Binance.
BISON App Österreich Steuer.
Westlake Chemical Aktie.
HTEC ETF.
ING Verkauf über Fondsgesellschaft.
Historical stock market returns by month.
Ethereum gas price calculator.
Coinbase Paysafecard.
Trex vs Phoenix.
Audius coin kaufen.
Startup Aktien mit Potenzial 2021.
Tor Browser Alpha Android.
Wohnung kündigen Vorlage PDF.
Max Capital Trade. | CommonCrawl |
Classification Accuracy Metric
A classification accuracy metric is a classifier performance metric based on the proportion of the classifier's correct classifications among all of its classifications (on labeled testing records).
It can be calculated by:
(TP+TN)/(TP+TN+FP+FN), for a Two-Class Problem.
counting the correct classifications and dividing by the number of classifications made.
It can (typically) be the inverse of a Classification Error Measure.
It can be estimated by an Accuracy Estimation Process.
It can be reported as the rate at which a case will be labeled with the right category, if the Predictive Model is a Classifier.
It can be reported as the average distance between the predicted label and the correct value, if the Predictive Model is an Estimator.
It is (typically) required that the Test Case be unseen during the Training Phase.
It can be the Inverse Function to the Error Rate Function.
Example(s):
A Classification System may be said to have 85.5% accuracy (to predict whether a customer responds to a promotional campaign).
Counter-Example(s):
a Point Estimator Measure.
a True Positive Rate, or a True Negative Rate, or a False Positive Rate.
a Cross-Entropy Metric.
See: Confusion Matrix; Resubstitution Accuracy; Precision; Recall; F-Measure; Error Rate; Statistical Significance; Cross-validation; Classification Task; Task Performance, Cross-Validation, Bootstrap.
(ML Glossary, 2018) ⇒ (2018). Accuracy. In: Machine Learning Glossary https://developers.google.com/machine-learning/glossary/ Retrieved 2018-04-22.
QUOTE: The fraction of predictions that a classification model got right. In multi-class classification, accuracy is defined as follows:
[math]\text{Accuracy} =\frac{\text{Correct Predictions}} {\text{Total Number Of Examples}}[/math]
In binary classification, accuracy has the following definition:
[math]\text{Accuracy} = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total Number Of Examples}}[/math]
See true positive and true negative.
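As a concrete illustration of the two formulas above, here is a minimal sketch in Python; the function names and example numbers are illustrative, not part of the glossary:

```python
def binary_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """(TP + TN) / (TP + TN + FP + FN) for a two-class problem."""
    return (tp + tn) / (tp + tn + fp + fn)

def accuracy(y_true, y_pred) -> float:
    """Correct predictions divided by the total number of examples."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Example: 80 TP, 90 TN, 10 FP, 20 FN  ->  170/200 = 0.85
print(binary_accuracy(80, 90, 10, 20))         # 0.85
print(accuracy([0, 1, 2, 1], [0, 2, 2, 1]))    # 0.75
```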
(Sammut & Webb, 2017) ⇒ (2017) Accuracy. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA
QUOTE: Accuracy refers to a measure of the degree to which the predictions of a model matches the reality being modeled. The term accuracy is often applied in the context of classification models. In this context, [math]accuracy = P(\lambda(X) = Y )[/math], where [math]XY[/math] is a joint distribution and the classification model [math]\lambda[/math] is a function [math]X \rightarrow Y[/math]. Sometimes, this quantity is expressed as a percentage rather than a value between 0.0 and 1.0.
The accuracy of a model is often assessed or estimated by applying it to test data for which the labels ([math]Y[/math] values) are known. The accuracy of a classifier on test data may be calculated as number of correctly classified objects/total number of objects. Alternatively, a smoothing function may be applied, such as a Laplace estimate or an m-estimate.
Accuracy is directly related to error rate, such that [math]accuracy = 1.0 - error\; rate[/math] (or when expressed as a percentage, [math]accuracy = 100 - error\; rate[/math]).
(Melli, 2002) ⇒ Gabor Melli. (2002). "PredictionWorks' Data Mining Glossary.
Accuracy: The measure of a model's ability to correctly label a previously unseen test case. If the label is categorical (classification), accuracy is commonly reported as the rate at which a case will be labeled with the right category. For example, a model may be said to predict whether a customer responds to a promotional campaign with 85.5% accuracy. If the label is continuous, accuracy is commonly reported as the average distance between the predicted label and the correct value. For example, a model may be said to predict the amount a customer will spend on a given month within $55. See also Accuracy Estimation, Classification, Estimation, Model, and Statistical Significance.
(Kohavi & Provost, 1998) ⇒ Ron Kohavi, and Foster Provost. (1998). "Glossary of Terms." In: Machine Learning 30(2-3).
Accuracy (error rate): The rate of correct (incorrect) predictions made by the model over a data set (cf. coverage). Accuracy is usually estimated by using an independent test set that was not used at any time during the learning process. More complex accuracy estimation techniques, such as cross-validation and the bootstrap, are commonly used, especially with data sets containing a small number of instances.
Human amniotic mesenchymal stromal cells (hAMSCs) as potential vehicles for drug delivery in cancer therapy: an in vitro study
Arianna Bonomi1,
Antonietta Silini2,
Elsa Vertua2,
Patrizia Bonassi Signoroni2,
Valentina Coccè1,
Loredana Cavicchini1,
Francesca Sisto1,
Giulio Alessandri3,
Augusto Pessina1 &
Ornella Parolini2
Stem Cell Research & Therapy, volume 6, Article number: 155 (2015)
In the context of drug delivery, mesenchymal stromal cells (MSCs) from bone marrow and adipose tissue have emerged as interesting candidates due to their homing abilities and capacity to carry toxic loads, while at the same time being highly resistant to the toxic effects. Amongst the many sources of MSCs which have been identified, the human term placenta has attracted particular interest due to its unique, tissue-related characteristics, including its high cell yield and virtually absent expression of human leukocyte antigens and co-stimulatory molecules. Under basal, non-stimulatory conditions, placental MSCs also possess basic characteristics common to MSCs from other sources. These include the ability to secrete factors which promote cell growth and tissue repair, as well as immunomodulatory properties. The aim of this study was to investigate MSCs isolated from the amniotic membrane of human term placenta (hAMSCs) as candidates for drug delivery in vitro.
We primed hAMSCs from seven different donors with paclitaxel (PTX) and investigated their ability to resist the cytotoxic effects of PTX, to upload the drug, and to release it over time. We then analyzed whether the uptake and release of PTX was sufficient to inhibit proliferation of CFPAC-1, a pancreatic tumor cell line sensitive to PTX.
For the first time, our study shows that hAMSCs are highly resistant to PTX and are not only able to uptake the drug, but also release it over time. Moreover, we show that PTX is released from hAMSCs in a sufficient amount to inhibit tumor cell proliferation, whilst some of the PTX is also retained within the cells.
Taken together, for the first time our results show that placental stem cells can be used as vehicles for the delivery of cytotoxic agents.
In addition to the well-known ability of bone marrow mesenchymal stromal cells (MSCs) to differentiate and exert immunomodulatory effects which make them useful for applications in regenerative medicine, these cells can also migrate to inflammatory microenvironments [1] and tumors [2]. The ability of MSCs to home to sites of injury has brought many researchers to study these cells as vehicles for the delivery of anti-cancer agents to the tumor site. To this end, both gene-modified as well as wild-type MSCs have been used. MSCs have been genetically modified to over-express several anti-tumor factors, such as interleukins, interferons, pro-drugs, oncolytic viruses, anti-angiogenic agents, pro-apoptotic proteins, and growth factor antagonists [3]. Despite promising results in animal models, the genetic manipulation of MSCs for clinical application is not risk-free [4]. We have recently demonstrated that MSCs can behave as chemotherapeutic carriers without genetic manipulation. This was observed for MSCs from bone marrow [5, 6], adipose tissue [7], and dermal fibroblasts [8].
Bone marrow is the best characterized source of adult stem cells; unfortunately, the harvesting procedure is highly invasive and the number, differentiation potential, and maximum life span of MSCs obtained from this tissue significantly decline with the age of the donor [9]. In comparison, placenta is a very attractive MSC source due to its easy, non-invasive, and ethically uncontroversial procurement.
The human amniotic membrane from term placenta has been recently recognized as a valuable source of mesenchymal stromal cells, referred to as hAMSCs [10–12]. hAMSCs have attracted much attention due to their immunomodulatory properties [13], and also due to their paracrine actions and potential applications in regenerative medicine [14]. Interestingly, studies have shown that hAMSCs interact with and modulate the functions of a wide variety of immune cells both in vitro [15–19] and in vivo [20]. Moreover, we have recently demonstrated that hAMSCs can inhibit tumor cell proliferation in vitro [21]. This occurred through cell cycle arrest in the G0/G1 phase, and affected hematopoietic [lymphoid (KG1a, Jurkat), myeloid (KG1, U937)], and non-hematopoietic (Girardi heart, Hela, Saos) tumor cells. Owing to this property and to the ability of amnion-derived stem cells to target tumor sites [22], herein we investigated if hAMSCs were able to uptake the chemotherapeutic agent paclitaxel, and thus be considered as a means of drug delivery for anti-tumor therapy.
Human term placentae (n = 7) were collected from healthy women after vaginal delivery or caesarean section. Samples were collected after having obtained informed written consent according to the guidelines set by the Ethics Committee for the Institution of Catholic Hospitals (CEIOC). The research project was authorized by Fondazione Poliambulanza.
Isolation, culture, expansion, and characterization of hAMSC
Human term placentas were processed immediately after birth, as previously described [18]. Briefly, the amnion was manually separated from the chorion and washed extensively in 0.9 % NaCl containing 100 U/ml penicillin and 100 μg/ml streptomycin (both from Sigma-Aldrich, St. Louis, MO, USA) and 2.5 mg/ml amphotericin B (Sigma-Aldrich, St. Louis, MO, USA). Afterwards, the amnion was cut into small pieces (3 × 3 cm²). Amnion fragments were sterilized by a brief incubation in 0.9 % NaCl + 2.5 % Eso Jod (Esoform, Italy) and 3 minutes in PBS containing 500 U/ml penicillin, 500 μg/ml streptomycin, 12.5 μg/ml amphotericin B and 1.87 mg/ml Cefamezin (Teva Italia Srl, Assago, Italy). Sterilized amnion fragments were then incubated for 9 minutes at 37 °C in HBSS (Sigma-Aldrich, St. Louis, MO, USA) containing 2.5 U/ml dispase (VWR International Srl, Milan, Italy). The fragments were digested in complete RPMI 1640 medium (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 0.94 mg/ml collagenase (Roche, Mannheim, Germany) and 10 μg/ml DNase (Roche, Mannheim, Germany) for 2.5−3.0 hours at 37 °C. Amnion epithelium fragments were then removed by low-g centrifugation, and mobilized MSC were passed through 100-μm and 70-μm cell strainers and collected by centrifugation. These cells are referred to as human amniotic mesenchymal stromal cells (hAMSCs).
To obtain cells at different passages, freshly isolated P0 hAMSCs were plated at a density of 50 × 10³/cm². hAMSCs were cultured at 37 °C and 5 % CO2 in DMEM complete medium supplemented with 10 % heat-inactivated fetal bovine serum (FBS, Sigma-Aldrich, St. Louis, MO, USA), 2 mM L-glutamine (Sigma-Aldrich, St. Louis, MO, USA), 100 U/ml penicillin and 100 μg/ml streptomycin. For phenotype evaluation, hAMSCs were trypsinized and subsequently washed with FACS buffer (0.1 % sodium azide (Sigma-Aldrich) and 0.1 % FBS (Sigma-Aldrich) in PBS). Cells were incubated for 20 minutes at 4 °C with anti-human fluorescein isothiocyanate (FITC)-, phycoerythrin (PE)- or allophycocyanin (APC)-conjugated antibodies, or isotype controls (specified below), with 20 mg/ml polyglobin (Gammagard®, Baxter, IL, USA) prepared in PBS with 1 % BSA to block non-specific binding. After incubation cells were washed with FACS buffer. Dead cells were gated out by propidium iodide (PI) staining (for cell surface staining). The clones and suppliers of the monoclonal antibodies used are as follows: CD44 (clone L178), CD73 (AD2), CD90 (5E10), CD45 (2D1), HLA-DR (TU36), CD105 (266), CD13 (L138), and HLA-ABC (G46-2.6) were all purchased from BD Bioscience, San Jose, CA, USA.
Intracellular P-glycoprotein (P-gp) expression was analyzed using a mouse anti-human monoclonal antibody (clone JSB-1, Chemicon International Merck Millipore, Billerica, MA, USA). Briefly, cells were fixed and permeabilized using BD CytoFix/CytoPerm (BD Biosciences, San Jose, CA, USA) for 20 minutes at 4 °C, washed twice with Perm/Wash Buffer 1X (BD Biosciences, San Jose, CA, USA), and incubated with P-gp antibody for 25 minutes at room temperature. After two washes in Perm/Wash Buffer 1X, cells were incubated with goat anti-mouse polyclonal immunoglobulins/RPE Goat F(ab')2 (DAKO Corporation, Denmark), and washed prior to acquisition. Cells were acquired on a FACS Calibur machine using CellQuest Software (BD Biosciences, San Jose, CA, USA) and results were analyzed using FCS Express 4 (De Novo Software, Los Angeles, CA, USA). IgG1 (clone X40, BD Biosciences, San Jose, CA, USA) and IgG2b (clone MG2b-57, Biolegend, San Diego, CA, USA) were used as isotype controls. Quantification of P-gp expression was performed by determining the mean fluorescence intensity (MFI) ratio as follows: MFI of P-gp/MFI isotype control.
Sensitivity of hAMSCs to Paclitaxel
Paclitaxel (PTX) was purchased from AdipoGen (Vinci-Biochem, Vinci, Italy), diluted in dimethylsulfoxide to a concentration of 5 mg/ml, and stored at −20 °C in 5-μl aliquots. PTX was thawed immediately prior to use and diluted in culture medium to obtain the desired concentration.
The anti-proliferative and cytotoxic effects of PTX on hAMSCs were evaluated in 96-multiwell plates (Corning, Corning, NY, USA) by first seeding 2,000 and 10,000 cells/well, respectively, in 100 μl/well of complete medium. The cells were then incubated for 24 hours (cytotoxicity test) or 7 days (anti-proliferative assay) with 10-fold dilutions of PTX (from 1 ng/ml to 10,000 ng/ml). At the end of the incubation, cell proliferation and viability were evaluated by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, as previously described [5]. The inhibitory concentrations (IC50 and IC90) were determined according to the Reed and Muench formula [6] or by linear regression analysis.
Paclitaxel priming of hAMSCs
Sub-confluent cultures (3−4 × 10⁵ cells) of hAMSCs were exposed to 2,000 ng/ml of PTX. Twenty-four hours later, the cells were collected, counted, and seeded at a concentration of 10⁵ cells/ml, according to a previously described protocol [5]. Conditioned media from primed hAMSCs (hAMSCsPTX-CM) were collected after 48 hours, centrifuged at 2,500 g for 15 minutes to discard cell debris, aliquoted, and stored at −70 °C. The remaining cells were trypsinized and then lysed by resuspension (10⁶ cells/ml) in bi-distilled water and four freeze/thaw cycles. After centrifugation at 2,500 g for 15 minutes, cell debris was discarded and the lysates (hAMSCsPTX-LYS) were aliquoted and stored at −70 °C.
In order to evaluate the release of PTX over time, hAMSCsPTX-CM were collected at different timepoints (48, 72, 120 hours), and after each collection hAMSCsPTX-CM was replaced with fresh medium. Both conditioned media (CM) and lysates (LYS) were tested in vitro for their anti-proliferative activity against CFPAC-1, a human ductal pancreatic adenocarcinoma cell line highly sensitive to PTX. The values obtained were normalized by CM and LYS from untreated hAMSCs.
Modulation of hAMSC sensitivity to PTX with verapamil
Verapamil (VP), a P-gp inhibitor, was purchased as a 5.5 mM solution for i.v. injection (Isoptin, Abbott, Rome Italy). The modulation of PTX sensitivity was evaluated through a proliferation assay as reported above. Cells were seeded in the presence of increasing PTX concentrations and 20 μM of VP, a dose previously demonstrated to affect PTX sensitivity in a murine bone marrow stromal cell line [5].
In vitro anti-proliferative assay on CFPAC-1 of PTX, CM and LYS from PTX-primed hAMSCs
The effects of PTX, hAMSCsPTX-CM, and hAMSCsPTX-LYS were studied on CFPAC-1 using an MTT assay. Briefly, two-fold serial dilutions of pure PTX, PTX-CM, or PTX-LYS were prepared in 100 μl of culture medium/well in 96-multiwell plates (Corning, USA) and then 1,000 tumor cells were added to each well. Tumor cell viability was evaluated by the MTT assay after 7 days of incubation at 37 °C and 5 % CO2. The percentages of viability were calculated by dividing the optical density of tumor cells grown in PTX-CM or PTX-LYS by the optical density found in cells grown in the same dilution of CM or LYS obtained from control hAMSCs. The anti-tumor activity of PTX-CM and PTX-LYS was compared to that of pure PTX and expressed as paclitaxel equivalent concentration (PECCM and PECLYS, respectively) according to the following algorithm:
$$ \text{PEC (ng/ml)} = (\mathrm{IC}_{50}\mathrm{PTX} / V_{50}) \times 100. $$
V50 is the volume (μl/well) of CM or LYS able to inhibit CFPAC-1 proliferation by 50 %; IC50PTX is the concentration (ng/ml) of pure paclitaxel able to inhibit CFPAC-1 proliferation by 50 %. The amount of PTX internalized and released by a single hAMSC cell (PECCM, picograms (pg)/cell) and the amount of PTX internalized and retained inside each hAMSC (PECLYS, pg/cell) was determined as follows:
$$ \text{PEC (pg/cell)} = \text{PEC (ng/ml)} \times \text{CM or LYS volume (ml)} \times 1000 \,/\, \text{number of cells seeded}. $$
The sum PECCM + PECLYS, both expressed as pg/cell, indicates the total amount of PTX incorporated by a single cell after 24 hours of exposure to 2,000 ng/ml of PTX.
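To make the arithmetic concrete, the following minimal Python sketch reproduces the two PEC formulas; the function names and example values are illustrative only, chosen to be of the same order of magnitude as the values reported below:

```python
def pec_ng_per_ml(ic50_ptx_ng_ml: float, v50_ul_per_well: float) -> float:
    """PEC (ng/ml) = (IC50 of pure PTX / V50 of CM or LYS) x 100."""
    return ic50_ptx_ng_ml / v50_ul_per_well * 100

def pec_pg_per_cell(pec_ng_ml: float, volume_ml: float, cells_seeded: int) -> float:
    """PEC (pg/cell) = PEC (ng/ml) x volume (ml) x 1000 / number of cells seeded."""
    return pec_ng_ml * volume_ml * 1000 / cells_seeded

# Illustrative values: IC50 of pure PTX ~4 ng/ml, V50 of CM ~8 ul/well,
# 1 ml of CM collected from 1e5 seeded cells.
pec_cm = pec_ng_per_ml(4.0, 8.0)              # 50 ng/ml
print(pec_pg_per_cell(pec_cm, 1.0, 100_000))  # 0.5 pg/cell
```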
Comparison between different hAMSC donors was performed by a multiple comparison post-test (one-way analysis of variance (ANOVA)) and p values <0.05 were considered significant. Values represent mean ± standard deviation (SD).
hAMSC characterization
hAMSCs were used at either passages 3 or 4 and were analyzed for phenotype and morphology. Figure 1 reports the phenotype of hAMSCs, which is in line with previously published studies showing the expression of mesenchymal lineage markers CD90, CD73, CD44, CD13, HLA-ABC, and CD105, and absence of CD45 and HLA-DR [15, 23]. Interestingly, we did not notice any substantial differences in marker expression between unprimed (Fig. 1a) and PTX-primed (Fig. 1b) hAMSCs.
Characterization of human amniotic mesenchymal stromal cells (hAMSCs). Phenotype of unprimed (a) and paclitaxel (PTX)-primed (b) hAMSCs. The percentage of positive cells is indicated in each plot. Cell morphology is shown in panel c, magnification × 4. The images on the left show unprimed hAMSC (top) or hAMSC after the 24-hour treatment with 2,000 ng/ml of PTX (bottom). The images on the right show unprimed hAMSC (top) or hAMSC at the time at which conditioned media and lysates were collected and tested for their anti-proliferative activity against CFPAC-1 (bottom, 48 hrs)
The morphology of unprimed and PTX-primed hAMSCs is shown in Fig. 1c. Twenty-four hours after the addition of PTX, hAMSCs were more enlarged and had increased granularity when compared to their unprimed controls. After 48 hours, PTX-primed cells recovered their fibroblast-like morphology, similar to their unprimed counterparts.
hAMSC sensitivity to PTX
hAMSCs were highly resistant to PTX cytotoxicity when evaluated 24 hours after treatment. Their viability was >90 % even at the highest PTX concentration tested (10 μg/ml), (Fig. 2a). Based on these results, we established that a PTX treatment time of 24 hours was suitable, and that a 2,000 ng/ml dose would be used for experiments with hAMSCs. This is in accordance with a previous study that used the same concentration and treatment duration for priming bone marrow (BM)-MSCs with PTX [5].
Paclitaxel (PTX) sensitivity of human amniotic mesenchymal stromal cells (hAMSCs). a Twenty-four-hour cytotoxicity assay of hAMSCs in the presence of PTX. Bars represent mean value ± SD for five donors. b Seven-day anti-proliferation assay of hAMSCs (seven donors) in the presence of 10-fold serial dilutions of PTX (linear regression analysis). c Paclitaxel half maximal inhibitory concentration (IC50) values (expressed as ng/ml) assessed by linear regression analysis in the anti-proliferation assay. For each donor, the mean value ± SD of at least two independent experiments is reported. The analysis of IC50 values by a multiple comparison test showed that only the IC50 of donor 5 is significantly different from the mean value (*)
Concerning the effects of PTX on the proliferation of hAMSCs (Fig. 2b and c), we observed significant heterogeneity in PTX sensitivity amongst hAMSCs from the seven donors: the IC50 values ranged from 34.85 ng/ml to 659.12 ng/ml (Fig. 2c).
Evaluation of PTX release from primed hAMSCs
After having observed that hAMSCs were resistant to the cytotoxic effect of PTX, we then sought to investigate if they could take up and release the drug in culture, a characteristic previously observed using MSC from other sources [5]. To this end, we evaluated the release of the drug over time by collecting and replacing culture medium at different time intervals. hAMSCsPTX-CM was collected at 48, 72, and 120 hours after priming with PTX (Fig. 3a). For the four out of seven donors tested, we observed that the release of the drug was highest after 48 hours, and then decreased over time. PTX was detected in the CM collected from hAMSCs cultured for up to 120 hours after priming. In order to investigate if the PTX released from hAMSCs was sufficient to inhibit tumor cell proliferation, hAMSCs were subcultured for an additional 48 hours after priming, which represented the time point at which we observed the highest release of the drug and is also in accordance with a previously described protocol [5]. The anti-proliferative potential of hAMSCsPTX-CM was evaluated on CFPAC-1, a human pancreatic adenocarcinoma cell line highly sensitive to PTX (IC50 = 3.97 ± 4.48 ng/ml, n = 47), and compared to pure PTX (Fig. 3b). The hAMSCsPTX-CM from all seven donors produced a dose-dependent, anti-proliferative effect on CFPAC-1 (Fig. 3b).
Paclitaxel (PTX) uptake/release by human amniotic mesenchymal stromal cells (hAMSCs). a Release of PTX over time evaluated in four out of seven donors. Bars represent the amount of PTX (expressed as picograms/cell) released at each time point, and the curve expresses the total amount of PTX released over time. b Proliferation curves of CFPAC-1 in the presence of serial dilutions of PTX (white circles) or conditioned media from PTX-primed hAMSCs (hAMSCsPTX-CM) (black circles). Each point represents the mean value ± SD. The curve of hAMSCsPTX-CM represents mean values obtained from seven donors. The half maximal inhibitory concentration (IC50) and the volume of CM able to inhibit tumor growth by 50 % (V50) are shown. PEC paclitaxel equivalent concentration
The highest release of PTX was observed after 48 hours and represented approximately one half (59.3 %) of the incorporated drug, which was comparable to a release of approximately 0.51 ± 0.29 pg/cell (Fig. 3a). This suggests that some PTX was retained by the cells and not released, an observation previously also reported for MSCs from bone marrow [5]. To evaluate the amount of PTX internalized but not released into hAMSCsPTX-CM, at the end of the 48 hours of subculture and after the collection of hAMSCsPTX-CM (release phase), cells were trypsinized and lysed. The presence of PTX in the lysates (hAMSCsPTX-LYS) was then tested by analyzing the effects of hAMSCsPTX-LYS on the proliferation of CFPAC-1 tumor cells. Figure 4a shows that hAMSCsPTX-LYS was able to inhibit CFPAC-1 proliferation with a trend similar to hAMSCsPTX-CM, suggesting that a proportion of the internalized PTX is not released into the culture medium, but rather, is retained inside primed cells. The difference observed between hAMSCsPTX-LYS and hAMSCsPTX-CM in inhibiting CFPAC-1 tumor cell proliferation was not significant, and could be influenced by the number of hAMSCs and the volume of diluent used for the preparations of CM and LYS. By considering the PEC values found in hAMSCsPTX-CM and hAMSCsPTX-LYS, we calculated the percentages of PTX released and retained by the cells, respectively. As shown in Fig. 4b, more than 50 % of the incorporated PTX was released by the primed cells during the subculture phase (59.02 ± 63.56 %, mean value obtained with five out of seven donors tested) and the remaining amount (40.98 ± 36.81 %) was retained inside hAMSCs primed with PTX.
Evaluation of paclitaxel (PTX) internalized by human amniotic mesenchymal stromal cells (hAMSCs) but not released into culture medium. a Proliferation curves of CFPAC-1 in the presence of serial dilutions of conditioned media from PTX-primed hAMSCs (hAMSCsPTX-CM) (solid line) or lysates from PTX-primed hAMSCs (hAMSCsPTX-LYS) (dashed line). Five different hAMSC donors were tested. b The graph shows the amount of PTX incorporated and released by hAMSCs (CM) and the amount incorporated and retained inside the cells (LYS), expressed as percentages of the total incorporated PTX, considered 100 %. Bars represent the mean values ± SD. Five different hAMSC donors were tested. The difference between CM and LYS was not statistically significant (p >0.05)
Effect of verapamil on PTX sensitivity and uptake/release ability of hAMSCs
P-gp has been described to be associated with drug resistance through an increased drug efflux from tumor cells [24]. In order to investigate whether P-gp underlies the mechanism by which hAMSC are resistant to PTX, we first analyzed P-gp expression in hAMSCs from the seven donors. P-gp was expressed in six out of seven hAMSC donors analyzed, with a mean ratio of fluorescence intensity (MFI) of 2.2 ± 0.39 (Fig. 5a). Next, we investigated if blocking the pump with verapamil (VP), an inhibitor of P-gp, could alter the sensitivity of hAMSCs to PTX. As shown in Fig. 5b, the presence of VP had no effect on hAMSC sensitivity to the anti-proliferative activity of PTX. Furthermore, as shown in Fig. 5c and d, the presence of 20 μM VP during the PTX uptake phase did not significantly alter the amount of drug internalized and subsequently released by hAMSCs. In fact, in line with previous observations (Fig. 4b), 59.19 % of the incorporated PTX was released into the culture medium and 40.81 % was retained inside the cells (Fig. 5e).
Effect of verapamil on paclitaxel (PTX) toxicity and PTX uptake/release by human amniotic mesenchymal stromal cells (hAMSCs). a P-glycoprotein (P-gp) expression is represented as the ratio of mean fluorescence intensity (MFI) for each donor: nd not determined. b Proliferation of hAMSCs in the presence of PTX and 20 μM verapamil (VP). Half maximal inhibitory concentration (IC50) values (mean ± SD) were calculated by linear regression analysis. c Proliferation curves of CFPAC-1 in the presence of serial dilutions of PTX (white circles), conditioned media from PTX-primed hAMSCs (hAMSCsPTX-CM) (black circles) or hAMSCsPTX-CM collected from cells primed with PTX in the presence of 20 μM VP (black triangles). d Proliferation curves of CFPAC-1 in the presence of serial dilutions of hAMSCsPTX-CM (solid line) or hAMSCsPTX-LYS (dashed line) from PTX primed hAMSCs. Both CM and LYS were obtained from hAMSCs primed with PTX in the presence of 20 μM VP. e Amount of PTX incorporated and released by hAMSCs (CM) and the amount incorporated and retained inside the cells (LYS), expressed as percentages of the total incorporated PTX, considered 100 %. hAMSCs were primed in the presence of 20 μM VP. Bars represent the mean values ± SD. The difference between CM and LYS was not statistically significant (p >0.05). To evaluate the effect of VP, hAMSCs from two donors were used
For the first time we demonstrate that MSCs from the amniotic membrane of human term placenta can be loaded with PTX and can release the drug over time. Notably, the drug released from hAMSCs is able to inhibit tumor cell proliferation in vitro. The findings described herein make these cells interesting candidates for drug delivery vehicles, also considering that they are able to inhibit tumor cell proliferation per se under specific culture conditions in vitro [21].
We show that hAMSCs are resistant to the cytotoxic effect of PTX, a drug known for its strong anti-tumor [25] and anti-angiogenic activities [26], and currently used to treat advanced solid tumors [27–30]. Resistance to PTX has been reported in MSC from other sources (BM [5], adipose tissue [7] and dermal fibroblasts [8]).
We observed significant heterogeneity in the ability of PTX to inhibit proliferation of hAMSCs from seven healthy donors; indeed, the range of IC50 values was 34.85−659.12 ng/ml. Interestingly, placental MSCs from all seven donors had higher resistance to PTX when compared to MSCs from other sources. In our previous studies, MSCs from alternative sources had more homogeneous PTX sensitivity, regardless of the donors. IC50 values for BM-MSCs, AT-MSCs, and dermal fibroblasts were 4.07 ± 1.75 ng/ml [5], 2.55 ± 1.02 ng/ml [7], and 7.01 ± 2.17 ng/ml [8], respectively.
Even though most of the incorporated drug was released within 48 hours, it is interesting to note that drug was released into the culture medium for up to 120 hours after priming. Although the mechanism of PTX binding to microtubules has been extensively studied [25], very little is known about the molecular mechanisms at the basis of the drug resistance of MSCs, or the capacity of these cells to accumulate and release PTX. In previous experiments, we demonstrated the expression of P-gp, the first discovered and the best-characterized of drug-efflux transporters, by human BM-MSCs [5]. Over the last few years, several studies have been performed to better understand the role that placenta plays in distributing pharmacological agents within the maternal and fetal compartments [31] and the presence of several drug efflux proteins in placenta has been demonstrated [32]. For example, the expression of the breast cancer resistant protein (BCRP) has been previously described [33], while other authors confirmed the presence of P-gp in syncytiotrophoblast cells [34, 35]. It is interesting to note that, despite its presence, P-gp protein does not seem to have a role in PTX transport in human placenta [36]. The lack of correlation between P-gp expression and PTX transport could be explained by the inverse relationship between protein expression and activity, and by the polymorphism of the MDR1 gene [37]. Furthermore, placenta is known to express a spectrum of metabolizing enzymes [37]; among them are drug-metabolizing CYP enzymes (such as CYP1A and CYP2E1). Further studies are therefore warranted to better clarify other mechanisms of resistance, which could also be acting in placental MSCs, such as mutations in the tubulin gene [38], presence of different tubulin isotypes [39], or altered dynamics of microtubules [40]. Studies to verify the possible role of survivin, which has been shown to regulate cell division and/or survival in the presence of Taxol [41], would also provide relevant insight.
Notwithstanding the mechanism by which hAMSCs take up and secrete PTX, our data demonstrate for the first time that through a simple process of in vitro priming, these cells incorporate PTX in an amount sufficient to inhibit tumor cell proliferation in vitro.
Amongst the different MSC sources investigated and identified over the years, the human term placenta has drawn increased interest mainly due to its non-invasive procurement and large cell yield. Placental MSCs also share basic properties with MSCs from other sources, such as bone marrow. In addition, they offer significant advantages for application in the clinic due to their immunomodulatory capacities [15, 19, 20], making them very attractive for transplantation in allogeneic settings. Therefore, in addition to the advantages of using placenta as a source of MSCs, their ability to take up and release PTX over time in a sufficient amount to inhibit tumor cell proliferation could surely have a significant impact in the context of targeted cancer therapy.
Herein, we demonstrate that mesenchymal stromal cells from the amniotic membrane of human term placenta (hAMSCs) are highly resistant to the cytotoxicity of PTX. Of note, hAMSCs are able to take up, retain, and release PTX, as shown by the anti-proliferative effects exerted by lysates and conditioned medium obtained from PTX-primed hAMSCs on tumor cells in vitro. Interestingly, we also show that P-gp, even though expressed by hAMSCs, does not seem to be implicated in hAMSC resistance to PTX, as shown by the fact that blocking P-gp with verapamil had no effect on hAMSC sensitivity to the anti-proliferative activity of PTX. Furthermore, P-gp inhibition did not significantly alter the amount of drug internalized and subsequently released by hAMSCs.
Taken together, our results show that placental stem cells can be used as vehicles for delivery of cytotoxic agents, thus putting forth a new potential strategy for the delivery of cytotoxic loads to tumors, and at the same time contributing to our understanding of placental MSC, a rapidly evolving field of interest.
ANOVA:
analysis of variance
APC:
allophycocyanin
BM:
bone marrow
BCRP:
breast cancer resistant protein
BSA:
bovine serum albumin
CYP:
cytochrome
DMEM:
Dulbecco's modified Eagle medium
FACS:
fluorescence automated cell sorting
FBS:
fetal bovine serum
FITC:
fluorescein isothiocyanate
hAMSC:
human amniotic mesenchymal stromal cells
hAMSCsPTX-CM:
conditioned media from paclitaxel-primed hAMSCs
hAMSCsPTX-LYS:
lysates from paclitaxel-primed hAMSCs
HBSS:
Hank's Balanced Salt Solution
IC50 :
half maximal inhibitory concentration
MDR1:
multi-drug resistance
MFI:
mean fluorescence intensity
MSC:
mesenchymal stromal cells
MTT:
(3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide)
PBS:
phosphate-buffered saline
PE:
phycoerythrin
PEC:
paclitaxel equivalent concentration
P-gp:
P-glycoprotein
PI:
propidium iodide
PTX:
paclitaxel
RPMI:
Roswell Park Memorial Institute medium
V50 :
volume of conditioned media or lysates able to inhibit tumor growth by 50 %
Moodley Y, Vaghjiani V, Chan J, Baltic S, Ryan M, Tchongue J, et al. Anti-inflammatory effects of adult stem cells in sustained lung injury: a comparative study. PLoS One. 2013;8:e69299.
Belmar-Lopez C, Mendoza G, Oberg D, Burnet J, Simon C, Cervello I, et al. Tissue-derived mesenchymal stromal cells used as vehicles for anti-tumor therapy exert different in vivo effects on migration capacity and tumor growth. BMC Med. 2013;11:139.
Shah K. Mesenchymal stem cells engineered for cancer therapy. Adv Drug Deliv Rev. 2012;64:739–48.
Zhang XB, Beard BC, Trobridge GD, Wood BL, Sale GE, Sud R, et al. High incidence of leukemia in large animals after stem cell gene therapy with a HOXB4-expressing retroviral vector. J Clin Invest. 2008;118:1502–10.
Pessina A, Bonomi A, Cocce V, Invernici G, Navone S, Cavicchini L, et al. Mesenchymal stromal cells primed with paclitaxel provide a new approach for cancer therapy. PLoS One. 2011;6:e28321.
Pessina A, Cocce V, Pascucci L, Bonomi A, Cavicchini L, Sisto F, et al. Mesenchymal stromal cells primed with Paclitaxel attract and kill leukaemia cells, inhibit angiogenesis and improve survival of leukaemia-bearing mice. Br J Haematol. 2013;160:766–78.
Bonomi A, Cocce V, Cavicchini L, Sisto F, Dossena M, Balzarini P, et al. Adipose tissue-derived stromal cells primed in vitro with paclitaxel acquire anti-tumor activity. Int J Immunopathol Pharmacol. 2013;26:33–41.
Pessina A, Cocce V, Bonomi A, Cavicchini L, Sisto F, Ferrari M, et al. Human skin-derived fibroblasts acquire in vitro anti-tumor potential after priming with Paclitaxel. Anticancer Agents Med Chem. 2013;13:523–30.
Stolzing A, Jones E, McGonagle D, Scutt A. Age-related changes in human bone marrow-derived mesenchymal stem cells: consequences for cell therapies. Mech Ageing Dev. 2008;129:163–73.
Parolini O, Alviano F, Bagnara GP, Bilic G, Buhring HJ, Evangelista M, et al. Concise review: isolation and characterization of cells from human term placenta: outcome of the first international Workshop on Placenta Derived Stem Cells. Stem Cells. 2008;26:300–11.
Parolini O, Caruso M. Review: Preclinical studies on placenta-derived cells and amniotic membrane: an update. Placenta. 2011;32:S186–95.
Pozzobon M, Piccoli M, De Coppi P. Stem cells from fetal membranes and amniotic fluid: markers for cell isolation and therapy. Cell Tissue Bank. 2014;15:199–211.
Manuelpillai U, Moodley Y, Borlongan CV, Parolini O. Amniotic membrane and amniotic cells: potential therapeutic tools to combat tissue inflammation and fibrosis? Placenta. 2011;32:S320–5.
Silini A, Parolini O, Huppertz B, Lang I. Soluble factors of amnion-derived cells in treatment of inflammatory and fibrotic pathologies. Curr Stem Cell Res Ther. 2013;8:6–14.
Magatti M, Caruso M, De Munari S, Vertua E, De D, Manuelpillai U, et al. Human amniotic membrane-derived mesenchymal and epithelial cells exert different effects on monocyte-derived dendritic cell differentiation and function. Cell Transplant, 2014.
Magatti M, De Munari S, Vertua E, Gibelli L, Wengler GS, Parolini O. Human amnion mesenchyme harbors cells with allogeneic T-cell suppression and stimulation capabilities. Stem Cells. 2008;26:182–92.
Magatti M, De Munari S, Vertua E, Nassauto C, Albertini A, Wengler GS, et al. Amniotic mesenchymal tissue cells inhibit dendritic cell differentiation of peripheral blood and amnion resident monocytes. Cell Transplant. 2009;18:899–914.
Rossi D, Pianta S, Magatti M, Sedlmayr P, Parolini O. Characterization of the conditioned medium from amniotic membrane cells: prostaglandins as key effectors of its immunomodulatory activity. PLoS One. 2012;7:e46956.
Pianta S, Bonassi Signoroni P, Muradore I, Rodrigues MF, Rossi D, Silini A, et al. Amniotic membrane mesenchymal cells-derived factors skew T cell polarization toward Treg and downregulate Th1 and Th17 cells subsets. Stem Cell Rev. 2015;11:394–407.
Parolini O, Souza-Moreira L, O'Valle F, Magatti M, Hernandez-Cortes P, Gonzalez-Rey E, et al. Therapeutic effect of human amniotic membrane-derived cells on experimental arthritis and other inflammatory disorders. Arthritis Rheumatol. 2014;66:327–39.
Magatti M, De Munari S, Vertua E, Parolini O. Amniotic membrane-derived cells inhibit proliferation of cancer cell lines by inducing cell cycle arrest. J Cell Mol Med. 2012;16:2208–18.
Kang NH, Hwang KA, Kim SU, Kim YB, Hyun SH, Jeung EB, et al. Potential antitumor therapeutic strategies of human amniotic membrane and amniotic fluid-derived stem cells. Cancer Gene Ther. 2012;19:517–22.
Soncini M, Vertua E, Gibelli L, Zorzi F, Denegri M, Albertini A, et al. Isolation and characterization of mesenchymal cells from human fetal membranes. J Tissue Eng Regen Med. 2007;1:296–305.
Gottesman MM. Mechanisms of cancer drug resistance. Annu Rev Med. 2002;53:615–27.
de Weger VA, Beijnen JH, Schellens JH. Cellular and clinical pharmacology of the taxanes docetaxel and paclitaxel--a review. Anticancer Drugs. 2014;25:488–94.
Bocci G, Di Paolo A, Danesi R. The pharmacological bases of the antiangiogenic activity of paclitaxel. Angiogenesis. 2013;16:481–92.
Vlahovic G, Karantza V, Wang D, Cosgrove D, Rudersdorf N, Yang J, et al. A phase I safety and pharmacokinetic study of ABT-263 in combination with carboplatin/paclitaxel in the treatment of patients with solid tumors. Invest New Drugs. 2014;32:976–84.
Huang TC, Campbell TC. Comparison of weekly versus every 3 weeks paclitaxel in the treatment of advanced solid tumors: a meta-analysis. Cancer Treat Rev. 2012;38:613–7.
Burris 3rd HA, Dowlati A, Moss RA, Infante JR, Jones SF, Spigel DR, et al. Phase I study of pazopanib in combination with paclitaxel and carboplatin given every 21 days in patients with advanced solid tumors. Mol Cancer Ther. 2012;11:1820–8.
Tolaney SM, Barry WT, Dang CT, Yardley DA, Moy B, Marcom PK, et al. Adjuvant paclitaxel and trastuzumab for node-negative, HER2-positive breast cancer. N Engl J Med. 2015;372:134–41.
Shiverick KT, Slikker Jr W, Rogerson SJ, Miller RK. Drugs and the placenta−a workshop report. Placenta. 2003;24:S55–9.
Vahakangas K, Myllynen P. Drug transporters in the human blood-placental barrier. Br J Pharmacol. 2009;158:665–78.
Wang H, Zhou L, Gupta A, Vethanayagam RR, Zhang Y, Unadkat JD, et al. Regulation of BCRP/ABCG2 expression by progesterone and 17beta-estradiol in human placental BeWo cells. Am J Physiol Endocrinol Metab. 2006;290:E798–807.
Novotna M, Libra A, Kopecky M, Pavek P, Fendrich Z, Semecky V, et al. P-glycoprotein expression and distribution in the rat placenta during pregnancy. Reprod Toxicol. 2004;18:785–92.
Lee NY, Lee HE, Kang YS. Identification of p-glycoprotein and transport mechanism of Paclitaxel in syncytiotrophoblast cells. Biomol Ther (Seoul). 2014;22:68–72.
Hemauer SJ, Patrikeeva SL, Nanovskaya TN, Hankins GD, Ahmed MS. Opiates inhibit paclitaxel uptake by P-glycoprotein in preparations of human placental inside-out vesicles. Biochem Pharmacol. 2009;78:1272–8.
Hemauer SJ, Nanovskaya TN, Abdel-Rahman SZ, Patrikeeva SL, Hankins GD, Ahmed MS. Modulation of human placental P-glycoprotein expression and activity by MDR1 gene polymorphisms. Biochem Pharmacol. 2010;79:921–5.
Giannakakou P, Sackett DL, Kang YK, Zhan Z, Buters JT, Fojo T, et al. Paclitaxel-resistant human ovarian cancer cells have mutant beta-tubulins that exhibit impaired paclitaxel-driven polymerization. J Biol Chem. 1997;272:17118–25.
Kavallaris M, Kuo DY, Burkhart CA, Regl DL, Norris MD, Haber M, et al. Taxol-resistant epithelial ovarian tumors are associated with altered expression of specific beta-tubulin isotypes. J Clin Invest. 1997;100:1282–93.
Goncalves A, Braguer D, Kamath K, Martello L, Briand C, Horwitz S, et al. Resistance to Taxol in lung cancer cells associated with increased microtubule dynamics. Proc Natl Acad Sci USA. 2001;98:11737–42.
Zhou J, O'Brate A, Zelnak A, Giannakakou P. Survivin deregulation in beta-tubulin mutant ovarian cancer cells underlies their compromised mitotic response to taxol. Cancer Res. 2004;64:8708–14.
The authors also would like to thank Fondazione Poliambulanza-Istituto Ospedaliero of Brescia, the physicians and midwives of the Department of Obstetrics and Gynecology of Fondazione Poliambulanza-Istituto Ospedaliero, and all of the mothers who donated placentas. This work was supported by Competitiveness ROP ERDF 2007−2013 of the Region of Lombardy (Regional Operational Programme of the European Regional Development Fund – Progetto NUTEC NUove TECnologie ID n.30263049), and the Italian Ministry of Health Ricerca Finalizzata (RF-2010-2315681).
Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
Arianna Bonomi
, Valentina Coccè
, Loredana Cavicchini
, Francesca Sisto
& Augusto Pessina
Centro di Ricerca E. Menni, Fondazione Poliambulanza-Istituto Ospedaliero, Via Bissolati, 57 I-25124, Brescia, Italy
Antonietta Silini
, Elsa Vertua
, Patrizia Bonassi Signoroni
& Ornella Parolini
Cellular Neurobiology Laboratory, Department of Cerebrovascular Diseases, Fondazione IRCCS Neurological Institute C. Besta, Milan, Italy
Giulio Alessandri
Search for Arianna Bonomi in:
Search for Antonietta Silini in:
Search for Elsa Vertua in:
Search for Patrizia Bonassi Signoroni in:
Search for Valentina Coccè in:
Search for Loredana Cavicchini in:
Search for Francesca Sisto in:
Search for Giulio Alessandri in:
Search for Augusto Pessina in:
Search for Ornella Parolini in:
Correspondence to Ornella Parolini.
AB performed experiments and wrote the manuscript, AS assisted in the research and contributed to writing the manuscript, EV, PB, VC, LC, and FS substantially contributed to data acquisition, analysis and interpretation, and contributed to drafting the manuscript. GA, AP, and OP contributed to the experimental design, supervised the research, and assisted in writing the manuscript. All authors read and approved the manuscript.
Arianna Bonomi and Antonietta Silini contributed equally to this work.
Mesenchymal Stromal Cell
Amniotic Membrane
Inhibit Tumor Cell Proliferation
Human Term Placenta | CommonCrawl |
(6)/(11)+(1)/(22) - adding of fractions
(6)/(11)+(1)/(22) - step by step solution for the given fractions. Adding of fractions, full explanation.
$ \frac{6}{11 }+\frac{ 1}{22 }=? $
The common denominator of the two fractions is: 22
$ \frac{6}{11} = \frac{2 \cdot 6}{2 \cdot 11} = \frac{12}{22} $
$ \frac{1}{22} = \frac{1 \cdot 1}{1 \cdot 22} = \frac{1}{22} $
$ \frac{6}{11} + \frac{1}{22} = \frac{12}{22} + \frac{1}{22} $
$ \frac{12}{22} + \frac{1}{22} = \frac{12 + 1}{22} $
$ \frac{12 + 1}{22} = \frac{13}{22} $
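A quick programmatic check of the same computation: a minimal sketch using Python's standard fractions module.

```python
from fractions import Fraction

# Fraction finds the common denominator and reduces the result automatically.
result = Fraction(6, 11) + Fraction(1, 22)
print(result)                     # 13/22
assert result == Fraction(13, 22)
```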
Dimitri Lozeve
Quick Notes on Reinforcement Learning
In this series of blog posts, I intend to write my notes as I go through Richard S. Sutton's excellent Reinforcement Learning: An Introduction (1).
I will try to formalise the maths behind it a little bit, mainly because I would like to use it as a useful personal reference to the main concepts in RL. I will probably add a few remarks about a possible implementation as I go on.
Relationship between agent and environment
Context and assumptions
The goal of reinforcement learning is to select the best actions availables to an agent as it goes through a series of states in an environment. In this post, we will only consider discrete time steps.
The most important hypothesis we make is the Markov property:
At each time step, the next state of the agent depends only on the current state and the current action taken. It cannot depend on the history of the states visited by the agent.
This property is essential to make our problems tractable, and often holds true in practice (to a reasonable approximation).
With this assumption, we can define the relationship between agent and environment as a Markov Decision Process (MDP).
A Markov Decision Process is a tuple \((\mathcal{S}, \mathcal{A}, \mathcal{R}, p)\) where:
\(\mathcal{S}\) is a set of states,
\(\mathcal{A}\) is an application mapping each state \(s \in \mathcal{S}\) to a set \(\mathcal{A}(s)\) of possible actions for this state. In this post, we will often simplify by using \(\mathcal{A}\) as a set, assuming that all actions are possible for each state,
\(\mathcal{R} \subset \mathbb{R}\) is a set of rewards,
and \(p\) is a function representing the dynamics of the MDP:
\[\begin{align} p &: \mathcal{S} \times \mathcal{R} \times \mathcal{S} \times \mathcal{A} \mapsto [0,1] \\ p(s', r \;|\; s, a) &:= \mathbb{P}(S_t=s', R_t=r \;|\; S_{t-1}=s, A_{t-1}=a), \end{align} \]
such that \[ \forall s \in \mathcal{S}, \forall a \in \mathcal{A},\quad \sum_{s', r} p(s', r \;|\; s, a) = 1. \]
The function \(p\) represents the probability of transitioning to the state \(s'\) and getting a reward \(r\) when the agent is at state \(s\) and chooses action \(a\).
We will also occasionally use the state-transition probabilities:
\[\begin{align} p &: \mathcal{S} \times \mathcal{S} \times \mathcal{A} \mapsto [0,1] \\ p(s' \;|\; s, a) &:= \mathbb{P}(S_t=s' \;|\; S_{t-1}=s, A_{t-1}=a) \\ &= \sum_r p(s', r \;|\; s, a). \end{align} \]
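To make these definitions concrete, here is a minimal sketch of how the dynamics \(p\) could be represented in Python; the two-state MDP and all names are illustrative, not taken from the book:

```python
# Dynamics p(s', r | s, a), stored as (s, a) -> {(s', r): probability}.
# A toy two-state MDP, purely for illustration.
p = {
    ("s0", "go"):   {("s1", 1.0): 0.8, ("s0", 0.0): 0.2},
    ("s0", "stay"): {("s0", 0.0): 1.0},
    ("s1", "go"):   {("s0", 0.0): 1.0},
    ("s1", "stay"): {("s1", 1.0): 1.0},
}

# Each conditional distribution p(. | s, a) must sum to 1.
for dist in p.values():
    assert abs(sum(dist.values()) - 1.0) < 1e-12

def transition_prob(s_next, s, a):
    """State-transition probability p(s' | s, a): marginalize over rewards."""
    return sum(prob for (sp, _r), prob in p[(s, a)].items() if sp == s_next)

print(transition_prob("s1", "s0", "go"))  # 0.8
```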
Rewarding the agent
The expected reward of a state-action pair is the function
\[\begin{align} r &: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R} \\ r(s,a) &:= \mathbb{E}[R_t \;|\; S_{t-1}=s, A_{t-1}=a] \\ &= \sum_r r \sum_{s'} p(s', r \;|\; s, a). \end{align} \]
The discounted return is the sum of all future rewards, with a multiplicative factor to give more weights to more immediate rewards: \[ G_t := \sum_{k=t+1}^T \gamma^{k-t-1} R_k, \] where \(T\) can be infinite or \(\gamma\) can be 1, but not both.
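Continuing the toy example above, the expected reward and the (finite-horizon) discounted return follow directly from their definitions:

```python
def expected_reward(s, a):
    """r(s, a) = sum over (s', r) of r * p(s', r | s, a)."""
    return sum(r * prob for (_sp, r), prob in p[(s, a)].items())

def discounted_return(rewards, gamma):
    """G_t for a finite sequence of rewards R_{t+1}, ..., R_T."""
    return sum(gamma ** k * r for k, r in enumerate(rewards))

print(expected_reward("s0", "go"))              # 0.8
print(discounted_return([1.0, 0.0, 1.0], 0.9))  # 1 + 0 + 0.81 = 1.81
```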
Deciding what to do: policies
Defining our policy and its value
A policy is a way for the agent to choose the next action to perform.
A policy is a function \(\pi\) defined as
\[\begin{align} \pi &: \mathcal{A} \times \mathcal{S} \mapsto [0,1] \\ \pi(a \;|\; s) &:= \mathbb{P}(A_t=a \;|\; S_t=s). \end{align} \]
In order to compare policies, we need to associate values to them.
The state-value function of a policy \(\pi\) is
\[\begin{align} v_{\pi} &: \mathcal{S} \mapsto \mathbb{R} \\ v_{\pi}(s) &:= \text{expected return when starting in $s$ and following $\pi$} \\ v_{\pi}(s) &:= \mathbb{E}_{\pi}\left[ G_t \;|\; S_t=s\right] \\ v_{\pi}(s) &= \mathbb{E}_{\pi}\left[ \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;|\; S_t=s\right] \end{align} \]
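In practice, \(v_{\pi}\) can be estimated by iterating the Bellman expectation equation (derived later in the book). A minimal sketch for the toy MDP above, with an arbitrary illustrative policy:

```python
def policy_evaluation(pi, states, actions, gamma=0.9, tol=1e-10):
    """Iteratively approximate v_pi using the Bellman expectation equation."""
    v = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = sum(
                pi[s][a] * sum(prob * (r + gamma * v[sp])
                               for (sp, r), prob in p[(s, a)].items())
                for a in actions
            )
            delta = max(delta, abs(new_v - v[s]))
            v[s] = new_v
        if delta < tol:
            return v

# Uniformly random policy over the two actions of the toy MDP.
pi = {s: {"go": 0.5, "stay": 0.5} for s in ("s0", "s1")}
v = policy_evaluation(pi, ["s0", "s1"], ["go", "stay"])
print(v)
```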
We can also compute the value starting from a state \(s\) by taking into account the action \(a\) taken in that state.
The action-value function of a policy \(\pi\) is
\[\begin{align} q_{\pi} &: \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R} \\ q_{\pi}(s,a) &:= \text{expected return when starting from $s$, taking action $a$, and following $\pi$} \\ q_{\pi}(s,a) &:= \mathbb{E}_{\pi}\left[ G_t \;|\; S_t=s, A_t=a \right] \\ q_{\pi}(s,a) &= \mathbb{E}_{\pi}\left[ \sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \;|\; S_t=s, A_t=a\right] \end{align} \]
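Given \(v_{\pi}\), the corresponding action values for the toy MDP follow from one application of the dynamics; again a sketch, using the standard Bellman relation between \(q_{\pi}\) and \(v_{\pi}\):

```python
def q_from_v(v, s, a, gamma=0.9):
    """q_pi(s, a): expected immediate reward plus discounted value of s'."""
    return sum(prob * (r + gamma * v[sp]) for (sp, r), prob in p[(s, a)].items())

print(q_from_v(v, "s0", "go"))  # uses v computed by policy_evaluation above
```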
The quest for the optimal policy
R. S. Sutton and A. G. Barto, Reinforcement learning: an introduction, Second edition. Cambridge, MA: The MIT Press, 2018.
Find all matrices that commute with given matrix
Find all $2\times 2$ matrices that commute with
$$\left( \begin{array}{cc} 2 & 3 \\ 1 & 4 \end{array} \right)$$
I know that a square matrix commutes with itself, the identity matrix of that order, the null matrix of that order and any scalar matrix of that order.
The answer has been given as:
$$\left( \begin{array}{cc} m & 3n \\ n & m+2n \end{array} \right)$$
I don't understand how they're getting that form. Can someone please explain?
linear-algebra matrices
Diya
Let's call your matrix $$A = \left( \begin{array}{cc} 2 & 3 \\ 1 & 4 \end{array} \right)$$
We want a matrix $X_{2\times 2} = \begin{pmatrix} a & b\\ c&d\end{pmatrix}$ such that $AX = XA$.
$$AX = \begin{pmatrix} 2a + 3c & 2b+3d\\ a + 4c&b+4d\end{pmatrix}$$
$$XA = \begin{pmatrix} 2a + b&3a + 4b\\2c+d & 3c+4d\end{pmatrix}$$
Now you have a system of equations in four variables:
$$2a + 3c = 2a + b \implies b = 3c$$
$$2b+3d = 3a + 4b$$
$$a + 4c = 2c + d$$
$$b+4d= 3c+4d$$
Solve the system of equations. (Note, if you do Gaussian Elimination, you'll have two of the four rows all zero.)
amWhy
You need $$\left(\begin{array}{cc}a&b\\c&d\end{array}\right) \left(\begin{array}{cc}2&3\\1&4\end{array}\right)= \left(\begin{array}{cc}2&3\\1&4\end{array}\right)\left(\begin{array}{cc}a&b\\c&d\end{array}\right)$$
Multiply out the matrices; that will give you four equations that connect $a,b,c$ and $d$. Then solve those equations.
Empy2
If you write down the unknown matrix as $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$ Then you want $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} $$ Write out the left and right sides as matrices with entries like 2a + 1b, etc. Set them equal. You get four equations in 4 unknowns. Solve, and you get the answer above.
John Hughes
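For anyone who wants to check the algebra mechanically, here is a short sketch with SymPy (an addition for verification, not part of the answers above):

```python
from sympy import symbols, Matrix, solve

a, b, c, d = symbols("a b c d")
A = Matrix([[2, 3], [1, 4]])
X = Matrix([[a, b], [c, d]])

# AX = XA gives four linear equations in a, b, c, d.
equations = list(A * X - X * A)
print(solve(equations, [b, d]))  # {b: 3*c, d: a + 2*c}
```

With $m = a$ and $n = c$ this is exactly the family $\begin{pmatrix} m & 3n \\ n & m+2n \end{pmatrix}$ given in the question.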
Algebraic K-theory
2010 Mathematics Subject Classification: Primary: 18F25 Secondary: 18-XX [MSN][ZBL]
A branch of algebra, dealing mainly with the study of the so-called $K$-functors ($K_0, K_1$, etc., cf. $K$-functor); it is a part of general linear algebra. It deals with the structure theory of projective modules and their automorphism groups. To put it more simply, it is a generalization of results obtained on the existence and uniqueness (up to an automorphism) of a basis of a vector space and other group-theoretical facts concerning linear groups over fields. On passing from a field to an arbitrary ring $R$ these theorems usually become invalid, and the Grothendieck group $K_0(R)$ and the Whitehead group $K_1(R)$ are, in a certain sense, a measure of their deviation from being true. Similar generalizations of the structure theorems of linear algebra appear in topology. A vector space can be regarded as a special case of a vector bundle. These objects may be studied with the aid of the homotopy theory of vector bundles and of topological $K$-theory. It is important to note in this connection that a projective module can be regarded as the module of sections of a vector bundle. This explains the choice of the class of projective modules as the object of the theory. Algebraic $K$-theory makes extensive use of the theory of rings, homological algebra, category theory and the theory of linear groups.
Algebraic $K$-theory has two different historical origins, both in the field of geometry. The first is related to certain topological obstructions. The starting point was the introduction of the concept of Whitehead torsion, which is connected with the homotopy equivalence of finite complexes and is an element in the Whitehead group, the latter being some quotient group of the group $K_1(R)$, where $R=\Z\;\Pi$ is the integral group ring of the fundamental group $\Pi$. The next step concerned topological spaces $X$ that are dominated by a finite complex, and their generalized Euler characteristic $\chi(A)$, which is an element of group $K_0(\Z\pi)$. The computation of the Whitehead group and $L$-groups (which is, strictly speaking, an algebraic problem concerning group rings), was in fact one of the first objectives of algebraic $K$-theory. Both $K_2$ and other higher functors have topological applications of the same type (for example, an obstruction to the deformation of a pseudo-isotopy of a closed manifold into an isotopy lies in some quotient group of the group $K_2(\Z\pi)$). Algebraic studies of the Whitehead group began in the 1940s. A related field is the study of the structure of linear groups over arbitrary rings, in particular, the theory of determinants over a skew-field [Ar].
The second origin of algebraic $K$-theory was an algebraic proof of the Riemann–Roch theorem [Ma] and its generalizations by A. Grothendieck in 1957. These considerations involved the introduction of the $K$-functor $K(X)$ as the group of values of a universal additive functor on coherent sheaves on a smooth algebraic variety. Moreover, the previously familiar representation rings, Witt rings (cf. Witt ring) of classes of quadratic forms, etc., turned out to be related constructions. The $K$-functor was then transferred to topology, in which it found numerous applications, and with its help several previously unsolved problems could be dealt with.
It became clear, moreover, that this construction reveals new perspectives in the understanding of old analytical problems (the index problem of elliptic operators), topological problems (extraordinary homology theories), and the theory of group representations. However, the development of algebraic $K$-theory for rings (beginning with the establishment of the correspondence (analogy) between projective, finitely-generated modules and vector bundles) was hindered by the fact that an adequate concept, analogous to that of suspension in topology, was lacking in algebra.
The 1950s and 1960s saw the beginning of the systematic study of projective modules over finite groups, and the development of one of the most important ideas on which algebraic $K$-theory is based — the idea of "stabilization" , the essence of which, roughly speaking, is that general relationships are more clearly manifested on passing to the limit of the dimension of the objects studied (e.g. linear groups or projective modules). Connections were noted between algebraic $K$-theory and the reciprocity laws of the theory of algebraic numbers and algebraic functions; studies were made of problems connected with congruence subgroups (cf. Congruence subgroup) and an algebraic analogue of the Bott periodicity theorem — the theory of polynomial extensions — was obtained.
For a ring $R$ with a unit element, the Grothendieck group $K_0(R)$ is defined as the Abelian group generated by the isomorphism classes of finitely-generated projective $R$-modules, with the defining relation:
$$[P_1]+[P_2] = [P_1\oplus P_2]$$ where $[P]$ is the class of modules isomorphic to the module $P$. Let $\def\GL{\textrm{GL}} \GL(n,R)$ be the general linear group over $R$, let $$A\mapsto \begin{pmatrix}A&0\\0&1\end{pmatrix}$$ be the imbedding of $\GL(n,R)$ in $\GL(n+1,R)$, let $\GL(R)$ be the direct limit of the groups $\GL(n,R)$, and let $\def\E{\textrm{E}}\E(R)$ be the subgroup in $\GL(R)$ generated by the elementary matrices $\def\l{\lambda} e_{ij}^\l$, i.e. by the matrices that have an element $\l\in R$ at the $(i,j)$-th place and agree with the unit matrix in all other places. $\E(R)$ then coincides with the commutator subgroup of $\GL(R)$. The quotient group $\GL(R)/\E(R)$ is denoted by $K_1(R)$, and is known as the Whitehead group. Finally, the Steinberg group $\def\St{\textrm{St}}\St(n,R)$ for $n\ge 3$ is defined by the generators $x_{ij}^\l$, $\l\in R$, $1\le i,j\le n$, $i\ne j$, and the relations
$$x_{ij}^\l x_{ij}^\mu = x_{ij}^{\l+\mu},$$
$$[x_{ij}^\l,x_{jl}^\mu] = x_{il}^{\l\mu} \textrm{ if } i\ne l,$$
$$[x_{ij}^\l,x_{kl}^\mu] = 1 \textrm{ if } j\ne k,\; i\ne l.$$ Passing to the direct limit, one obtains the group $\St(R)$ and a natural homomorphism
$$\def\phi{\varphi} \phi:\St(R) \to \E(R),$$ with
$$\phi(x_{ij}^\l) = e_{ij}^\l.$$ The kernel $\ker\phi$ is denoted by $K_2(R)$ (the Milnor group). It coincides with the centre of $\St(R)$. Thus, $K_0, K_1$ and $K_2$ are functors from the category of rings into the category of Abelian groups. Each of the functors $K_0$ and $K_1$ can be characterized as a functor from finitely-generated projective modules to Abelian groups that satisfies certain properties and is universal with respect to these properties. Such a "universal" characterization makes it possible to define analogues of the functors $K_0$ and $K_1$ on "sufficiently good" categories. In particular, for the category of Noetherian $R$-modules functors $G_i(R)$ quite close to $K_i(R)$ can be defined.
Examples of the groups $K_i(R)$. If $R$ is a skew-field with multiplicative group $R^*$, then $K_0(R) = \Z$ is the group of integers and $K_1(R) = R^*/[R^*,R^*]$. The group $K_2(\Z)$ is the cyclic group of order two, and if $R$ is a finite field, then $K_2(R)=0$.
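As a standard illustration of the description of $K_1$ above (a sketch, not part of the original article): if $k$ is a commutative field, the determinant homomorphisms $\det:\GL(n,k)\to k^*$ are compatible with the imbeddings of $\GL(n,k)$ in $\GL(n+1,k)$ and send every elementary matrix $e_{ij}^\l$ to $1$, so they induce a homomorphism $$\det: K_1(k) = \GL(k)/\E(k) \to k^*.$$ Gaussian elimination shows that $\textrm{SL}(n,k)$ is generated by elementary matrices, so this homomorphism has trivial kernel and $K_1(k)\cong k^*$, in agreement with the formula $K_1(R) = R^*/[R^*,R^*]$ since $k^*$ is commutative.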
An important result in algebraic $K$-theory is the exact Mayer–Vietoris sequence for a Cartesian square. Suppose the diagram below is a Cartesian square of ring homomorphisms in which $f_1$ is an epimorphism;
$$\require{AMSmath} \def\mapright#1{\xrightarrow{#1}} \def\mapdown#1{\Big\downarrow\rlap{\raise2pt{\scriptstyle{#1}}}} \begin{array}{ccc} R & \mapright{h_2} & R_2 \\ \mapdown{h_1} & & \mapdown{f_2} \\ R_1& \mapright{f_1} & R' \end{array}\phantom{h},$$ then there is an exact sequence
$$K_1(R) \to K_1(R_1)\oplus K_1(R_2)\to K_1(R')\to K_0(R)\to K_0(R_1)\oplus K_0(R_2)\to K_0(R').$$
If $f_2$ is also an epimorphism, then the sequence is supplemented by the terms
$$K_2(R)\to K_2(R_1)\oplus K_2(R_2)\to K_2(R')\to K_1(R)\to \; \cdots$$ If $I$ is a two-sided ideal of $R$, then the Mayer–Vietoris sequence makes it possible [Mi] to define the relative functors $K_i(R,I)$, which yield an exact sequence
$$K_2(R,I)\to K_2(R)\to K_2(R/I) \to K_1(R,I)\to K_1(R)\to K_1(R/I)\to K_0(R,I)\to K_0(R)\to K_0(R/I).$$
A fairly complete study has been made of the behaviour of $K$-functors on passing from a ring $R$ to its localization with respect to a central, multiplicatively-closed system. In particular, if certain conditions on $R$ are satisfied, then the following exact sequence has been found for the functor $G_0(R)$:
$$\bigoplus_{s\in S} G_0(R/(s))\to G_0(R)\to G_0(S^{-1}R)\to 0.$$ If $R$ is commutative, $K_0(R)$ becomes a ring with a unit element by introducing the multiplication induced by the tensor product of modules. There exists a split epimorphism of $K_0(R)$ onto the ring $H(R)$ of continuous integer-valued functions (the ring $\Z$ is given the discrete topology) on the spectrum of $R$ (cf. Spectrum of a ring). The kernel of this homomorphism is denoted by $\def\a{\alpha} \tilde K_0(R)$. It is known that $\tilde K_0(R)$ is the nil radical of $K_0(R)$ and, if $R$ is Noetherian and if the dimension of its maximal spectrum is $\a$, then $\tilde K_0(R)^{\a+1} = 0$. If this dimension is at most 1, then $\tilde K_0(R)$ is isomorphic to the Picard group $\textrm{Pic}(R)$.
For arithmetical rings there are finiteness theorems for the functors $K_i(R)$ and $G_i(R)$. In fact, if $A$ is the ring of integers or the ring of polynomials over a finite field, and $R$ is an $A$-order and at the same time an $A$-lattice in a semi-simple finite-dimensional algebra over the field of fractions of the ring $A$, then the groups $K_i(R)$ and $G_i(R)$ are finitely generated ($i=0,1$).
The development of algebraic $K$-theory was stimulated by studies carried out on the problem of congruence subgroups: Do all subgroups of finite index in an arithmetical group contain some congruence subgroup? This question is closely connected with the problem of computing the group $K_1(R,I)$ for ideals $I$ in $R$.
Of the results concerning the stable structure of projective modules one can mention the following theorem: If $R$ is a commutative Noetherian ring whose maximal spectrum has dimension $d$, and $A$ is a module-finite $R$-algebra, then any finitely-generated projective $A$-module $P$ such that
$$\def\fm{\mathfrak{m}} P_\fm \cong A_\fm^{d+1}\oplus Q$$ for all maximal ideals $\fm$ of $R$ is isomorphic to $A\oplus N$ (here $M_\fm$ denotes the localization of a module $M$ at $\fm$). Another important theorem on the structure of projective modules is the cancellation theorem: Let $R$, $A$ and the module $P$ be as above. Let $Q$ be a finitely-generated projective $A$-module, and let $M$ and $N$ be arbitrary $A$-modules. Then it follows from $Q\oplus P\oplus M\cong Q\oplus N$ that
$$P\oplus M\cong N.$$ The stable rank of a ring $R$ is closely connected with problems of the stable structure of projective modules. Thus, if $R$ is a commutative ring of stable rank smaller than $d$, then
$$K_1(R) \cong \GL(d,R)/\E(d,R).$$ In connection with the theory of induced representations of groups, the functors $K_i$ for group rings have been studied. One of the results of these studies is that if $G$ is a finite group of order $n$ and $C$ is the set of cyclic subgroups of $G$, then the index of the subgroup
$$\sum_{c\in C} \textrm{Im} (K_i(Rc)\to K_i(RG))$$ in $K_i(RG)$ is divisible by $n$ if $i=0,1,2$.
Regarding polynomial ring extensions it is known that if $R$ is a regular ring, then
$$K_0(R[t])\cong K_0(R[t,t^{-1}])\cong K_0(R),$$
$$K_1(R[t])\cong K_1(R).$$ Moreover, the sequence
$$0\to K_1(R)\to K_1(R[t])\oplus K_1(R[t^{-1}])\to K_1(R[t,t^{-1}])\to K_0(R) \to 0$$
is exact for any ring $R$.
One result in the computation of the functor $K_2(R)$ is the theorem of Matsumoto: If $R$ is a field, then $K_2(R)$ is given by the generators $\{a,b\}, a,b\in R^*$ and the relations $\{a,bb'\} = \{a,b\}\{a,b'\},\; \{aa',b\} = \{a,b\}\{a',b\},\; \{a,1-a\} = 1$, where the latter holds for all $ a\in R^*, a \ne 1$.
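Two standard consequences of Matsumoto's relations, recorded here for convenience (they follow formally from the relations above and are not stated in the original text): for all $a,b\in R^*$, $$\{a,-a\} = 1 \quad\textrm{and}\quad \{a,b\}\{b,a\} = 1,$$ so the symbol $\{\cdot,\cdot\}$ is skew-symmetric.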
In the 1970s there appeared numerous versions of the definitions of the functors $K_i$ for $i\ge 2$. It has been shown [Ba3] that these theories coincide and yield the classical functors $K_n$ if $n\le 2$. In several cases effective methods of computation for higher $K$-groups were found. The development of unitary $K$-theory ([Ba3], Vol. 3), which studies analogous problems for modules on which quadratic and bilinear forms are defined, also began in that decade.
Algebraic $K$-theoretic ideas and results have become most important in certain parts of functional analysis centering around $C^*$-algebras (cf. $C^*$-algebra), especially in the form of $KK$-theory (Kasparov $K$-theory); cf. e.g. [Cu].
In algebraic geometry there are important connections with the Chow groups (cf. Chow ring).
[Ar] E. Artin, "Geometric algebra", Interscience (1957) MR0082463 Zbl 0077.02101
[At] M.F. Atiyah, "K-theory. Lectures", Benjamin (1967) MR0224083
[Ba] H. Bass, "Lectures on topics in algebraic K-theory", Tata Inst. (1966) MR0279159 Zbl 0226.13006
[Ba2] H. Bass, "Algebraic K-theory", Benjamin (1968) MR0249491 Zbl 0174.30302
[Ma] Yu.I. Manin, "Lectures on the K-functor in algebraic geometry" Russian Math. Surveys, 24 : 5 (1969) pp. 1–89 Uspekhi Mat. Nauk, 24 : 5 (1969) pp. 3–86 MR0265355 Zbl 0204.21302
[Mi] J.W. Milnor, "Introduction to algebraic K-theory", Princeton Univ. Press (1971) MR0349811 Zbl 0237.18005
[Sw] R.G. Swan, "Algebraic K-theory", Springer (1968) MR0245634 Zbl 0193.34601
[SwEv] R.G. Swan, E.G. Evans, "K-theory of finite groups and orders", Springer (1970) MR0308195 Zbl 0205.32105
Algebraic K-theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algebraic_K-theory&oldid=36232
This article was adapted from an original article by A.V. Mikhalev, A.I. Nemytov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
The dynamics of the antibiotic resistome in the feces of freshly weaned pigs following therapeutic administration of oxytetracycline
Mahdi Ghanbari (ORCID: orcid.org/0000-0002-8846-4947)1, Viviana Klose1, Fiona Crispie2 & Paul D. Cotter (ORCID: orcid.org/0000-0002-5465-9068)2
In this study, shotgun metagenomics was employed to monitor the effect of oxytetracycline, administered at a therapeutic dose, on the dynamics of the microbiota and resistome in the feces of weaned pigs. Sixteen weaning pigs were assigned to one of two treatments: a standard starter diet for 21 days, or an antibiotic-supplemented diet (10 g oxytetracycline/100 kg body weight/day) for 7 days, followed by 14 days of the standard starter diet. Feces were collected from the pigs on days 0, 8, and 21 for microbiota and resistome profiling. Pigs receiving oxytetracycline exhibited a significantly greater richness (ANOVA, P = 0.034) and diversity (ANOVA, P = 0.048) of antibiotic resistance genes (ARGs) than the control pigs. Antibiotic administration significantly enriched the abundances of 41 ARGs, mainly from the tetracycline, beta-lactam and multidrug resistance classes. Compositional shifts in the bacterial communities were observed following 7 days of antibiotic administration, with the medicated pigs showing an increase in Escherichia (Proteobacteria) and Prevotella (Bacteroidetes) populations compared with the nonmedicated pigs. This might be explained by the potential of these taxa to carry ARGs that may be transferred to other susceptible bacteria in the densely populated gut environment. These findings will help in the optimization of therapeutic schemes involving antibiotic usage in swine production.
Antibiotics have been used for decades in swine and other livestock production for both therapeutic (e.g. treatment of specific diseases) and nontherapeutic (growth promotion) purposes1,2. For years, nontherapeutic (low-dose) application of antibiotics as growth promoters was linked with beneficial effects; however, data support the view that this practice possibly contributes to the emergence of antimicrobial-resistant bacteria, thus exacerbating the problem of antibiotic resistance in animal and human pathogens2,3,4,5,6,7,8,9,10. Additionally, therapeutic doses of antibiotics can result in subinhibitory concentrations for some host-associated bacteria, enhancing the selection for antibiotic resistance genes (ARGs) and the horizontal transfer of these genes4.
Tetracyclines are a broad spectrum and relatively low cost group of antibiotics, of which tetracycline, chlortetracycline and oxytetracycline are frequently employed in veterinary medicine5,11,12. Tetracyclines have several therapeutic indications, which are associated with various infections in food-producing animals (e.g., infections caused by Mycoplasma, Chlamydia, Pasteurella, Clostridium, Ornithobacterium rhinotracheale, and some protozoa11,12). In food-producing species, including swine, first-generation tetracyclines (e.g., oxytetracycline) are most frequently employed. Therapeutic indications in animals comprise respiratory infections, dermal and soft tissue infections, peritonitis, metritis, and other enteric infections5,8,11. The recommended dosage of oxytetracycline for pigs for therapeutic purposes is 40 mg oxytetracycline hydrochloride/kg body weight (KBW)/day, for 7–10 days. In numerous countries, tetracyclines are included in feed, not just for therapeutic purposes but also at subtherapeutic doses to promote growth in swine2,3,5,11,13. Consumer apprehension associated with emerging bacterial resistance has led to antibiotics being no longer used for such purposes in some regions including the EU, the USA, New Zealand, Chile, Bangladesh, South Korea, and Vietnam14,15. However, nontherapeutic administration of tetracyclines for growth promotion purposes is still allowed in many other countries3,11,12.
We hypothesized that a therapeutic application of an antibiotic could cause detectable and long-term changes in the pig fecal microbiota and resistome composition. To test this hypothesis, we investigated the effect of a 7-day oxytetracycline administration at a therapeutic dose, and of its withdrawal, on the diversity and abundance of the fecal antibiotic resistome and microbiota in freshly weaned pigs using a whole-metagenome shotgun sequencing approach. The findings of this study have important implications for swine production and public health, since there is a paucity of research on the effects of therapeutic doses of antibiotics on the diversity and abundance of the gut microbiota as well as the antibiotic resistome.
Within the first week of the feeding trial, three pigs (two from the antibiotic-medicated group and one from the control group) showed symptoms of E. coli infection and were treated once with the anti-inflammatory drug dexamethasone as well as with a third-generation fluoroquinolone (3 mL/100 KBW) for three consecutive days. Therefore, metagenome data from these animals were not included in the downstream data processing.
Sequencing generated approximately 600 million sequences, ranging from 8.18 to 19 million per sample (Supplementary Table S1). The average quality score (Phred scores) across all the samples was 35.11 and ranged from 32.6 to 40. Phred scores greater than Q30 indicated that there was a less than 0.1% chance that a base was called incorrectly. Quality filtering of datasets resulted in the removal of 0.19% of the reads with Phred score <33 as well as removal of 2.5% of the reads that were classified as belonging to the host and PhiX genome.
Diversity and composition of the gut resistome
Using the MEGARes database with a 90% gene cutoff fraction, 490,000 reads were aligned to 648 AMR genes across both groups. The AMR genes were classified into 19 unique classes of resistance, 49 mechanisms and 175 groups (Supplementary Tables S2 and S3). Following antibiotic administration for 7 days, the pigs receiving oxytetracycline were enriched in ARGs and had a higher diversity of ARGs (Fig. 1). Alpha diversity analysis revealed that the overall size of the resistome (i.e., the number of unique ARGs) was significantly affected (ANOVA, P < 0.05) by the day as well as by the oxytetracycline treatment (Fig. 1), with the highest resistome diversity in both groups observed on day 0 and the lowest diversity observed at the end of the trial. Linear mixed-effect analysis of the diversity indices showed significant differences in the richness (P = 0.034) and diversity (P = 0.048, Shannon index) of ARGs between the pigs of the control and antibiotic-medicated groups from day 0 to day 8 but not at day 21 following the withdrawal period.
Richness and diversity (Shannon) of antibiotic resistance genes across treatments and time points. The richness and diversity are presented with the median values indicated (central black horizontal lines); the 25th and 75th percentiles are indicated (boxes), and the whiskers extend from each end of the box to the most extreme values within 1.5 times the interquartile range from the respective end. ANOVA, *P < 0.05, **P < 0.01, and ***P < 0.001, respectively.
In a further assessment of the resistome composition and diversity, NMDS analysis based on the Bray-Curtis dissimilarity metric displayed a clear separation of the medicated animals from the nonmedicated animals at days 8 and 21 (Fig. 2). The two-way PerMANOVA test followed by pairwise post-hoc comparisons (https://github.com/leffj/mctoolsr/) showed a significant difference in the profile of the relative abundance of ARGs between the treatments for both day 8 (q = 0.038) and day 21 (q = 0.010) but not for day 0 (q = 0.262). While the pigs medicated with oxytetracycline clearly diverged from the nonmedicated pigs at day 8 according to the NMDS ordination, the ARG profile in the medicated group at day 21 tended to be closer to that of the nonmedicated group, indicating a resilience of the bacterial communities carrying ARGs to the antibiotic perturbation.
NMDS ordinations based on the Bray–Curtis dissimilarity metric showing the changes in ARG compositions in the antibiotic and control groups over time (stress = 0.095, R = 0.40, and P = 0.001). The low 2D stress value indicates that these data were well represented by the two-dimensional ordinations. Ellipses indicate 95% confidence intervals of the multivariate t-distribution around the centroids of the groupings, with treatment and sampling time point as factors.
Tetracycline resistance was the predominant class to which the reads were aligned, with beta-lactam resistance constituting most of the remaining reads (Fig. 3a). In fact, regardless of antibiotic medication, all the samples harbored a diverse range of ARGs (Fig. 3). In the tetracycline class, the main mechanism of resistance detected was through resistance ribosomal protection proteins (RRPPs). The main mechanism of resistance within the beta-lactam class was Class A beta-lactamases (CABLs). In addition to CABLs and RRPPs, the other predominant mechanisms of resistance were multidrug efflux pumps, multidrug resistance regulators, macrolide resistance efflux pumps, and lincosamide nucleotidyltransferases (Fig. 3b).
Normalized relative abundances of the top 10 classes (a) and mechanisms (b) as well as of the top 20 groups of ARGs (c) in the feces of medicated and nonmedicated pigs at different time points (d0, d8, d21).
Differential abundance analysis revealed that from day 0 to day 8 (the last day of antibiotic administration), oxytetracycline feeding significantly enriched the abundances of 41 ARGs (q < 0.05), which were mainly from the tetracycline, beta-lactam and multidrug resistance classes (Fig. 4a, Supplementary Table S4). Further analysis of the samples at day 21, two weeks after the withdrawal of antibiotic administration, showed that 17 ARGs remained significantly more abundant (q < 0.05) in antibiotic-treated pigs than in the control group (Fig. 4b, Supplementary Table S5).
Significant (q < 0.05) log-fold changes in the abundances of ARG hits (summarized according to resistance mechanism and colored according to the class of resistance) in samples from day 0 to day 8 (a) and day 21 (b). Positive log-fold changes indicate an increase in abundance, while negative log-fold changes indicate a reduction in abundance over time in the antibiotic-medicated pigs compared to the control group.
Diversity and composition of the gut microbiome
Comparison of microbial community structure using the alpha diversity indices revealed that the total number of detected species (richness) as well as their diversity (Shannon) were lower for the communities in medicated animals than for the control animals during the antibiotic treatment period (day 0 to day 8) (Fig. 5), with the values decreasing further during the withdrawal period (Fig. 5).
Richness and diversity (Shannon) of taxa across treatments and over time. The indices are presented with the median values indicated (central black horizontal lines); the 25th and 75th percentiles are indicated (boxes), and the whiskers extend from each end of the box to the most extreme values within 1.5 times the interquartile range from the respective end. Data points beyond this range are displayed as small black circles. ANOVA, *P < 0.05 and **P < 0.01, respectively.
The temporal shifts of the bacterial communities were relatively similar to those of the ARGs, with the medicated pigs diverging from the nonmedicated pigs (Fig. 6). In fact, analysis of the community structures showed significant differences between medicated and nonmedicated animals after antibiotic treatment (day 8, post hoc two-way PerMANOVA, q = 0.04). Taken together, these data indicate that oxytetracycline administration reduced both the richness and the diversity of the bacterial community in the gut microbiota of the pigs and that the gut bacterial community diversity did not fully recover, despite the withdrawal of the antibiotic for two weeks.
NMDS ordinations based on the Bray–Curtis dissimilarity metric showing the shift in the composition of the bacterial community in the antibiotic and control groups over time (stress = 0.14, R = 0.45, and P = 0.001). The 2D stress value was lower than 0.17, indicating that these data were well represented by the two-dimensional ordinations. Ellipses indicate 95% confidence intervals of the multivariate t-distribution around the centroids of the groupings, with treatment and sampling time point as factors. Each point represents a pig, with sequences clustered based on classification at the species level.
Taxonomic profiling of the fecal samples was performed to determine whether the (temporal) changes in ARG profiles were associated with changes in the fecal microbial population structure in response to oxytetracycline administration. The distribution of the most abundant phyla and genera in the feces over the course of the study can be seen in Fig. 7. The results showed that among the dominant taxa, Firmicutes exhibited a decreased relative abundance in the medicated animals, while the phyla Bacteroidetes and Proteobacteria exhibited increased abundances (q < 0.05, Fig. 7a, Supplementary Tables S6 and S7) on day 8 and day 21. Interestingly, the enrichment of Bacteroidetes in the feces of the medicated animals was proportional to the decrease in Firmicutes abundance (Fig. 7b). This oxytetracycline-derived shift to a Bacteroidetes-dominant microbial community was also observed when the medicated animals were compared to the pretreatment animals.
The 10 and 20 most abundant taxa in the bacterial communities at the phylum (a) and genus (b) levels, respectively, according to Metaxa2 analysis.
Differential abundance analysis of the species-level taxonomic assignments revealed significant differences between the microbiota of the medicated and nonmedicated pigs. Many taxa exhibited relatively decreased abundances with antibiotic administration, most of which were from the phylum Firmicutes (Fig. 8). However, the abundances of representatives of the genera Escherichia−Shigella, Acidaminococcus, Marvinbryantia, Prevotella, Blautia, Parabacteroides, Paludibacter, Megasphaera, Clostridium, Sporobacterium and Achromobacter and of an unclassified Lachnospiraceae were significantly enriched (q < 0.05) in the fecal microbiota of the antibiotic-treated animals (Fig. 8, Supplementary Tables S8 and S9). The relative increase in the abundances of Prevotella spp. and Parabacteroides spp. was reflected by an overall increase in proportions of the phylum Bacteroidetes in the oxytetracycline-treated animals. The change in proportion of Prevotella was particularly notable; while this genus was among the low-abundance taxonomic groups during the pretreatment period, its abundance increased consistently over time, and Prevotella remained by far the most dominant taxonomic group until the end of the feeding trial.
Significant (q < 0.05) log-fold changes in the abundances of bacterial species in samples from day 0 to day 8 (a) and day 21 (b). Positive log-fold changes indicate an increase in abundance, while negative log-fold changes indicate a reduction in abundance over time in the antibiotic-medicated pigs compared to the control group.
One of the important questions in microbiome research relates to the extent to which production practices and environmental factors affect microbiota transmission, acquisition, and function16. To address this question, one approach used is experimental manipulation of gut systems to measure the impact, such as the effect of diet or antibiotic use on the microbiome16. In the current study, we employed shotgun metagenomics to explore the effect of in-feed oxytetracycline and its withdrawal on the dynamics of the fecal microbiota composition as well as the microbial resistome in postweaned swine over a 21-day period. Oxytetracycline is one of the most frequently employed antibiotic compounds in swine production in the European Union and the United States, with use in disease prevention as well as feed efficiency improvement11,12,14.
The biodiversity analysis results revealed the presence of diverse resistance genes in the fecal microbiome of the pigs, even in the absence of antibiotic pressure. In fact, ARG types, including genes encoding resistance to beta-lactams and tetracycline as well as multidrug resistance genes, were highly abundant in both medicated and nonmedicated pigs. Although there was a similarity in the ARG classes detected in this study with those reported for human feces and environmental samples17, the prevalent ARGs detected in this study were different from those found in human feces, river water, and sediments17. This finding supports the theory that specific ARGs are associated with particular environments and are not randomly distributed18, and that the constant selective pressure of antibiotic administration for over 50 years in swine production seems to have led to a high background level of resistance in the swine gut resistome6.
Oxytetracycline administration resulted in a detectable increase in the diversity and abundance of resistance genes over and above the large background resistance, though the gut resistome diversity mostly recovered after two weeks of antibiotic withdrawal. Consistent with our results, Noyes et al.19 and Looft et al.6 observed an increase in the abundance and diversity of antimicrobial resistance genes in feedlot pens where animals were administered tetracycline and ASP250 during feeding. As expected, tetracycline resistance genes were significantly enriched in the feces of the medicated animals in the current study.
Generally, efflux pumps, ribosome protection and tetracycline modification are the primary means via which bacteria are afforded resistance to tetracycline12. Consistent with our findings, ribosome protection seems to be the most prevalent of these mechanisms in nature20.
Many of the ribosomal protection protein determinants, such as tetQ and tetM, are located on mobile genetic elements, and this may have facilitated the spread of these genes throughout eubacteria via lateral gene transfer events21. In the current study, the tetQ gene, which is often associated with conjugative transposons in members of the Bacteroidetes (Prevotella, Bacteroides, Parabacteroides, Paludibacter)21, represented the most dominant group of ARGs in the medicated animals, suggesting that the bacteria in the guts of medicated animals may become resistant mainly by acquisition of this gene.
As expected, oxytetracycline administration resulted in the enrichment of some tetracycline resistance genes, most likely due to a direct interaction. However, a collateral effect of antibiotic administration was observed, in that some ARGs that do not confer resistance toward oxytetracycline (e.g., rpoB, oxA, catP, TEM, mphA, cme, CTX, carB, gyrA, parE) also exhibited increased abundance with in-feed oxytetracycline, indicating an indirect mechanism of selection. Looft et al.6 suggested that this is likely due to the co-presence of some ARGs on mobile genetic elements conferring resistance to the administered antibiotic. Accordingly, further analysis in our study revealed that a majority of these enriched ARGs have been found on mobile genetic elements such as plasmids and integrons, which carry at least two other resistance genes (data not shown). The co-occurrence of ARGs on mobile genetic elements could promote the spread of these genes22 and could further facilitate horizontal transfer of these resistance gene clusters to potential human pathogens such as E. coli in the swine gut or the agricultural environment6. Together, the results showed that in-feed oxytetracycline enriched the abundance of resistance genes specific to (and beyond) the administered antibiotic in the pig fecal microbiome.
Based on the analyses of the microbiota, we conclude that the fecal microbial diversity increased over time and shifted toward an adult-type microbiota, which is consistent with previous studies in pigs23,24,25,26. Overall, the Firmicutes and Bacteroidetes phyla were the predominant taxa in the fecal microbiota of the pigs, accounting for more than 90% of the bacterial population during the postweaning period24,27,28. The analysis also revealed that the therapeutic dose of oxytetracycline caused a reduction in overall species richness and diversity in the medicated animals and that the reduction lasted even after antibiotic administration was discontinued. Although the reduction was not statistically significant at the community level, the antibiotic treatment resulted in significant and enduring changes at the species level, indicating that particular groups within the microbial communities confer greater resistance to antibiotic-induced perturbation than other gut microbiota members, which could be due to the specific effect of the antibiotic29.
In this study, the most notable change in bacterial abundance was the increase in the abundances of Bacteroidetes and Proteobacteria during the first 7 days of oxytetracycline exposure, which was mainly observed as increased Prevotella, Parabacteroides, Paludibacter (Bacteroidetes) and Escherichia (Proteobacteria) abundances. Similar to our findings, ASP250 administration for three weeks has been shown to cause detectable divergence in the swine gut microbiota, including an increase in Proteobacteria abundance, which was correlated with increased Escherichia spp. abundance6. However, when amoxicillin and the β-lactamase inhibitor clavulanic acid were applied together, in the feed and via intramuscular injection, decreased E. coli abundance was observed in pigs30. Escherichia has been found to encode various ARGs, such as resistance genes for beta-lactams (cfxA3) and tetracycline (tetQ), genes for multidrug resistance (acrA, mdtH, mdtL and mdtO), and other genes (dimethyladenosine transferase)31. The phylum Bacteroidetes has been found to decrease in pigs fed tylosin32 and ASP2506, while carbadox administration has been reported to increase the abundance of this phylum during the early phase of administration4. An increase in the ratio of Bacteroidetes to Firmicutes has recently been linked to increased short-chain fatty acid (SCFA) production in mice in response to fructo-oligosaccharide administration33. However, other studies have also highlighted possible negative impacts of enriched Bacteroidetes populations in the gut34,35. In terms of the gut resistome, the observed increase in the abundances of Prevotella, Parabacteroides, and Paludibacter in the medicated animals in the present study might be due to the potential of these taxa to carry ARGs that may be transferred to other susceptible bacteria in a densely populated microbial environment like the swine gut36,37.
Interestingly, previous studies have clearly highlighted the occurrence of tetracycline resistance genes (mainly tetQ) in taxa from Escherichia, Parabacteroides and Prevotella21,31,37,38. Recently, the relative abundances of Prevotella, Paludibacter, and Parabacteroides have been reported to be significantly correlated with the abundances of aminoglycoside, beta-lactam, MLS, sulfonamide, and tetracycline resistance genes and the abundances of transposases39. Blautia, Acidaminococcus and Megasphaera from Firmicutes were also found to be significantly enriched in the feces of the medicated swine. Blautia has been reported to harbor tetracycline resistance genes (tetQ, tetO, tet32, tetM) and a MLS resistance gene (ermB)31,38. Similarly, Acidaminococcus and Megasphaera have also been reported to carry tetracycline resistance genes (tetO, tetW)40,41. Overall, different patterns of shifts in microbial populations have been reported when different antibiotics were administered to pigs4,6,28,42, indicating that the effects of antibiotics on some microbial members are specific to the antibiotic being administered and depend on the varying collateral effects of different antibiotics.
In this study, the experimental design featured environmental controls such as host genetic control, no application of antibiotics to the sows or pigs prior to the experiment, and identical diets except for the inclusion of oxytetracycline for one treatment group. However, a limitation of the present study is that resistome profiling of the feeding trial facility environment, as well as of the feed samples in the pre- and post-weaning phases, was not performed. The lack of this information may have impacted the accuracy of our findings to some extent. Despite this limitation, this study represents the first report on the use of shotgun metagenomics for studying the dynamics of gut microbiome and antibiotic resistome alterations in swine.
Further research is recommended to look beyond metagenomics-based resistome profiling and at effects on (AR) gene expression and even on the proteome and metabolome level. Additionally, given the widespread distribution of phages in the gut environment, the role of phages in the acquisition and spread of ARGs should be considered in future studies. Despite the recent observation that ARGs are rarely encoded in phage genomes43, the bacterium-phage interaction and subsequent (antibiotic resistance) gene transfer in the gut environment has not been fully investigated.
In this study, collateral effects of in-feed oxytetracycline administration at a therapeutic dose on the pig fecal antibiotic resistome were observed. Even a short-term administration of oxytetracycline increased the abundance and diversity of ARGs, including those conferring resistance to antibiotics that were not administered, and increased the abundance of Proteobacteria, including the population of E. coli, a potential human pathogen. Although the effect of the therapeutic application on ARGs diminished over time, some ARGs remained significantly more abundant (q < 0.05) in medicated pigs than in the control group two weeks after the withdrawal of antibiotic administration.
The animal experiments were conducted under a protocol approved by the office of the Lower Austrian Region Government, Group of Agriculture and Forestry, Department of Agricultural Law (approval codes LF1-TVG-39/038-2016). The trial was carried out at the Center of Animal Nutrition (Tulln, Austria). All experiments and methods were conducted according to relevant guidelines and regulations.
Animals and experimental design
Sixteen freshly weaned pigs (sow: Landrace × Large White, boar: Pietrain) that were ∼28 days old were selected for this study. Upon arrival the animals were individually housed and maintained in similar climatically controlled rooms. After four days of adaptation with ad libitum access to a standard starter diet (Table 1), the pigs were blocked (row-column design44) by sex (2) and ancestry (4), and within each block, the animals were randomly allocated to one of two treatments (n = 8 pigs/treatment): 1) standard starter diet for 21 days (control group) or 2) antibiotic-supplemented diet (10 g oxytetracycline Agrar-Service/100 KBW/day, corresponding to 40 mg oxytetracycline hydrochloride/KBW/day) for 7 days (recommended therapeutic dosage by the manufacturer). The treatment was followed by 14 days of standard starter diet (antibiotic group). For the duration of the study, the pigs were allowed ad libitum access to water and feed and all dietary treatments were equally represented in each room to remove any variation due to environmental factors.
Table 1 Composition of the pig starter diet.
Fecal sampling, DNA extraction, library preparation and sequencing
Fecal materials were obtained from the individual pigs by rectal stimulation on days 0 (before oxytetracycline treatment), 8 (after the oxytetracycline treatment), and 21 (two weeks after the withdrawal of oxytetracycline), and stored in sterile containers at –20 °C until processed. Total DNA was extracted from the fecal samples using the QIAamp PowerFecal Kit (Qiagen, Crawley, West Sussex, UK) following the manufacturer's instructions, with some modifications recommended by Hart et al.45. The final DNA was eluted in 100 μL of 10 mM Tris buffer (pH 8) after being incubated for 5 min for maximum elution efficiency. A Qubit fluorometer (Qubit 3, Invitrogen) was used to determine the total DNA concentration, and purity was assessed via the 260/280 and 260/230 absorbance ratios using a spectrophotometer (NanoDrop® ND-1000). The samples were sent for DNA sequencing to the Teagasc Food Research Centre, Ireland. Paired-end sequencing libraries were prepared from the extracted DNA using the Illumina Nextera XT Library Preparation Kit (Illumina Inc., San Diego, CA), followed by sequencing on the Illumina NextSeq 500 platform using high-output chemistry (2 × 150 bp) according to the manufacturer's instructions.
All bioinformatics and statistical analyses of the metagenome datasets were conducted with custom Bash, R, and Perl scripts using existing software and algorithms (see below).
Quality filtering
Quality filtering of the metagenome datasets was carried out in several steps to remove sequencing adapters (cutadapt v. 1.1246), low-quality sequences with quality scores < 33 (fastx toolkit v. 0.0.14), reads mapped to the host genome (pig, NCBI accession no. NC 010443) (DeconSeq v. 0.4.347), and finally to remove any sequences mapped to the PhiX174 genome (NCBI accession no. NC 001422) (DeconSeq v. 0.4.347).
Resistome annotation and comparison
To quantify the abundances of ARGs, 42 quality-filtered metagenomes were used for similarity searches against the hand-curated antimicrobial resistance database MEGARes48 by using USEARCH (v10)49. Containing the sequences of approximately 4,000 ARGs, the MEGARes database is based on a nonredundant compilation of sequences contained in ResFinder (November 2015), ARG-ANNOT (November 2015), the Comprehensive Antibiotic Resistance Database (CARD, v1.0.7), and the National Center for Biotechnology Information (NCBI) Lahey Clinic beta-lactamase archive (December 2015)48.
High-confidence matches to sequences in the MEGARes database were obtained by considering the entire coverage of the query reads against the ARG sequences, with an identity threshold of 90% (parameters were set as "-usearch-global -id 0.9, maxaccepts 1, threads 50"), as suggested elsewhere17. For each antibiotic resistance determinant (ARD), the total number of aligned reads was counted, followed by normalization to the length of the respective gene in order to remove possible bias from sequence length variation17. Further, the length-normalized counts were normalized to the number of bacterial 16S rRNA sequences (obtained by employing Metaxa250) divided by the average length of the 16S gene, to yield an approximation of the number of ARGs per bacterial 16S rRNA17 (Equation 1).
$$\mathrm{Abundance}=\sum_{1}^{n}\frac{N_{\mathrm{AMR\text{-}like\ sequence}}/L_{\mathrm{AMR\ reference\ sequence}}}{N_{\mathrm{16S\ sequence}}/L_{\mathrm{16S\ sequence}}}$$
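As a minimal illustration of Equation 1, the normalization can be written in a few lines of R (the language used for the analyses in this study); all object names and numeric values below are hypothetical placeholders rather than data from the study.

# Hypothetical per-sample inputs: aligned reads per ARG, ARG reference
# lengths (bp), 16S rRNA read count and average 16S gene length (bp)
arg_counts <- c(tetQ = 1520, tetW = 840, cfxA = 210)
arg_len <- c(tetQ = 1926, tetW = 1920, cfxA = 966)
n_16s <- 25000
len_16s <- 1540
# Length-normalized ARG counts divided by the length-normalized 16S count,
# i.e. an approximation of ARG copies per bacterial 16S rRNA
abundance <- (arg_counts / arg_len) / (n_16s / len_16s)
round(abundance, 4)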
Taxonomic affiliation
The taxonomic compositions of the metagenome datasets were identified by extracting the bacterial 16S rRNA sequences with Metaxa2 version 2.0 using the default options50,51. Genus assignment of the extracted sequences was carried out using the Metaxa2 curated database, taking into account the reliability score (>80) as well as the similarity threshold (>90% identity with the reference 16S rRNA sequence), and reported as relative abundance based on the total number of 16S rRNA counts in each metagenome sample.
Ordination and log-fold changes in abundance were calculated in R (version 3.3.0). Ordination was performed with log-transformed normalized reads on 2 dimensions with phyloseq's ordinate function using non-metric multidimensional scaling (NMDS) analysis52. On the completed ordination plots, separation between groups was tested with PerMANOVA53. Log-fold changes in the abundances of taxa and ARGs between groups were determined by a negative binomial generalized linear model using DESeq2 version 1.17.1054 in R, with random differences between the treatment groups at the first sampling considered as a covariate term in the model. Accordingly, treatment and sampling day were included as fixed factors, while blocks were considered confounder variables (random factors) in the analysis. These main factors, along with their interactions, were also taken into account to investigate richness (the number of unique taxa or ARGs) and Shannon diversity (the number and relative abundance of unique taxa or ARGs) in each sample using the lme4 package in R55. Statistical significance for differential abundance analysis was considered at FDR-corrected P ≤ 0.05 (where applicable) and shown as the q value. In all statistical analyses, the individual animal/pen was considered the experimental unit.
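To make the workflow concrete, the following is a hedged R sketch of the analyses described above, not the authors' actual scripts; "ps" stands for a hypothetical phyloseq object whose sample metadata contain treatment, day and block columns.

library(phyloseq)
library(vegan)
library(DESeq2)
library(lme4)

# NMDS ordination on Bray-Curtis dissimilarities of log-transformed counts
ps_log <- transform_sample_counts(ps, function(x) log1p(x))
ord <- ordinate(ps_log, method = "NMDS", distance = "bray")
plot_ordination(ps_log, ord, color = "treatment", shape = "day")

# PerMANOVA testing separation between groups on the same distances
meta <- data.frame(sample_data(ps))
adonis2(phyloseq::distance(ps_log, method = "bray") ~ treatment * day, data = meta)

# Negative binomial GLM (DESeq2) for log-fold changes, block as covariate
dds <- phyloseq_to_deseq2(ps, ~ block + day + treatment)
dds <- DESeq(dds)
res <- results(dds, alpha = 0.05)  # padj plays the role of the reported q values

# Linear mixed-effects model for alpha diversity, block as random factor
div <- cbind(estimate_richness(ps, measures = "Shannon"), meta)
fit <- lmer(Shannon ~ treatment * day + (1 | block), data = div)
summary(fit)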
The data are deposited in the NCBI Short Read Archive under BioSamples SAMN09209536- SAMN09209575, which are affiliated with BioProject PRJNA471402.
Institute, A. H. Additives and their Uses. (Animal Health Institute, Bloomington, MN, 2012).
Sun, J. et al. Comparison of Fecal Microbial Composition and Antibiotic Resistance Genes from Swine, Farm Workers and the Surrounding Villagers. Scientific Reports 7, 4965 (2017).
Van Boeckel, T. P. et al. Global trends in antimicrobial use in food animals. Proceedings of the National Academy of Sciences 112, 5649–5654, https://doi.org/10.1073/pnas.1503141112 (2015).
Looft, T., Allen, H. K., Casey, T. A., Alt, D. P. & Stanton, T. B. Carbadox has both temporary and lasting effects on the swine gut microbiota. Front Microbiol 5, 276 (2014).
Allen, H. K., Levine, U. Y., Looft, T., Bandrick, M. & Casey, T. A. Treatment, promotion, commotion: antibiotic alternatives in food-producing animals. Trends Microbiol. 21, 114–119 (2013).
Looft, T. et al. In-feed antibiotic effects on the swine intestinal microbiome. Proc. Natl. Acad. Sci. USA 109, 1691–1696, https://doi.org/10.1073/pnas.1120238109 (2012).
Zhu, Y. G. et al. Diverse and abundant antibiotic resistance genes in Chinese swine farms. Proc. Natl. Acad. Sci. USA 110, 3435–3440 (2013).
Agga, G. E. et al. Effects of chlortetracycline and copper supplementation on antimicrobial resistance of fecal Escherichia coli from weaned pigs. Prev. Vet. Med. 114, 231–246 (2014).
Thakur, S. & Gebreyes, W. A. Prevalence and antimicrobial resistance of Campylobacter in antimicrobial-free and conventional pig production systems. J. Food Prot. 68, 2402–2410 (2005).
Keelara, S. et al. Longitudinal Study of Distributions of Similar Antimicrobial-Resistant Salmonella Serovars in Pigs and Their Environment in Two Distinct Swine Production Systems. Appl. Environ. Microbiol. 79, 5167–5178 (2013).
Granados-Chinchilla, F. & Rodríguez, C. Tetracyclines in Food and Feedingstuffs: From Regulation to Analytical Methods, Bacterial Resistance, and Environmental and Health Implications. Journal of Analytical Methods in Chemistry 2017, 1315497 (2017).
Chopra, I. & Roberts, M. Tetracycline Antibiotics: Mode of Action, Applications, Molecular Biology, and Epidemiology of Bacterial Resistance. Microbiol. Mol. Biol. Rev. 65, 232–260 (2001).
Browne, H. P. et al. Culturing of 'unculturable' human microbiota reveals novel taxa and extensive sporulation. Nature 533, 543 (2016).
Union, E. In Official Journal of the European Union L 268, 29–43 (2003).
Maron, D. F., Smith, T. J. & Nachman, K. E. Restrictions on antimicrobial use in food animal production: an international regulatory and economic survey. Global Health 9, 48 (2013).
Koskella, B., Hall, L. J. & Metcalf, C. J. E. The microbiome beyond the horizon of ecological and evolutionary theory. Nat Ecol Evol (2017).
Pal, C., Bengtsson-Palme, J., Kristiansson, E. & Larsson, D. G. J. The structure and diversity of human, animal and environmental resistomes. Microbiome 4, 54 (2016).
Xiong, W. et al. Antibiotic-mediated changes in the fecal microbiome of broiler chickens define the incidence of antibiotic resistance genes. Microbiome 6, 34, https://doi.org/10.1186/s40168-018-0419-2 (2018).
Noyes, N. R. et al. Resistome diversity in cattle and the environment decreases during beef production. Elife 5, e13195 (2016).
Connell, S. R., Tracz, D. M., Nierhaus, K. H. & Taylor, D. E. Ribosomal Protection Proteins and Their Mechanism of Tetracycline Resistance. Antimicrob. Agents Chemother. 47, 3675–3681 (2003).
Leng, Z., Riley, D. E., Berger, R. E., Krieger, J. N. & Roberts, M. C. Distribution and mobility of the tetracycline resistance determinant tetQ. J. Antimicrob. Chemother. 40, 551–559 (1997).
Partridge, S. R., Kwong, S. M., Firth, N. & Jensen, S. O. Mobile Genetic Elements Associated with Antimicrobial Resistance. Clin. Microbiol. Rev. 31, e00088–00017, https://doi.org/10.1128/cmr.00088-17 (2018).
Zhang, Q., Widmer, G. & Tzipori, S. A pig model of the human gastrointestinal tract. Gut Microbes 4, 193–200 (2013).
Kim, H. B. et al. Longitudinal investigation of the age-related bacterial diversity in the feces of commercial pigs. Vet. Microbiol. 153, 124–133 (2011).
Schokker, D. et al. Long-Lasting Effects of Early-Life Antibiotic Treatment and Routine Animal Handling on Gut Microbiota Composition and Immune System in Pigs. PLOS ONE 10, e0116523 (2015).
Kraler, M., Ghanbari, M., Domig, K. J., Schedle, K. & Kneifel, W. The intestinal microbiota of piglets fed with wheat bran variants as characterised by 16S rRNA next-generation amplicon sequencing. Arch. Anim. Nutr. 70, 173–189 (2016).
Kim, H. B. et al. Microbial shifts in the swine distal gut in response to the treatment with antimicrobial growth promoter, tylosin. Proc. Natl. Acad. Sci. USA 109, 15485–15490 (2012).
Kim, H. B. & Isaacson, R. E. The pig gut microbial diversity: Understanding the pig gut microbial ecology through the next generation high throughput sequencing. Vet. Microbiol. 177, 242–251 (2015).
Pérez-Cobas, A. E. et al. Differential effects of antibiotic therapy on the structure and function of human gut microbiota. PloS one 8, e80201–e80201, https://doi.org/10.1371/journal.pone.0080201 (2013).
Thymann, T. et al. Antimicrobial treatment reduces intestinal microflora and improves protein digestive capacity without changes in villous structure in weanling pigs. Br. J. Nutr. 97, 1128–1137, https://doi.org/10.1017/s0007114507691910 (2007).
Li, B. et al. Metagenomic and network analysis reveal wide distribution and co-occurrence of environmental antibiotic resistance genes. The Isme Journal 9, 2490 (2015).
Holman, D. B. & Chenier, M. R. Temporal changes and the effect of subtherapeutic concentrations of antibiotics in the gut microbiota of swine. FEMS Microbiol. Ecol. 90, 599–608 (2014).
De Vadder, F. et al. Microbiota-generated metabolites promote metabolic benefits via gut-brain neural circuits. Cell 156, 84–96 (2014).
Ou, J. et al. Diet, microbiota, and microbial metabolites in colon cancer risk in rural Africans and African Americans. The American Journal of Clinical Nutrition 98, 111–120 (2013).
Jalanka-Tuovinen, J. et al. Faecal microbiota composition and host-microbe cross-talk following gastroenteritis and in postinfectious irritable bowel syndrome. Gut 63, 1737–1745 (2014).
Boente, R. F. et al. Detection of resistance genes and susceptibility patterns in Bacteroides and Parabacteroides strains. Anaerobe 16, 190–194 (2010).
Nakano, V. et al. Antimicrobial resistance and prevalence of resistance genes in intestinal Bacteroidales strains. Clinics 66, 543–547 (2011).
Forslund, K. et al. Country-specific antibiotic use practices impact the human gut resistome. Genome Res. 23, 1163–1169 (2013).
Zhou, Z. C. et al. Antibiotic resistance genes in an urban river as impacted by bacterial community and physicochemical parameters. Environ. Sci. Pollut. Res. Int. 24, 23753–23762 (2017).
Galán, J. C., Reig, M., Navas, A., Baquero, F. & Blázquez, J. ACI-1 from Acidaminococcus fermentans: Characterization of the First β-Lactamase in Anaerobic Cocci. Antimicrob. Agents Chemother. 44, 3144–3149 (2000).
Wang, H. H. & Schaffner, D. W. Antibiotic Resistance: How Much Do We Know and Where Do We Go from Here? Appl. Environ. Microbiol. 77, 7093–7095 (2011).
Allen, H. K. et al. Antibiotics in feed induce prophages in swine fecal microbiomes. MBio 2 (2011).
Enault, F. et al. Phages rarely encode antibiotic resistance genes: a cautionary tale for virome analyses. Isme j 11, 237–247, https://doi.org/10.1038/ismej.2016.90 (2017).
Shah, K. R. & Sinha, B. K. In Handbook of Statistics Vol. 13 903–937 (Elsevier, 1996).
Hart, M. L., Meyer, A., Johnson, P. J. & Ericsson, A. C. Comparative Evaluation of DNA Extraction Methods from Feces of Multiple Host Species for Downstream Next-Generation Sequencing. PLoS One 10, e0143334, https://doi.org/10.1371/journal.pone.0143334 (2015).
Martin, M. Cutadapt removes adapter sequences from high-throughput sequencing reads. EMBnet. journal 17, 10–12 (2011).
Schmieder, R. & Edwards, R. Fast identification and removal of sequence contamination from genomic and metagenomic datasets. PLoS One 6, e17288 (2011).
Lakin, S. M. et al. MEGARes: an antimicrobial resistance database for high throughput sequencing. Nucleic Acids Res. 45, D574–D580 (2017).
Edgar, R. C. & Flyvbjerg, H. Error filtering, pair assembly and error correction for next-generation sequencing reads. Bioinformatics 31, 3476–3482 (2015).
Bengtsson-Palme, J. et al. metaxa2: improved identification and taxonomic classification of small and large subunit rRNA in metagenomic data. Molecular Ecology Resources 15, 1403–1414 (2015).
Bengtsson-Palme, J., Thorell, K., Wurzbacher, C., Sjöling, Å. & Nilsson, R. H. Metaxa2 Diversity Tools: Easing microbial community analysis with Metaxa2. Ecological Informatics 33, 45–50 (2016).
McMurdie, P. J. & Holmes, S. phyloseq: An R Package for Reproducible Interactive Analysis and Graphics of Microbiome Census Data. PLOS ONE 8, e61217 (2013).
Anderson, M. J. & Walsh, D. C. I. PERMANOVA, ANOSIM, and the Mantel test in the face of heterogeneous dispersions: What null hypothesis are you testing? Ecol. Monogr. 83, 557–574 (2013).
Love, M. I., Huber, W. & Anders, S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biology 15, 550 (2014).
Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software 67, 1–48, https://doi.org/10.18637/jss.v067.i01 (2015).
We thank Dr. Veronika Nagl, Birgit Antlinger, Iris Schantl, Aleksandra Koler, the Center for Applied Animal Nutrition (CAN) staff, Dr. Orla O'Sullivan (Teagasc) and Laura Finnegan (Teagasc) for their outstanding assistance and Dr. Johan Bengtsson-Palme for his helpful advice. This work was supported by the Austrian Research Promotion Agency (FFG) through the project Competence Headquarter (853863 & 859603).
BIOMIN Research Center, Tulln, Austria
Mahdi Ghanbari & Viviana Klose
Teagasc Food Research Centre, Moorepark, Fermoy, Cork, and APC Microbiome Ireland, Cork, Ireland
Fiona Crispie & Paul D. Cotter
M.G. performed the analysis of all samples, interpreted the data and wrote the manuscript. P.C., V.K. and F.C. supervised the development of the work and helped with data interpretation and manuscript evaluation.
Correspondence to Mahdi Ghanbari.
All authors declare no competing financial and non-financial interests, except MG and VK who are employed by BIOMIN Holding GmbH. BIOMIN is involved in natural feed additive development and research in natural alternatives of in-feed medication in livestock production.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary file S1
Ghanbari, M., Klose, V., Crispie, F. et al. The dynamics of the antibiotic resistome in the feces of freshly weaned pigs following therapeutic administration of oxytetracycline. Sci Rep 9, 4062 (2019). https://doi.org/10.1038/s41598-019-40496-8
Binomial Distributions
Problems with Binomial Distributions
Features of Binomial Distributions
Bernoulli Trials and Sequences
Bernoulli mean and variance
Applications of Bernoulli random variables and probabilities
Margins of Error
Level 8 - NCEA Level 3
Bernoulli distribution
When an experiment can have either of two possible outcomes, usually called success and failure, it gives rise to a Bernoulli random variable. We assign the values $X=1$ and $X=0$ to a Bernoulli random variable $X$ according to whether a trial of the experiment results in a success or a failure, and we assign probabilities $p$ and $q$ to the two outcomes.
Thus, we write $P\left(X=1\right)=p$ and $P\left(X=0\right)=q=1-p$.
The expected value or mean of the Bernoulli random variable $X$ may be thought of informally as the average amount of 'success' per trial over a very large number of trials. This is just $p$, and we write $\mu_X=p$ or $E(X)=p$.
If we experiment and calculate the amount of 'success' per trial over just a few trials, we will quite likely obtain a value different from $p$. By doing this repeatedly, we obtain a spread of values centred around the mean, $p$. This spread of values is what is meant by the variance of the random variable $X$. Using the definition of variance, we write $Var(X)=E\left[(X-\mu_X)^2\right]$ and, substituting $\mu_X=p$, evaluate this from the definition as
$$Var(X)=p(1-p)^2+q(0-p)^2=pq^2+qp^2=pq(p+q)=pq$$
Thus, a Bernoulli random variable has mean $\mu_X=p$ and variance $Var\left(X\right)=p\left(1-p\right)$.
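As a quick numerical sanity check (added here; not part of the original lesson), the following Python sketch simulates many Bernoulli trials and compares the sample mean and variance with $p$ and $pq$:

```python
import random
import statistics

p = 0.3
# Simulate 100,000 Bernoulli trials with success probability p.
trials = [1 if random.random() < p else 0 for _ in range(100_000)]

print(statistics.mean(trials))      # close to p = 0.3
print(statistics.variance(trials))  # close to p*(1-p) = 0.21
```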
We are often interested in strings of independent Bernoulli trials. The distinguishing feature of the Binomial distribution is that we are interested in the probability of observing each possible number of successes in a string of Bernoulli trials.
In an experiment involving $n$ trials, there could be anywhere from $0$ to $n$ successes. As $p$ is the long-run proportion of successes over many trials, if the $n$ trials were repeated many times we would expect the number of successes, on average, to be $np$, and this number is the mean of the binomial distribution.
The actual number of observed successes varies about this mean, giving rise to the variance $np\left(1-p\right)$, which you should compare with the variance of the Bernoulli distribution.
Suppose $r$ successes are observed, and $n-r$ failures. We can count the number of ways this outcome can occur, namely $^nC_r$ or, in equivalent notation, $\binom{n}{r}$. From the theory of combinatorics, we know that this is evaluated by $^nC_r=\frac{n!}{r!\left(n-r\right)!}$.
The numbers $^nC_r$ are the same as the coefficients that arise in the expansion of the binomial expression $\left(a+b\right)^n$. Hence, the name binomial distribution.
We can now calculate the probabilities associated with the outcomes of a binomial experiment. The probability of a particular instance of $r$ successes and $n-r$ failures must be $p^r\left(1-p\right)^{n-r}$. But, because there are $^nC_r$ ways in which this outcome can occur, we conclude that
$$P\left(N=r\right)=\binom{n}{r}p^r\left(1-p\right)^{n-r}$$
where $N$ is called a binomial random variable. It takes integer values from $0$ to $n$.
Although it may not be strictly true, we assume for the sake of this example that the occurrence of rain on a given day over a thirty-day period is independent of the weather on the preceding and following days. Suppose that, according to historical records, the probability of rain on any day in April is $0.2$.
The mean number of rainy days in April is $np=30\times0.2=6$. However, in the most recent month of April there were $10$ rainy days. The variance is $np\left(1-p\right)=30\times0.2\times0.8=4.8$, and we might wonder how unlikely it is to get a number of rainy days this far or further away from the mean.
The probability of getting exactly the mean number of rainy days is $\binom{30}{6}\times0.2^6\times0.8^{24}=0.179$ to three decimal places.
The probability of getting exactly ten days of rain is $\binom{30}{10}\times0.2^{10}\times0.8^{20}=0.035$ to three decimal places.
We could calculate the probability of observing at least $10$ days of rain by first calculating the probabilities of exactly $0,1,2,\ldots,9$ days of rain. The sum of these is the probability of seeing fewer than $10$ days of rain, and the number we want is one minus this amount.
You should work through this calculation to check that the probability of observing $10$ or more rainy days is $0.061$ to three decimal places. So, the observed event is not easily explained as a random fluctuation.
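The binomial arithmetic above is easy to check numerically. The following short sketch (added for verification; not part of the original lesson) reproduces the three quoted values:

```python
from math import comb

def binom_pmf(r, n=30, p=0.2):
    """P(exactly r rainy days out of n, with rain probability p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

print(round(binom_pmf(6), 3))   # 0.179: exactly the mean number of rainy days
print(round(binom_pmf(10), 3))  # 0.035: exactly ten rainy days
# P(10 or more) = 1 - P(fewer than 10):
print(round(1 - sum(binom_pmf(r) for r in range(10)), 3))  # 0.061
```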
Worked Examples
Find the value of $^5C_4\times\left(0.1\right)^4\times0.9+{}^5C_5\times\left(0.1\right)^5\times\left(0.9\right)^0$.
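For reference, the expression evaluates (worked here, since the original page's interactive solution is not reproduced) as:
$$5\times0.0001\times0.9+1\times0.00001\times1=0.00045+0.00001=0.00046$$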
Census data show that $80\%$ of the population in a particular country have brown eyes.
A random sample of $900$ people is selected from the population.
What is the mean number of people in the sample who have brown eyes?
What is the standard deviation of the number of people in the sample who have brown eyes?
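Both quantities follow directly from the binomial formulas above; these worked values are added for checking and are not part of the original exercise page:
$$\mu=np=900\times0.8=720,\qquad\sigma=\sqrt{np(1-p)}=\sqrt{900\times0.8\times0.2}=\sqrt{144}=12$$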
Investigate situations that involve elements of chance: (A) calculating probabilities of independent, combined, and conditional events; (B) calculating and interpreting expected values and standard deviations of discrete random variables; (C) applying distributions such as the Poisson, binomial, and normal.
Apply probability distributions in solving problems
Limiting behavior of unstable manifolds for spdes in varying phase spaces
Lin Shi 1,*, Dingshi Li 1 and Kening Lu 2
School of Mathematics, Southwest Jiaotong University, Chengdu 610031, China
Department of Mathematics, Brigham Young University, Provo, Utah 84602, USA
* Corresponding author: Lin Shi
Received June 2020 Published January 2021
Fund Project: The first author is supported by the National Natural Science Foundation of China (Nos. 11701475, 12071384, 11971394 and 11971330)
In this paper, we study a class of singularly perturbed stochastic partial differential equations in terms of the phase spaces. We establish the smooth convergence of unstable manifolds of these equations. As an example, we study the stochastic reaction-diffusion equations on thin domains.
Keywords: Stochastic partial differential equations, random dynamical systems, unstable manifolds, thin domains.
Mathematics Subject Classification: Primary: 37D45, 37C40.
Citation: Lin Shi, Dingshi Li, Kening Lu. Limiting behavior of unstable manifolds for spdes in varying phase spaces. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2021020
Variation on circular lake problem
An escaped prisoner finds himself in the middle of a SQUARE swimming pool. The guard that is chasing him is at one of the corners of the pool. The guard can run faster than the prisoner can swim. The prisoner can run faster than the guard can run. The guard does not swim. Which direction should the prisoner swim in order to maximize the likelihood that he will get away?
calculus puzzle
jaasssoooonnnnn
$\begingroup$ Toward the opposite corner. And the prisoner won't need calculus for that. $\endgroup$ – Ron Gordon Mar 27 '14 at 18:03
$\begingroup$ "Which direction" may not be the best method of escape. In the circular case, an optimal strategy may involve swimming in a J shape or swimming in a spiral until the prisoner's angular velocity no longer exceeds the guard's. The square case is likely similar. $\endgroup$ – user2357112 supports Monica Apr 8 '14 at 0:28
I don't think it's as easy as some are making it out to be. First, draw a sketch; it becomes obvious that you should go in one of two directions. In my sketch, that's either downward (the way I drew it) or rightward (going towards the right edge of the square). Both will produce the same result (it's symmetrical).
The time it takes to get to the side is proportional to the length ($t = \frac{1}{v}d$). The prisoner should go at a certain angle (again look at my sketch), $\theta$. This gives the length to the side (the hypotenuse) as:
$$ l_{swimmer} = \frac{L}{\cos(\theta)} $$
where $L$ is half the length of the square and $\theta$ ranges from $0$ to $\frac{\pi}{4}$ radians ($\theta = 0$ means going straight down and $\theta = \frac{\pi}{4}$ means going towards the opposite corner).
Now, the length that the guard has to run is given by $3L + L\tan(\theta)$: $2L$ from running down one full side, $L$ from running halfway along the next side, and finally the extra length $L\tan(\theta)$ from the prisoner swimming at a diagonal.
So now there is a balance to be had. The longer you swim, the further the guard has to go, but he has more time to go that further distance. The shorter you swim, the shorter the guard has to go, but he has less time to go that distance. If everything were linear, then it wouldn't matter, but since it's not, there may be an optimum angle.
Let's assume that the prisoner swims at $1$ length/time and the guard runs at some value $v$ (so the guard runs $v$ times faster than the prisoner swims). The time it takes the guard to reach where the prisoner exited and the time it takes the prisoner to get to that point are given by:
$$ t_{swimmer} = \frac{1}{\cos(\theta)}L \\ t_{runner} = \frac{3 + \tan(\theta)}{v}L $$
The prisoner escapes when it takes longer for the guard to get to the exit, that is when $t_{runner} > t_{swimmer}$. So we want the largest possible value of $t_{runner} - t_{swimmer}$ (a maximum):
$$ \Delta t = \left(\frac{3 + \tan(\theta)}{v} - \frac{1}{\cos(\theta)}\right)L \\ f(\theta) = \frac{3 + \tan(\theta)}{v} - \frac{1}{\cos(\theta)} \\ f'(\theta) = \frac{\sec^2(\theta)}{v} - \sin(\theta)\sec^2(\theta) = \sec^2(\theta)\left(\frac{1}{v} - \sin(\theta)\right) $$
There are two critical points: 1) when $\sec^2(\theta)$ is undefined, at $\theta = \frac{\pi}{2}$, and 2) when $\sin(\theta) = \frac{1}{v}$. The first is outside of our range (we only want up to $\theta = \frac{\pi}{4}$). The second is a local max, and that's easy to see: $\sin(\theta)$ is initially zero (when $\theta = 0$) and $\sec^2(0) > 0$, so the derivative is initially positive. The sign switches when $\sin(\theta) = \frac{1}{v}$, since afterwards $\sin(\theta) > \frac{1}{v}$. Therefore at this critical point the derivative's sign changes from positive (going up) to negative (going down), so this is a local max and likely the global max.
It's still informative to write out the boundary points anyway:
$$ f(0) = \frac{3}{v} - 1 = \frac{3 - v}{v} \\ f\left(\frac{\pi}{4}\right) = \frac{3 + 1}{v} - \sqrt{2} = \frac{4 - v\sqrt{2}}{v} $$
Notice that $\lim_{v \rightarrow \infty} \theta_C = 0$ (since $\sin(\theta_C) = \frac{1}{v}$ goes to zero, so does the optimal angle). Also note that the maximum guard speed such that the prisoner escapes by swimming straight to the bottom edge is given by $3 - v > 0 \rightarrow v < 3$, whereas the maximum guard speed such that the prisoner can get away by swimming to the opposite corner is $4 - v\sqrt{2} > 0 \rightarrow v < \frac{4}{\sqrt{2}} \approx 2.83 < 3$.
The prisoner could just assume the guard runs at about 3 times the speed he can swim (if the guard runs much faster, the prisoner is doomed no matter what...see edit below) and thus he could head towards the bottom at an angle with the vertical of:
$$ \sin(\theta_C) = \frac{1}{3} \rightarrow \theta_C \approx 19.47^\circ $$
edit: boundary case
There is a speed for which the optimal path is to go to the corner and that's when the guard runs slow. In fact, you can basically tell it's just when $\theta_C > \frac{\pi}{4}$ which occurs when $\sin(\theta_C) > \frac{1}{\sqrt{2}} \rightarrow \frac{1}{v} > \frac{1}{\sqrt{2}} \rightarrow v < \sqrt{2}$. So when the guard runs slower than $\sqrt{2}$ times as fast as the prisoner swims, you should just go to the opposite corner (although you would still get out if you went straight to the downward edge or at the above "optimum" angle).
edit: finding maximum speed of guard
The prisoner can get away if the guard goes faster than 3 times the speed the prisoner can swim. To find this, you would need to plug in the value of $\theta_C$ into $f(\theta)$ and set it equal to zero (at this optimum angle). So that at an optimum angle the guard and prisoner meet:
$$ \sin(\theta_C) = \frac{1}{v}, \cos(\theta_C) = \frac{\sqrt{v^2 - 1}}{v}, \tan(\theta_C) = \frac{1}{\sqrt{v^2 - 1}} \\ f(\theta_C) = \frac{3 + \frac{1}{\sqrt{v^2 - 1}}}{v} - \frac{v}{\sqrt{v^2 - 1}} = \frac{3\sqrt{v^2 - 1}+1-v^2}{v\sqrt{v^2 - 1}} \\ 3\sqrt{v^2 - 1} + 1 - v^2 = 0 \rightarrow 3\sqrt{v^2 - 1} = v^2 - 1 \rightarrow \frac{3}{\sqrt{v^2 - 1}} = 1 \\ 9 = v^2 - 1 \rightarrow v = \sqrt{10} \approx 3.16 > 3 $$
So if the guard runs faster than about $3.16$ times the speed that the prisoner swims, then the prisoner cannot escape. And at this maximum speed, the optimum (and only) angle of escape would be:
$$ \sin(\theta) = \frac{1}{\sqrt{10}} \rightarrow \theta \approx 18.43^\circ $$
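For readers who want to check these numbers, here is a small Python sketch (added here, not part of the original answer) that verifies the optimal angle for $v=3$ and the maximum guard speed $\sqrt{10}$:

```python
import math

def margin(theta, v):
    """Time margin (in units of L): guard's travel time minus the swimmer's."""
    return (3 + math.tan(theta)) / v - 1 / math.cos(theta)

v = 3.0
theta_c = math.asin(1 / v)    # optimal angle satisfies sin(theta) = 1/v
print(math.degrees(theta_c))  # ~19.47 degrees
print(margin(theta_c, v))     # ~0.057 > 0, so the prisoner escapes

# The escape condition rearranges to v < sin(theta) + 3*cos(theta);
# scan [0, pi/4] to confirm the maximum of that expression is sqrt(10).
best = max(math.sin(i * math.pi / 40000) + 3 * math.cos(i * math.pi / 40000)
           for i in range(10001))
print(best, math.sqrt(10))    # both ~3.1623
```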
A Simpler Explanation
Rearranging the escape condition $t_{runner}>t_{swimmer}$ gives $v<\sin(\theta)+3\cos(\theta)$, so the largest possible $v$ value comes when $\sin(\theta) + 3\cos(\theta)$ achieves its maximum value, namely its amplitude: $v_{max} = \sqrt{1^2 + 3^2} = \sqrt{10}$. We can find this value $\theta_C$ by either rewriting $\sin(\theta) + 3\cos(\theta) = \sqrt{10}\cos(\theta - \theta_C)$ or by simply finding critical points from the derivative:
$$ f'(\theta) = \cos(\theta) - 3\sin(\theta) \\ \frac{\sin(\theta_C)}{\cos(\theta_C)} = \tan(\theta_C) = \frac{1}{3} $$
This is not a special right triangle so simply find:
$$ \theta_C = \tan^{-1}\left(\frac{1}{3}\right) \approx 18.43^\circ $$
Note that it's important that $\theta_C$ fall between $-45^\circ$ and $+45^\circ$ since these are the only angles where our figure is valid.
Jared
Assume that some pre-positioning is allowed. Assume that the square pool has a side $L$, and that the guard runs with unit velocity, while the swimmer swims with velocity $v$, $0<v<1$.
The swimmer, by mirroring any motion of the guard, can always position himself opposite the guard, while moving farther away from the center of the pool. That is, until he (the swimmer) finds himself on the edge of a square, with sides $Lv$, concentric with the larger square pool. Once this position is reached, all the swimmer's efforts are needed to mirror the guard, with no "extra" velocity to get any further away.
At this point, the swimmer should head straight towards the nearest edge. This will be a distance $\frac{L(1-v)}{2}$ away, covered at velocity $v$, while the guard, no matter which way he turns, will have to run $2L$ at unit velocity to get to the swimmer's exit point. For escape, the time for the guard to reach the exit point must be greater than the time for the swimmer to get there:$$\frac{2L}{1} > \frac{L(1-v)}{2v}$$ $$4v>1-v$$ $$v>0.2$$ So escape is possible for any guard slower than $5\times$ the swimmer's speed.
DJohnM
The prisoner can always swim toward the point "diametrically opposed" to the guard's location. The point the prisoner will get out of the pool is then twice the side away from the guard. This only fails if the guard can run fast enough so that he keeps the prisoner from getting to the wall. To do that, the guard needs to run two sides before the prisoner swims half a side, so a factor of 4.
Ross Millikan
$\begingroup$ Half a diagonal, no? The guard is at a vertex of the square. $\endgroup$ – colormegone Mar 27 '14 at 18:08
$\begingroup$ @RecklessReckoner: I am presuming that the guard starts running toward the corner one way or the other. As the guard traverses the side, the swimmer can change course, always pointing away from the guard. So when the guard gets to the middle of a side, the swimmer can swim only half a side to get to the edge. $\endgroup$ – Ross Millikan Mar 27 '14 at 18:11
$\begingroup$ Then we are reading the problem statement differently. I agree that the strategy is to see which side the guard decides to run along, then to swim to the center of the opposite side. $\endgroup$ – colormegone Mar 27 '14 at 18:12
$\begingroup$ Most likely, and I'm not certain, but if the prisoner cannot swim straight out in one direction, then he probably cannot get out without being caught. So changing directions is just kind of like when two kids chase each other around a table--they can just keep avoiding each other, but they cannot escape. $\endgroup$ – Jared Mar 27 '14 at 19:01
$\begingroup$ If you want to find the optimal path the swimmer should take, then this becomes a Calculus of Variations problem and I think a difficult one at that (although, again, I haven't really thought about that). I have just assumed the swimmer goes in a straight line. $\endgroup$ – Jared Mar 27 '14 at 19:03
Physics > Classical Physics
[Submitted on 14 Oct 2021 (v1), last revised 13 Jan 2022 (this version, v10)]
Title: Lorentz-equivariant flow with four delays of neutral type
Authors: Jayme De Luca
Abstract: We generalize electrodynamics with a second interaction in the lightcone. The time-reversible equations for two-body motion define a semiflow in $C^2(\mathbb{R})$ with four state-dependent delays of neutral type and nonlinear gyroscopic terms. Furthermore, if the initial segment includes velocity discontinuities, their propagation requires two energetic Weierstrass-Erdmann continuity conditions as the constraints defining the boundary-layer neighborhoods of large velocities and small denominators. Finally, we discuss the motion restricted to a straight line and to a fixed one-dimensional segment with vanishing accelerations.
Comments: Inventory of post-referee changes: (a) replaced Fig. 3, (b) added sections 7C and 7D discussing serrated accelerations, (c) discussed a simple neutron model with vanishing far-fields and constant velocities reflected by photonic kicks, (d) removed the term "inversion layer" and gave more details about the fixed segment
Subjects: Classical Physics (physics.class-ph); Classical Analysis and ODEs (math.CA)
Cite as: arXiv:2110.07338 [physics.class-ph]
(or arXiv:2110.07338v10 [physics.class-ph] for this version)
From: Jayme Vicente De Luca [view email]
[v1] Thu, 14 Oct 2021 16:27:40 UTC (90 KB)
[v2] Tue, 19 Oct 2021 14:32:01 UTC (91 KB)
[v3] Mon, 25 Oct 2021 21:41:07 UTC (92 KB)
[v4] Wed, 3 Nov 2021 16:39:09 UTC (93 KB)
[v5] Mon, 8 Nov 2021 20:30:48 UTC (92 KB)
[v6] Sun, 28 Nov 2021 21:01:32 UTC (92 KB)
[v7] Wed, 1 Dec 2021 13:30:29 UTC (93 KB)
[v8] Sat, 4 Dec 2021 21:25:50 UTC (93 KB)
[v9] Mon, 13 Dec 2021 11:12:25 UTC (93 KB)
[v10] Thu, 13 Jan 2022 12:17:45 UTC (99 KB)
physics.class-ph
math.CA
Clint Talbert and Joel Maher
At Mozilla, one of our very first automation systems was a performance testing framework we dubbed Talos. Talos had been faithfully maintained without substantial modification since its inception in 2007, even though many of the original assumptions and design decisions behind Talos were lost as ownership of the tool changed hands.
In the summer of 2011, we finally began to look askance at the noise and the variation in the Talos numbers, and we began to wonder how we could make some small modification to the system to start improving it. We had no idea we were about to open Pandora's Box.
In this chapter, we will detail what we found as we peeled back layer after layer of this software, what problems we uncovered, and what steps we took to address them in hopes that you might learn from both our mistakes and our successes.
Let's unpack the different parts of Talos. At its heart, Talos is a simple test harness which creates a new Firefox profile, initializes the profile, calibrates the browser, runs a specified test, and finally reports a summary of the test results. The tests live inside the Talos repository and are one of two types: a single page which reports a single number (e.g., startup time via a web page's onload handler) or a collection of pages that are cycled through to measure page load times. Internally, a Firefox extension is used to cycle the pages and collect information such as memory and page load time, to force garbage collection, and to test different browser modes. The original goal was to create as generic a harness as possible to allow the harness to perform all manner of testing and measure some collection of performance attributes as defined by the test itself.
To report its data, the Talos harness can send JSON to Graph Server: an in-house graphing web application that accepts Talos data as long as that data meets a specific, predefined format for each test, value, platform, and configuration. Graph Server also serves as the interface for investigating trends and performance regressions. A local instance of a standard Apache web server serves the pages during a test run.
The final component of Talos is the regression reporting tools. For every check-in to the Firefox repository, several Talos tests are run; these tests upload their data to Graph Server, and another script consumes the data from Graph Server and ascertains whether or not there has been a regression. If a regression is found (i.e., the script's analysis indicates that the code checked in made performance on this test significantly worse), the script emails a message to a mailing list as well as to the individual who checked in the offending code.
While this architecture–summarized in Figure 8.1–seems fairly straightforward, each piece of Talos has morphed over the years as Mozilla has added new platforms, products, and tests. With minimal oversight of the entire system as an end to end solution, Talos wound up in need of some serious work:
Noise: the script watching the incoming data flagged so many spikes of test noise as actual regressions that it was impossible to trust.
To determine a regression, the script compared each check-in to Firefox with the values for three check-ins prior and three afterward. This meant that the Talos results for your check-in might not be available for several hours.
Graph Server had a hard requirement that all incoming data be tied to a previously defined platform, branch, test type, and configuration. This meant that adding new tests was difficult as it involved running a SQL statement against the database for each new test.
The Talos harness itself was hard to run because it took its requirement to be generic a little too seriously–it had a "configure" step to generate a configuration script that it would then use to run the test in its next step.
Figure 8.1 - Talos architecture
While hacking on the Talos harness in the summer of 2011 to add support for new platforms and tests, we encountered the results from Jan Larres's master's thesis, in which he investigated the large amounts of noise that appeared in the Talos tests. He analyzed various factors including hardware, the operating system, the file system, drivers, and Firefox that might influence the results of a Talos test. Building on that work, Stephen Lewchuk devoted his internship to trying to statistically reduce the noise we saw in those tests.
Based on their work and interest, we began forming a plan to eliminate or reduce the noise in the Talos tests. We brought together harness hackers to work on the harness itself, web developers to update Graph Server, and statisticians to determine the optimal way to run each test to produce predictable results with minimal noise.
Understanding What You Are Measuring
When doing performance testing, it is important to have useful tests which provide value to the developers of the product and help customers to see how this product will perform under certain conditions. It is also important to have a repeatable environment so you can reproduce results as needed. But, what is most important is understanding what tests you have and what you measure from those tests.
A few weeks into our project, we had all been learning more about the entire system and started experimenting with various parameters to run the tests differently. One recurring question was "what do the numbers mean?" This was not easily answered. Many of the tests had been around for years, with little to no documentation.
Worse yet, it was not possible to produce the same results locally that were reported from an automated test run. It became evident that the harness itself performed calculations (it would drop the highest value per page, then report the average for the rest of the cycles) and that Graph Server did as well (drop the highest page value, then average the pages together). The end result was that no historical data existed that could provide much value, nor did anybody understand the tests we were running.
We did have some knowledge about one particular test. We knew that this test took the top 100 websites snapshotted in time and loaded each page one at a time, repeating 10 times. Talos loaded the page, waited for the mozAfterPaint event (a standard event which is fired when Firefox has painted the canvas for the webpage), and then recorded the time from loading the page to receiving this event. Looking at the 1,000 data points produced from a single test run, there was no obvious pattern. Imagine boiling those 1,000 points down to a single number and tracking that number over time. What if we made CSS parsing faster, but image loading slower? How would we detect that? Would it be possible to see page 17 slow down if all 99 other pages remained the same? To showcase how the values were calculated in the original version of Talos, consider the following numbers.
For the following page load values:
Page 1: 570, 572, 600, 503, 560
Page 3: 1220, 980, 1000, 1100, 1200
First, the Talos harness itself would drop the first value and calculate the median:
Page 1: 565.5
Page 2: 675
Page 3: 1050
These values would be submitted to Graph Server. Graph Server would drop the highest value and calculate the mean using these per page values and it would report that one value:
$$ \frac{565.5 + 675}{2} = 620.25 $$
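For concreteness, here is a minimal Python sketch of that old two-stage summary (an illustration written against the example above, not the harness's actual code):

```python
import statistics

def harness_summary(page_loads):
    """Old harness step: drop a page's first load, then take the median."""
    return statistics.median(page_loads[1:])

def graph_server_summary(page_medians):
    """Old Graph Server step: drop the highest per-page median, average the rest."""
    return statistics.mean(sorted(page_medians)[:-1])

# The per-page medians quoted in the text:
print(graph_server_summary([565.5, 675, 1050]))  # -> 620.25
```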
This final value would be graphed over time, and as you can see it generates an approximate value that is not good for anything more than a coarse evaluation of performance. Furthermore, if a regression is detected using a value like this, it would be extremely difficult to work backwards and see which pages caused the regression so that a developer could be directed to a specific issue to fix.
We were determined to prove that we could reduce the noise in the data from this 100 page test. Since the test measured the time to load a page, we first needed to isolate the test from other influences in the system like caching. We changed the test to load the same page over and over again, rather than cycling between pages, so that load times were measured for a page that was mostly cached. While this approach is not indicative of how end users actually browse the web, it reduced some of the noise in the recorded data. Unfortunately, looking at only 10 data points for a given page was not a useful sample size.
By varying our sample size and measuring the standard deviation of the page load values from many test runs, we determined that noise was reduced if we loaded a page at least 20 times. After much experimentation, this method found a sweet spot with 25 loads and ignoring the first 5 loads. In other words, by reviewing the standard deviation of the values of multiple page loads, we found that 95% of our noisy results occurred within the first five loads. Even though we do not use those first 5 data points, we do store them so that we can change our statistical calculations in the future if we wish.
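Here is a sketch of the per-page summary these findings imply; the function and the use of the median here are illustrative assumptions, not the harness's actual implementation:

```python
import statistics

def page_statistics(loads, warmup=5):
    """Summarize one page's 25 recorded loads, ignoring the first `warmup`
    loads, where roughly 95% of the noise was observed."""
    useful = loads[warmup:]
    return statistics.median(useful), statistics.stdev(useful)
```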
All this experimentation led us to some new requirements for the data collection that Talos was performing:
All data collected needs to be stored in the database, not just averages of averages.
A test must collect at least 20 useful data points per test (in this case, per page).
To avoid masking regressions in one page by improvements in another page, each page must be calculated independently. No more averaging values across pages.
Each test that is run needs to have a developer who owns the test and documentation on what is being collected and why.
At the end of a test, we must be able to detect a regression for any given page at the time of reporting the results.
Applying these new requirements to the entire Talos system was the right thing to do, but with the ecosystem that had grown up around Talos it would be a major undertaking to switch to this new model. We had a decision to make as to whether we would refactor or rewrite the system.
Rewrite vs. Refactor
Given our research into what had to change on Talos, we knew we would be making some drastic changes. However, all historical changes to Talos at Mozilla had always suffered from a fear of "breaking the numbers." The many pieces of Talos were constructed over the years by well-intentioned contributors whose additions made sense at the time, but without documentation or oversight into the direction of the tool chain, it had become a patchwork of code that was not easy to test, modify, or understand.
Given our fear of the undocumented dark matter in the code base, combined with the issue that we would need to verify our new measurements against the old measurements, we began a refactoring effort to modify Talos and Graph Server in place. However, it was quickly evident that without a massive re-architecture of the database schema, the Graph Server system would never be able to ingest the full set of raw data from the performance tests. Additionally, we had no clean way to apply our newly-researched statistical methods to Graph Server's backend. Therefore, we decided to rewrite Graph Server from scratch, creating a project called Datazilla. This was not a decision made lightly, as other open source projects had forked the Graph Server code base for their own performance automation. On the Talos harness side of the equation, we also built a prototype from scratch. We even had a working prototype that ran a simple test and was about 2000 lines of code lighter.
While we rewrote Graph Server from scratch, we were worried about moving ahead with our new Talos test runner prototype. Our fear was that we might lose the ability to run the numbers "the old way" so that we could compare the new approach with the old. So, we abandoned our prototype and modified the Talos harness itself piecemeal to transform it into a data generator while leaving the existing pieces that performed averages to upload to the old Graph Server system. This was a singularly bad decision. We should have built a separate harness and then compared the new harness with the old one.
Trying to support the original flow of data and the new method for measuring data for each page proved to be difficult. On the positive side, it forced us to restructure much of the code internal to the framework and to streamline quite a few things. But, we had to do all this piecemeal on a running piece of automation, which caused us several headaches in our continuous integration rigs.
It would have been far better to develop both Talos the framework and Datazilla its reporting system in parallel from scratch, leaving all of the old code behind. Especially when it came to staging, it would have been far easier to stage the new system without attempting to wire in the generation of development data for the upcoming Datazilla system in running automation. We had thought it was necessary to do this so that we could generate test data with real builds and real load to ensure that our design would scale properly. In the end, that build data was not worth the complexity of modifying a production system. If we had known at the time that we were embarking on a year long project instead of our projected six month project, we would have rewritten Talos and the results framework from scratch.
Creating a Performance Culture
Because Mozilla is an open source project, we need to embrace the ideas and criticisms of other individuals and projects. There is no director of development saying how things will work. In order to get the most information possible and make the right decision, it was a requirement to pull in many people from many different teams. The project started off with two developers on the Talos framework, two on Datazilla/Graph Server, and two statisticians on loan from our metrics team. We opened up this project to our volunteers from the beginning and pulled in many fresh faces to Mozilla as well as others who used Graph Server and some Talos tests for their own projects. As we worked together, slowly understanding what permutations of test runs would give us less noisy results, we reached out to include several Mozilla developers in the project. Our first meetings with them were understandably rocky, due to the large changes we were proposing to make. The mystery of "Talos" was making this a hard sell for many developers who cared a lot about performance.
The important message that took a while to settle in was why rewriting large components of the system was a good idea, and why we couldn't simply "fix it in place." The most common feedback was to make a few small changes to the existing system, but everyone making that suggestion had no idea how the underlying system worked. We gave many presentations, invited many people to our meetings, held special one-off meetings, blogged, posted, tweeted, etc. We did everything we could to get the word out, because the only thing more horrible than doing all this work to create a better system would be to do all the work and have no one use it.
It has been a year since our first review of the Talos noise problem. Developers are looking forward to what we are releasing. The Talos framework has been refactored so that it has a clear internal structure and so that it can simultaneously report to Datazilla and the old Graph Server. We have verified that Datazilla can handle the scale of data we are throwing at it (1 TB of data per six months) and have vetted our metrics for calculation results. Most excitingly, we have found a way to deliver a regression/improvement analysis in real time on a per-change basis to the Mozilla trees, which is a big win for developers.
So, now when someone pushes a change to Firefox, here is what Talos does:
Talos collects 25 data points for each page.
All of those numbers are uploaded to Datazilla.
Datazilla performs the statistical analysis after dropping the first five data points. (95% of noise is found in the first 5 data points.)
A Welch's t-test is then used to analyze the numbers and detect if there are any outliers in the per-page data as compared to previous trends from previous pushes [1].
All results of the t-test analysis are then pushed through a False Discovery Rate filter, which ensures that Datazilla can detect any false positives that are simply due to noise [2].
Finally, if the results are within our tolerance, Datazilla runs the results through an exponential smoothing algorithm to generate a new trend line [3]. If the results are not within our tolerance, they do not form a new trend line and the page is marked as a failure.
We determine overall pass/fail metrics based on the percentage of pages passing. 95% passing is a "pass".
The results come back to the Talos harness in real time, and Talos can then report to the build script whether or not there is a performance regression. All of this takes place with 10-20 Talos runs completing every minute (hence the 1 TB of data) while updating the calculations and stored statistics at the same time.
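To make step 4 concrete, here is a minimal, self-contained sketch of a Welch's t-test on two sets of page load times. It is an illustration only; Datazilla's actual implementation lives in the dzmetrics code linked in the footnotes, and the sample numbers below are hypothetical:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two samples with unequal variances."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b))

# Hypothetical per-page load times (ms): 20 useful loads from the trend
# versus 20 useful loads from the new push.
trend = [560, 572, 566, 570, 575, 561, 568, 572, 569, 566,
         571, 563, 574, 567, 570, 565, 572, 568, 566, 571]
new = [601, 612, 605, 599, 610, 607, 603, 611, 606, 604,
       609, 602, 613, 605, 608, 600, 611, 607, 604, 606]

print(welch_t(new, trend))  # a large positive t flags a likely regression
```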
Taking this from a working solution to replacing the existing solution requires running both systems side by side for a full release of Firefox. This process ensures that we look at all regressions reported by the original Graph Server and make sure they are real and reported by Datazilla as well. Since Datazilla reports on a per-page basis instead of at the test suite level, there will be some necessary acclimation to the new UI and way we report regressions.
Looking back, it would have been faster to have replaced the old Talos harness up front. By refactoring it, however, Mozilla brought many new contributors into the Talos project. Refactoring has also forced us to understand the tests better, which has translated into fixing a lot of broken tests and turning off tests with little to no value. So, when considering whether to rewrite or refactor, total time expended is not the only metric to review.
In the last year, we dug into every part of performance testing automation at Mozilla. We have analyzed the test harness, the reporting tools, and the statistical soundness of the results that were being generated. Over the course of that year, we used what we learned to make the Talos framework easier to maintain, easier to run, simpler to set up, easier to test experimental patches with, and less error prone. We have created Datazilla as an extensible system for storing and retrieving all of our performance metrics from Talos and any future performance automation. We have rebooted our performance statistical analysis and created statistically viable, per-push regression/improvement detection. We have made all of these systems easier to use and more open so that any contributor anywhere can take a look at our code and even experiment with new methods of statistical analysis on our performance data. Our constant commitment to reviewing the data again and again at each milestone of the project and our willingness to throw out data that proved inconclusive or invalid helped us retain our focus as we drove this gigantic project forward. Bringing in people from across teams at Mozilla as well as many new volunteers helped lend the effort validity and also helped to establish a resurgence in performance monitoring and data analysis across several areas of Mozilla's efforts, resulting in an even more data-driven, performance-focused culture.
1. https://github.com/mozilla/datazilla/blob/2c369a346fe61072e52b07791492c815fe316291/vendor/dzmetrics/ttest.py
2. https://github.com/mozilla/datazilla/blob/2c369a346fe61072e52b07791492c815fe316291/vendor/dzmetrics/fdr.py
3. https://github.com/mozilla/datazilla/blob/2c369a346fe61072e52b07791492c815fe316291/vendor/dzmetrics/data_smoothing.py